00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2239 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3498 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.121 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/lvol-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.122 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/lvol-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.248 Using shallow fetch with depth 1 00:00:00.248 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.248 > git --version # timeout=10 00:00:00.302 > git --version # 'git version 2.39.2' 00:00:00.302 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.341 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.341 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.247 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.265 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.277 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:08.277 > git config core.sparsecheckout # timeout=10 00:00:08.290 > git read-tree -mu HEAD # timeout=10 00:00:08.310 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:08.328 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:08.328 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:08.425 [Pipeline] Start of Pipeline 00:00:08.440 [Pipeline] library 00:00:08.442 Loading library shm_lib@master 00:00:08.442 Library shm_lib@master is cached. Copying from home. 00:00:08.456 [Pipeline] node 00:00:23.458 Still waiting to schedule task 00:00:23.459 Waiting for next available executor on ‘vagrant-vm-host’ 00:04:45.702 Running on VM-host-SM9 in /var/jenkins/workspace/lvol-vg-autotest 00:04:45.704 [Pipeline] { 00:04:45.716 [Pipeline] catchError 00:04:45.718 [Pipeline] { 00:04:45.729 [Pipeline] wrap 00:04:45.740 [Pipeline] { 00:04:45.751 [Pipeline] stage 00:04:45.753 [Pipeline] { (Prologue) 00:04:45.773 [Pipeline] echo 00:04:45.774 Node: VM-host-SM9 00:04:45.778 [Pipeline] cleanWs 00:04:45.787 [WS-CLEANUP] Deleting project workspace... 00:04:45.787 [WS-CLEANUP] Deferred wipeout is used... 
00:04:45.793 [WS-CLEANUP] done 00:04:46.229 [Pipeline] setCustomBuildProperty 00:04:46.320 [Pipeline] httpRequest 00:04:46.721 [Pipeline] echo 00:04:46.722 Sorcerer 10.211.164.101 is alive 00:04:46.733 [Pipeline] retry 00:04:46.735 [Pipeline] { 00:04:46.751 [Pipeline] httpRequest 00:04:46.755 HttpMethod: GET 00:04:46.756 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:04:46.756 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:04:46.757 Response Code: HTTP/1.1 200 OK 00:04:46.758 Success: Status code 200 is in the accepted range: 200,404 00:04:46.758 Saving response body to /var/jenkins/workspace/lvol-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:04:46.906 [Pipeline] } 00:04:46.924 [Pipeline] // retry 00:04:46.931 [Pipeline] sh 00:04:47.208 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:04:47.219 [Pipeline] httpRequest 00:04:47.616 [Pipeline] echo 00:04:47.617 Sorcerer 10.211.164.101 is alive 00:04:47.626 [Pipeline] retry 00:04:47.628 [Pipeline] { 00:04:47.642 [Pipeline] httpRequest 00:04:47.646 HttpMethod: GET 00:04:47.646 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:04:47.647 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:04:47.648 Response Code: HTTP/1.1 200 OK 00:04:47.648 Success: Status code 200 is in the accepted range: 200,404 00:04:47.649 Saving response body to /var/jenkins/workspace/lvol-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:04:49.835 [Pipeline] } 00:04:49.852 [Pipeline] // retry 00:04:49.858 [Pipeline] sh 00:04:50.133 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:04:54.331 [Pipeline] sh 00:04:54.612 + git -C spdk log --oneline -n5 00:04:54.612 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:04:54.612 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:04:54.612 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:04:54.612 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:04:54.612 9469ea403 nvme/fio_plugin: add trim support 00:04:54.631 [Pipeline] writeFile 00:04:54.647 [Pipeline] sh 00:04:54.928 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:54.940 [Pipeline] sh 00:04:55.219 + cat autorun-spdk.conf 00:04:55.219 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:55.219 SPDK_TEST_LVOL=1 00:04:55.219 SPDK_RUN_ASAN=1 00:04:55.220 SPDK_RUN_UBSAN=1 00:04:55.220 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:55.226 RUN_NIGHTLY=1 00:04:55.229 [Pipeline] } 00:04:55.243 [Pipeline] // stage 00:04:55.260 [Pipeline] stage 00:04:55.262 [Pipeline] { (Run VM) 00:04:55.276 [Pipeline] sh 00:04:55.557 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:55.557 + echo 'Start stage prepare_nvme.sh' 00:04:55.557 Start stage prepare_nvme.sh 00:04:55.557 + [[ -n 5 ]] 00:04:55.557 + disk_prefix=ex5 00:04:55.557 + [[ -n /var/jenkins/workspace/lvol-vg-autotest ]] 00:04:55.557 + [[ -e /var/jenkins/workspace/lvol-vg-autotest/autorun-spdk.conf ]] 00:04:55.557 + source /var/jenkins/workspace/lvol-vg-autotest/autorun-spdk.conf 00:04:55.557 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:55.557 ++ SPDK_TEST_LVOL=1 00:04:55.557 ++ SPDK_RUN_ASAN=1 00:04:55.557 ++ SPDK_RUN_UBSAN=1 00:04:55.557 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:55.557 ++ RUN_NIGHTLY=1 00:04:55.557 + cd 
/var/jenkins/workspace/lvol-vg-autotest 00:04:55.557 + nvme_files=() 00:04:55.557 + declare -A nvme_files 00:04:55.557 + backend_dir=/var/lib/libvirt/images/backends 00:04:55.557 + nvme_files['nvme.img']=5G 00:04:55.557 + nvme_files['nvme-cmb.img']=5G 00:04:55.557 + nvme_files['nvme-multi0.img']=4G 00:04:55.557 + nvme_files['nvme-multi1.img']=4G 00:04:55.557 + nvme_files['nvme-multi2.img']=4G 00:04:55.557 + nvme_files['nvme-openstack.img']=8G 00:04:55.557 + nvme_files['nvme-zns.img']=5G 00:04:55.557 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:55.557 + (( SPDK_TEST_FTL == 1 )) 00:04:55.557 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:55.557 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:55.557 + for nvme in "${!nvme_files[@]}" 00:04:55.557 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:04:55.557 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:55.557 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:04:55.557 + echo 'End stage prepare_nvme.sh' 00:04:55.563 End stage prepare_nvme.sh 00:04:55.570 [Pipeline] sh 00:04:55.851 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:55.851 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:04:55.851 00:04:55.851 DIR=/var/jenkins/workspace/lvol-vg-autotest/spdk/scripts/vagrant 00:04:55.851 
SPDK_DIR=/var/jenkins/workspace/lvol-vg-autotest/spdk 00:04:55.851 VAGRANT_TARGET=/var/jenkins/workspace/lvol-vg-autotest 00:04:55.851 HELP=0 00:04:55.851 DRY_RUN=0 00:04:55.851 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:04:55.851 NVME_DISKS_TYPE=nvme,nvme, 00:04:55.851 NVME_AUTO_CREATE=0 00:04:55.851 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:04:55.851 NVME_CMB=,, 00:04:55.851 NVME_PMR=,, 00:04:55.851 NVME_ZNS=,, 00:04:55.851 NVME_MS=,, 00:04:55.851 NVME_FDP=,, 00:04:55.852 SPDK_VAGRANT_DISTRO=fedora39 00:04:55.852 SPDK_VAGRANT_VMCPU=10 00:04:55.852 SPDK_VAGRANT_VMRAM=12288 00:04:55.852 SPDK_VAGRANT_PROVIDER=libvirt 00:04:55.852 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:55.852 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:55.852 SPDK_OPENSTACK_NETWORK=0 00:04:55.852 VAGRANT_PACKAGE_BOX=0 00:04:55.852 VAGRANTFILE=/var/jenkins/workspace/lvol-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:55.852 FORCE_DISTRO=true 00:04:55.852 VAGRANT_BOX_VERSION= 00:04:55.852 EXTRA_VAGRANTFILES= 00:04:55.852 NIC_MODEL=e1000 00:04:55.852 00:04:55.852 mkdir: created directory '/var/jenkins/workspace/lvol-vg-autotest/fedora39-libvirt' 00:04:55.852 /var/jenkins/workspace/lvol-vg-autotest/fedora39-libvirt /var/jenkins/workspace/lvol-vg-autotest 00:04:59.135 Bringing machine 'default' up with 'libvirt' provider... 00:05:00.069 ==> default: Creating image (snapshot of base box volume). 00:05:00.069 ==> default: Creating domain with the following settings... 00:05:00.069 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727785482_2ed1cb50e568fa71b4ee 00:05:00.069 ==> default: -- Domain type: kvm 00:05:00.069 ==> default: -- Cpus: 10 00:05:00.069 ==> default: -- Feature: acpi 00:05:00.069 ==> default: -- Feature: apic 00:05:00.069 ==> default: -- Feature: pae 00:05:00.069 ==> default: -- Memory: 12288M 00:05:00.069 ==> default: -- Memory Backing: hugepages: 00:05:00.069 ==> default: -- Management MAC: 00:05:00.069 ==> default: -- Loader: 00:05:00.069 ==> default: -- Nvram: 00:05:00.069 ==> default: -- Base box: spdk/fedora39 00:05:00.069 ==> default: -- Storage pool: default 00:05:00.069 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727785482_2ed1cb50e568fa71b4ee.img (20G) 00:05:00.069 ==> default: -- Volume Cache: default 00:05:00.069 ==> default: -- Kernel: 00:05:00.069 ==> default: -- Initrd: 00:05:00.069 ==> default: -- Graphics Type: vnc 00:05:00.070 ==> default: -- Graphics Port: -1 00:05:00.070 ==> default: -- Graphics IP: 127.0.0.1 00:05:00.070 ==> default: -- Graphics Password: Not defined 00:05:00.070 ==> default: -- Video Type: cirrus 00:05:00.070 ==> default: -- Video VRAM: 9216 00:05:00.070 ==> default: -- Sound Type: 00:05:00.070 ==> default: -- Keymap: en-us 00:05:00.070 ==> default: -- TPM Path: 00:05:00.070 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:00.070 ==> default: -- Command line args: 00:05:00.070 ==> default: -> value=-device, 00:05:00.070 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:05:00.070 ==> default: -> value=-drive, 00:05:00.070 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:05:00.070 ==> default: -> value=-device, 00:05:00.070 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:00.070 ==> default: -> value=-device, 00:05:00.070 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:05:00.070 ==> default: -> value=-drive, 00:05:00.070 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:00.070 ==> default: -> value=-device, 00:05:00.070 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:00.070 ==> default: -> value=-drive, 00:05:00.070 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:00.070 ==> default: -> value=-device, 00:05:00.070 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:00.070 ==> default: -> value=-drive, 00:05:00.070 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:00.070 ==> default: -> value=-device, 00:05:00.070 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:00.330 ==> default: Creating shared folders metadata... 00:05:00.330 ==> default: Starting domain. 00:05:01.707 ==> default: Waiting for domain to get an IP address... 00:05:16.587 ==> default: Waiting for SSH to become available... 00:05:17.961 ==> default: Configuring and enabling network interfaces... 00:05:22.143 default: SSH address: 192.168.121.99:22 00:05:22.143 default: SSH username: vagrant 00:05:22.143 default: SSH auth method: private key 00:05:24.043 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/lvol-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:32.280 ==> default: Mounting SSHFS shared folder... 00:05:33.213 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/lvol-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:33.213 ==> default: Checking Mount.. 00:05:34.148 ==> default: Folder Successfully Mounted! 00:05:34.148 ==> default: Running provisioner: file... 00:05:34.714 default: ~/.gitconfig => .gitconfig 00:05:35.283 00:05:35.283 SUCCESS! 00:05:35.283 00:05:35.283 cd to /var/jenkins/workspace/lvol-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:05:35.283 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:35.283 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/lvol-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:05:35.283 00:05:35.292 [Pipeline] } 00:05:35.307 [Pipeline] // stage 00:05:35.317 [Pipeline] dir 00:05:35.317 Running in /var/jenkins/workspace/lvol-vg-autotest/fedora39-libvirt 00:05:35.320 [Pipeline] { 00:05:35.332 [Pipeline] catchError 00:05:35.334 [Pipeline] { 00:05:35.347 [Pipeline] sh 00:05:35.624 + vagrant ssh-config --host vagrant 00:05:35.624 + sed -ne /^Host/,$p 00:05:35.624 + tee ssh_conf 00:05:39.808 Host vagrant 00:05:39.808 HostName 192.168.121.99 00:05:39.808 User vagrant 00:05:39.808 Port 22 00:05:39.808 UserKnownHostsFile /dev/null 00:05:39.808 StrictHostKeyChecking no 00:05:39.808 PasswordAuthentication no 00:05:39.808 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:39.808 IdentitiesOnly yes 00:05:39.808 LogLevel FATAL 00:05:39.808 ForwardAgent yes 00:05:39.808 ForwardX11 yes 00:05:39.808 00:05:39.823 [Pipeline] withEnv 00:05:39.825 [Pipeline] { 00:05:39.841 [Pipeline] sh 00:05:40.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:40.120 source /etc/os-release 00:05:40.120 [[ -e /image.version ]] && img=$(< /image.version) 00:05:40.120 # Minimal, systemd-like check. 00:05:40.120 if [[ -e /.dockerenv ]]; then 00:05:40.120 # Clear garbage from the node's name: 00:05:40.120 # agt-er_autotest_547-896 -> autotest_547-896 00:05:40.120 # $HOSTNAME is the actual container id 00:05:40.120 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:40.120 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:40.120 # We can assume this is a mount from a host where container is running, 00:05:40.120 # so fetch its hostname to easily identify the target swarm worker. 00:05:40.120 container="$(< /etc/hostname) ($agent)" 00:05:40.120 else 00:05:40.120 # Fallback 00:05:40.120 container=$agent 00:05:40.120 fi 00:05:40.120 fi 00:05:40.120 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:40.120 00:05:40.130 [Pipeline] } 00:05:40.150 [Pipeline] // withEnv 00:05:40.159 [Pipeline] setCustomBuildProperty 00:05:40.176 [Pipeline] stage 00:05:40.179 [Pipeline] { (Tests) 00:05:40.200 [Pipeline] sh 00:05:40.483 + scp -F ssh_conf -r /var/jenkins/workspace/lvol-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:40.498 [Pipeline] sh 00:05:40.777 + scp -F ssh_conf -r /var/jenkins/workspace/lvol-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:40.794 [Pipeline] timeout 00:05:40.795 Timeout set to expire in 20 min 00:05:40.797 [Pipeline] { 00:05:40.815 [Pipeline] sh 00:05:41.093 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:41.661 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:05:41.674 [Pipeline] sh 00:05:41.957 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:42.228 [Pipeline] sh 00:05:42.507 + scp -F ssh_conf -r /var/jenkins/workspace/lvol-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:42.782 [Pipeline] sh 00:05:43.062 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=lvol-vg-autotest ./autoruner.sh spdk_repo 00:05:43.321 ++ readlink -f spdk_repo 00:05:43.321 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:43.321 + [[ -n /home/vagrant/spdk_repo ]] 00:05:43.321 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:43.321 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:43.321 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:43.321 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:43.321 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:43.321 + [[ lvol-vg-autotest == pkgdep-* ]] 00:05:43.321 + cd /home/vagrant/spdk_repo 00:05:43.321 + source /etc/os-release 00:05:43.321 ++ NAME='Fedora Linux' 00:05:43.321 ++ VERSION='39 (Cloud Edition)' 00:05:43.321 ++ ID=fedora 00:05:43.321 ++ VERSION_ID=39 00:05:43.321 ++ VERSION_CODENAME= 00:05:43.321 ++ PLATFORM_ID=platform:f39 00:05:43.321 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:43.321 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:43.321 ++ LOGO=fedora-logo-icon 00:05:43.321 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:43.321 ++ HOME_URL=https://fedoraproject.org/ 00:05:43.321 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:43.321 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:43.321 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:43.321 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:43.321 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:43.321 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:43.321 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:43.321 ++ SUPPORT_END=2024-11-12 00:05:43.321 ++ VARIANT='Cloud Edition' 00:05:43.321 ++ VARIANT_ID=cloud 00:05:43.321 + uname -a 00:05:43.321 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:43.321 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:43.321 Hugepages 00:05:43.321 node hugesize free / total 00:05:43.321 node0 1048576kB 0 / 0 00:05:43.321 node0 2048kB 0 / 0 00:05:43.321 00:05:43.321 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.321 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:43.321 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:43.321 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:43.321 + rm -f /tmp/spdk-ld-path 00:05:43.321 + source autorun-spdk.conf 00:05:43.321 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:43.321 ++ SPDK_TEST_LVOL=1 00:05:43.321 ++ SPDK_RUN_ASAN=1 00:05:43.321 ++ SPDK_RUN_UBSAN=1 00:05:43.321 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:43.321 ++ RUN_NIGHTLY=1 00:05:43.321 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:43.321 + [[ -n '' ]] 00:05:43.321 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:43.321 + for M in /var/spdk/build-*-manifest.txt 00:05:43.321 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:43.321 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:43.321 + for M in /var/spdk/build-*-manifest.txt 00:05:43.321 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:43.321 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:43.607 + for M in /var/spdk/build-*-manifest.txt 00:05:43.607 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:43.607 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:43.607 ++ uname 00:05:43.607 + [[ Linux == \L\i\n\u\x ]] 00:05:43.607 + sudo dmesg -T 00:05:43.607 + sudo dmesg --clear 00:05:43.607 + dmesg_pid=5229 00:05:43.607 + [[ Fedora Linux == FreeBSD ]] 00:05:43.607 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.607 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.607 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:43.607 + [[ -x /usr/src/fio-static/fio ]] 00:05:43.607 + sudo dmesg -Tw 00:05:43.607 + export FIO_BIN=/usr/src/fio-static/fio 00:05:43.607 + FIO_BIN=/usr/src/fio-static/fio 00:05:43.607 
+ [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:43.607 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:43.607 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:43.607 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:43.607 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:43.607 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:43.607 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:43.607 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:43.607 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:43.607 Test configuration: 00:05:43.607 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:43.607 SPDK_TEST_LVOL=1 00:05:43.607 SPDK_RUN_ASAN=1 00:05:43.607 SPDK_RUN_UBSAN=1 00:05:43.607 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:43.607 RUN_NIGHTLY=1 12:25:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.607 12:25:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:43.607 12:25:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.607 12:25:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.607 12:25:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.607 12:25:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.607 12:25:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.607 12:25:25 -- paths/export.sh@5 -- $ export PATH 00:05:43.607 12:25:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.607 12:25:25 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:43.607 12:25:25 -- common/autobuild_common.sh@440 -- $ date +%s 00:05:43.607 12:25:25 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727785525.XXXXXX 00:05:43.607 12:25:25 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727785525.BjXFAP 00:05:43.607 12:25:25 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:05:43.607 12:25:25 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:05:43.607 12:25:25 -- common/autobuild_common.sh@449 -- $ 
scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:43.607 12:25:25 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:43.607 12:25:25 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:43.607 12:25:25 -- common/autobuild_common.sh@456 -- $ get_config_params 00:05:43.607 12:25:25 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:05:43.607 12:25:25 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.607 12:25:26 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:05:43.607 12:25:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:43.607 12:25:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:43.607 12:25:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:43.607 12:25:26 -- spdk/autobuild.sh@16 -- $ date -u 00:05:43.607 Tue Oct 1 12:25:26 PM UTC 2024 00:05:43.607 12:25:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:43.607 LTS-66-g726a04d70 00:05:43.607 12:25:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:43.607 12:25:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:43.607 12:25:26 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:05:43.607 12:25:26 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:05:43.607 12:25:26 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.607 ************************************ 00:05:43.607 START TEST asan 00:05:43.607 ************************************ 00:05:43.607 using asan 00:05:43.607 12:25:26 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:05:43.607 00:05:43.607 real 0m0.001s 00:05:43.608 user 0m0.000s 00:05:43.608 sys 0m0.000s 00:05:43.608 12:25:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:43.608 12:25:26 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.608 ************************************ 00:05:43.608 END TEST asan 00:05:43.608 ************************************ 00:05:43.608 12:25:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:43.608 12:25:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:43.608 12:25:26 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:05:43.608 12:25:26 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:05:43.608 12:25:26 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.608 ************************************ 00:05:43.608 START TEST ubsan 00:05:43.608 ************************************ 00:05:43.608 using ubsan 00:05:43.608 12:25:26 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:05:43.608 00:05:43.608 real 0m0.000s 00:05:43.608 user 0m0.000s 00:05:43.608 sys 0m0.000s 00:05:43.608 12:25:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:43.608 12:25:26 -- common/autotest_common.sh@10 -- $ set +x 00:05:43.608 ************************************ 00:05:43.608 END TEST ubsan 00:05:43.608 ************************************ 00:05:43.904 12:25:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:43.905 12:25:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:43.905 12:25:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:43.905 12:25:26 -- 
spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:43.905 12:25:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:43.905 12:25:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:43.905 12:25:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:43.905 12:25:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:43.905 12:25:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:05:43.905 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:43.905 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:44.163 Using 'verbs' RDMA provider 00:05:57.297 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:06:09.510 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:06:09.510 Creating mk/config.mk...done. 00:06:09.510 Creating mk/cc.flags.mk...done. 00:06:09.510 Type 'make' to build. 00:06:09.510 12:25:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:06:09.510 12:25:51 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:06:09.510 12:25:51 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:06:09.510 12:25:51 -- common/autotest_common.sh@10 -- $ set +x 00:06:09.510 ************************************ 00:06:09.510 START TEST make 00:06:09.510 ************************************ 00:06:09.510 12:25:51 -- common/autotest_common.sh@1104 -- $ make -j10 00:06:09.510 make[1]: Nothing to be done for 'all'. 00:06:24.425 The Meson build system 00:06:24.425 Version: 1.5.0 00:06:24.425 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:24.425 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:24.425 Build type: native build 00:06:24.425 Program cat found: YES (/usr/bin/cat) 00:06:24.425 Project name: DPDK 00:06:24.425 Project version: 23.11.0 00:06:24.425 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:24.425 C linker for the host machine: cc ld.bfd 2.40-14 00:06:24.425 Host machine cpu family: x86_64 00:06:24.425 Host machine cpu: x86_64 00:06:24.425 Message: ## Building in Developer Mode ## 00:06:24.425 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:24.425 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:24.425 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:24.425 Program python3 found: YES (/usr/bin/python3) 00:06:24.425 Program cat found: YES (/usr/bin/cat) 00:06:24.425 Compiler for C supports arguments -march=native: YES 00:06:24.425 Checking for size of "void *" : 8 00:06:24.425 Checking for size of "void *" : 8 (cached) 00:06:24.425 Library m found: YES 00:06:24.425 Library numa found: YES 00:06:24.425 Has header "numaif.h" : YES 00:06:24.425 Library fdt found: NO 00:06:24.425 Library execinfo found: NO 00:06:24.425 Has header "execinfo.h" : YES 00:06:24.425 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:24.425 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:24.425 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:24.425 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:24.425 Run-time dependency openssl found: YES 3.1.1 00:06:24.425 Run-time dependency libpcap found: YES 1.10.4 00:06:24.425 Has 
header "pcap.h" with dependency libpcap: YES 00:06:24.425 Compiler for C supports arguments -Wcast-qual: YES 00:06:24.425 Compiler for C supports arguments -Wdeprecated: YES 00:06:24.425 Compiler for C supports arguments -Wformat: YES 00:06:24.425 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:24.425 Compiler for C supports arguments -Wformat-security: NO 00:06:24.425 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:24.425 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:24.425 Compiler for C supports arguments -Wnested-externs: YES 00:06:24.425 Compiler for C supports arguments -Wold-style-definition: YES 00:06:24.425 Compiler for C supports arguments -Wpointer-arith: YES 00:06:24.425 Compiler for C supports arguments -Wsign-compare: YES 00:06:24.425 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:24.425 Compiler for C supports arguments -Wundef: YES 00:06:24.425 Compiler for C supports arguments -Wwrite-strings: YES 00:06:24.425 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:24.425 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:24.425 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:24.425 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:24.425 Program objdump found: YES (/usr/bin/objdump) 00:06:24.425 Compiler for C supports arguments -mavx512f: YES 00:06:24.425 Checking if "AVX512 checking" compiles: YES 00:06:24.425 Fetching value of define "__SSE4_2__" : 1 00:06:24.425 Fetching value of define "__AES__" : 1 00:06:24.425 Fetching value of define "__AVX__" : 1 00:06:24.425 Fetching value of define "__AVX2__" : 1 00:06:24.425 Fetching value of define "__AVX512BW__" : (undefined) 00:06:24.425 Fetching value of define "__AVX512CD__" : (undefined) 00:06:24.425 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:24.425 Fetching value of define "__AVX512F__" : (undefined) 00:06:24.425 Fetching value of define "__AVX512VL__" : (undefined) 00:06:24.425 Fetching value of define "__PCLMUL__" : 1 00:06:24.425 Fetching value of define "__RDRND__" : 1 00:06:24.425 Fetching value of define "__RDSEED__" : 1 00:06:24.425 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:24.425 Fetching value of define "__znver1__" : (undefined) 00:06:24.425 Fetching value of define "__znver2__" : (undefined) 00:06:24.425 Fetching value of define "__znver3__" : (undefined) 00:06:24.425 Fetching value of define "__znver4__" : (undefined) 00:06:24.425 Library asan found: YES 00:06:24.425 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:24.425 Message: lib/log: Defining dependency "log" 00:06:24.425 Message: lib/kvargs: Defining dependency "kvargs" 00:06:24.425 Message: lib/telemetry: Defining dependency "telemetry" 00:06:24.425 Library rt found: YES 00:06:24.425 Checking for function "getentropy" : NO 00:06:24.425 Message: lib/eal: Defining dependency "eal" 00:06:24.425 Message: lib/ring: Defining dependency "ring" 00:06:24.425 Message: lib/rcu: Defining dependency "rcu" 00:06:24.425 Message: lib/mempool: Defining dependency "mempool" 00:06:24.425 Message: lib/mbuf: Defining dependency "mbuf" 00:06:24.425 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:24.425 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:24.425 Compiler for C supports arguments -mpclmul: YES 00:06:24.425 Compiler for C supports arguments -maes: YES 00:06:24.425 Compiler for C supports arguments -mavx512f: YES (cached) 
00:06:24.425 Compiler for C supports arguments -mavx512bw: YES 00:06:24.425 Compiler for C supports arguments -mavx512dq: YES 00:06:24.425 Compiler for C supports arguments -mavx512vl: YES 00:06:24.425 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:24.425 Compiler for C supports arguments -mavx2: YES 00:06:24.425 Compiler for C supports arguments -mavx: YES 00:06:24.425 Message: lib/net: Defining dependency "net" 00:06:24.425 Message: lib/meter: Defining dependency "meter" 00:06:24.425 Message: lib/ethdev: Defining dependency "ethdev" 00:06:24.425 Message: lib/pci: Defining dependency "pci" 00:06:24.425 Message: lib/cmdline: Defining dependency "cmdline" 00:06:24.425 Message: lib/hash: Defining dependency "hash" 00:06:24.425 Message: lib/timer: Defining dependency "timer" 00:06:24.425 Message: lib/compressdev: Defining dependency "compressdev" 00:06:24.425 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:24.425 Message: lib/dmadev: Defining dependency "dmadev" 00:06:24.425 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:24.425 Message: lib/power: Defining dependency "power" 00:06:24.425 Message: lib/reorder: Defining dependency "reorder" 00:06:24.425 Message: lib/security: Defining dependency "security" 00:06:24.425 Has header "linux/userfaultfd.h" : YES 00:06:24.425 Has header "linux/vduse.h" : YES 00:06:24.425 Message: lib/vhost: Defining dependency "vhost" 00:06:24.425 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:24.425 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:24.425 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:24.425 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:24.425 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:24.425 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:24.425 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:24.425 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:24.425 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:24.425 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:24.425 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:24.425 Configuring doxy-api-html.conf using configuration 00:06:24.425 Configuring doxy-api-man.conf using configuration 00:06:24.425 Program mandb found: YES (/usr/bin/mandb) 00:06:24.425 Program sphinx-build found: NO 00:06:24.425 Configuring rte_build_config.h using configuration 00:06:24.425 Message: 00:06:24.425 ================= 00:06:24.425 Applications Enabled 00:06:24.425 ================= 00:06:24.425 00:06:24.425 apps: 00:06:24.425 00:06:24.425 00:06:24.425 Message: 00:06:24.425 ================= 00:06:24.425 Libraries Enabled 00:06:24.425 ================= 00:06:24.425 00:06:24.425 libs: 00:06:24.425 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:24.425 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:24.425 cryptodev, dmadev, power, reorder, security, vhost, 00:06:24.425 00:06:24.425 Message: 00:06:24.425 =============== 00:06:24.425 Drivers Enabled 00:06:24.425 =============== 00:06:24.425 00:06:24.425 common: 00:06:24.425 00:06:24.425 bus: 00:06:24.425 pci, vdev, 00:06:24.425 mempool: 00:06:24.425 ring, 00:06:24.425 dma: 00:06:24.425 00:06:24.425 net: 00:06:24.425 00:06:24.425 crypto: 00:06:24.425 00:06:24.425 compress: 00:06:24.425 00:06:24.425 vdpa: 00:06:24.425 00:06:24.425 
00:06:24.425 Message: 00:06:24.425 ================= 00:06:24.425 Content Skipped 00:06:24.425 ================= 00:06:24.425 00:06:24.425 apps: 00:06:24.425 dumpcap: explicitly disabled via build config 00:06:24.425 graph: explicitly disabled via build config 00:06:24.425 pdump: explicitly disabled via build config 00:06:24.425 proc-info: explicitly disabled via build config 00:06:24.425 test-acl: explicitly disabled via build config 00:06:24.425 test-bbdev: explicitly disabled via build config 00:06:24.425 test-cmdline: explicitly disabled via build config 00:06:24.425 test-compress-perf: explicitly disabled via build config 00:06:24.425 test-crypto-perf: explicitly disabled via build config 00:06:24.425 test-dma-perf: explicitly disabled via build config 00:06:24.425 test-eventdev: explicitly disabled via build config 00:06:24.425 test-fib: explicitly disabled via build config 00:06:24.425 test-flow-perf: explicitly disabled via build config 00:06:24.425 test-gpudev: explicitly disabled via build config 00:06:24.425 test-mldev: explicitly disabled via build config 00:06:24.425 test-pipeline: explicitly disabled via build config 00:06:24.425 test-pmd: explicitly disabled via build config 00:06:24.426 test-regex: explicitly disabled via build config 00:06:24.426 test-sad: explicitly disabled via build config 00:06:24.426 test-security-perf: explicitly disabled via build config 00:06:24.426 00:06:24.426 libs: 00:06:24.426 metrics: explicitly disabled via build config 00:06:24.426 acl: explicitly disabled via build config 00:06:24.426 bbdev: explicitly disabled via build config 00:06:24.426 bitratestats: explicitly disabled via build config 00:06:24.426 bpf: explicitly disabled via build config 00:06:24.426 cfgfile: explicitly disabled via build config 00:06:24.426 distributor: explicitly disabled via build config 00:06:24.426 efd: explicitly disabled via build config 00:06:24.426 eventdev: explicitly disabled via build config 00:06:24.426 dispatcher: explicitly disabled via build config 00:06:24.426 gpudev: explicitly disabled via build config 00:06:24.426 gro: explicitly disabled via build config 00:06:24.426 gso: explicitly disabled via build config 00:06:24.426 ip_frag: explicitly disabled via build config 00:06:24.426 jobstats: explicitly disabled via build config 00:06:24.426 latencystats: explicitly disabled via build config 00:06:24.426 lpm: explicitly disabled via build config 00:06:24.426 member: explicitly disabled via build config 00:06:24.426 pcapng: explicitly disabled via build config 00:06:24.426 rawdev: explicitly disabled via build config 00:06:24.426 regexdev: explicitly disabled via build config 00:06:24.426 mldev: explicitly disabled via build config 00:06:24.426 rib: explicitly disabled via build config 00:06:24.426 sched: explicitly disabled via build config 00:06:24.426 stack: explicitly disabled via build config 00:06:24.426 ipsec: explicitly disabled via build config 00:06:24.426 pdcp: explicitly disabled via build config 00:06:24.426 fib: explicitly disabled via build config 00:06:24.426 port: explicitly disabled via build config 00:06:24.426 pdump: explicitly disabled via build config 00:06:24.426 table: explicitly disabled via build config 00:06:24.426 pipeline: explicitly disabled via build config 00:06:24.426 graph: explicitly disabled via build config 00:06:24.426 node: explicitly disabled via build config 00:06:24.426 00:06:24.426 drivers: 00:06:24.426 common/cpt: not in enabled drivers build config 00:06:24.426 common/dpaax: not in enabled drivers build 
config 00:06:24.426 common/iavf: not in enabled drivers build config 00:06:24.426 common/idpf: not in enabled drivers build config 00:06:24.426 common/mvep: not in enabled drivers build config 00:06:24.426 common/octeontx: not in enabled drivers build config 00:06:24.426 bus/auxiliary: not in enabled drivers build config 00:06:24.426 bus/cdx: not in enabled drivers build config 00:06:24.426 bus/dpaa: not in enabled drivers build config 00:06:24.426 bus/fslmc: not in enabled drivers build config 00:06:24.426 bus/ifpga: not in enabled drivers build config 00:06:24.426 bus/platform: not in enabled drivers build config 00:06:24.426 bus/vmbus: not in enabled drivers build config 00:06:24.426 common/cnxk: not in enabled drivers build config 00:06:24.426 common/mlx5: not in enabled drivers build config 00:06:24.426 common/nfp: not in enabled drivers build config 00:06:24.426 common/qat: not in enabled drivers build config 00:06:24.426 common/sfc_efx: not in enabled drivers build config 00:06:24.426 mempool/bucket: not in enabled drivers build config 00:06:24.426 mempool/cnxk: not in enabled drivers build config 00:06:24.426 mempool/dpaa: not in enabled drivers build config 00:06:24.426 mempool/dpaa2: not in enabled drivers build config 00:06:24.426 mempool/octeontx: not in enabled drivers build config 00:06:24.426 mempool/stack: not in enabled drivers build config 00:06:24.426 dma/cnxk: not in enabled drivers build config 00:06:24.426 dma/dpaa: not in enabled drivers build config 00:06:24.426 dma/dpaa2: not in enabled drivers build config 00:06:24.426 dma/hisilicon: not in enabled drivers build config 00:06:24.426 dma/idxd: not in enabled drivers build config 00:06:24.426 dma/ioat: not in enabled drivers build config 00:06:24.426 dma/skeleton: not in enabled drivers build config 00:06:24.426 net/af_packet: not in enabled drivers build config 00:06:24.426 net/af_xdp: not in enabled drivers build config 00:06:24.426 net/ark: not in enabled drivers build config 00:06:24.426 net/atlantic: not in enabled drivers build config 00:06:24.426 net/avp: not in enabled drivers build config 00:06:24.426 net/axgbe: not in enabled drivers build config 00:06:24.426 net/bnx2x: not in enabled drivers build config 00:06:24.426 net/bnxt: not in enabled drivers build config 00:06:24.426 net/bonding: not in enabled drivers build config 00:06:24.426 net/cnxk: not in enabled drivers build config 00:06:24.426 net/cpfl: not in enabled drivers build config 00:06:24.426 net/cxgbe: not in enabled drivers build config 00:06:24.426 net/dpaa: not in enabled drivers build config 00:06:24.426 net/dpaa2: not in enabled drivers build config 00:06:24.426 net/e1000: not in enabled drivers build config 00:06:24.426 net/ena: not in enabled drivers build config 00:06:24.426 net/enetc: not in enabled drivers build config 00:06:24.426 net/enetfec: not in enabled drivers build config 00:06:24.426 net/enic: not in enabled drivers build config 00:06:24.426 net/failsafe: not in enabled drivers build config 00:06:24.426 net/fm10k: not in enabled drivers build config 00:06:24.426 net/gve: not in enabled drivers build config 00:06:24.426 net/hinic: not in enabled drivers build config 00:06:24.426 net/hns3: not in enabled drivers build config 00:06:24.426 net/i40e: not in enabled drivers build config 00:06:24.426 net/iavf: not in enabled drivers build config 00:06:24.426 net/ice: not in enabled drivers build config 00:06:24.426 net/idpf: not in enabled drivers build config 00:06:24.426 net/igc: not in enabled drivers build config 00:06:24.426 
net/ionic: not in enabled drivers build config 00:06:24.426 net/ipn3ke: not in enabled drivers build config 00:06:24.426 net/ixgbe: not in enabled drivers build config 00:06:24.426 net/mana: not in enabled drivers build config 00:06:24.426 net/memif: not in enabled drivers build config 00:06:24.426 net/mlx4: not in enabled drivers build config 00:06:24.426 net/mlx5: not in enabled drivers build config 00:06:24.426 net/mvneta: not in enabled drivers build config 00:06:24.426 net/mvpp2: not in enabled drivers build config 00:06:24.426 net/netvsc: not in enabled drivers build config 00:06:24.426 net/nfb: not in enabled drivers build config 00:06:24.426 net/nfp: not in enabled drivers build config 00:06:24.426 net/ngbe: not in enabled drivers build config 00:06:24.426 net/null: not in enabled drivers build config 00:06:24.426 net/octeontx: not in enabled drivers build config 00:06:24.426 net/octeon_ep: not in enabled drivers build config 00:06:24.426 net/pcap: not in enabled drivers build config 00:06:24.426 net/pfe: not in enabled drivers build config 00:06:24.426 net/qede: not in enabled drivers build config 00:06:24.426 net/ring: not in enabled drivers build config 00:06:24.426 net/sfc: not in enabled drivers build config 00:06:24.426 net/softnic: not in enabled drivers build config 00:06:24.426 net/tap: not in enabled drivers build config 00:06:24.426 net/thunderx: not in enabled drivers build config 00:06:24.426 net/txgbe: not in enabled drivers build config 00:06:24.426 net/vdev_netvsc: not in enabled drivers build config 00:06:24.426 net/vhost: not in enabled drivers build config 00:06:24.426 net/virtio: not in enabled drivers build config 00:06:24.426 net/vmxnet3: not in enabled drivers build config 00:06:24.426 raw/*: missing internal dependency, "rawdev" 00:06:24.426 crypto/armv8: not in enabled drivers build config 00:06:24.426 crypto/bcmfs: not in enabled drivers build config 00:06:24.426 crypto/caam_jr: not in enabled drivers build config 00:06:24.426 crypto/ccp: not in enabled drivers build config 00:06:24.426 crypto/cnxk: not in enabled drivers build config 00:06:24.426 crypto/dpaa_sec: not in enabled drivers build config 00:06:24.426 crypto/dpaa2_sec: not in enabled drivers build config 00:06:24.426 crypto/ipsec_mb: not in enabled drivers build config 00:06:24.426 crypto/mlx5: not in enabled drivers build config 00:06:24.426 crypto/mvsam: not in enabled drivers build config 00:06:24.426 crypto/nitrox: not in enabled drivers build config 00:06:24.426 crypto/null: not in enabled drivers build config 00:06:24.426 crypto/octeontx: not in enabled drivers build config 00:06:24.426 crypto/openssl: not in enabled drivers build config 00:06:24.426 crypto/scheduler: not in enabled drivers build config 00:06:24.426 crypto/uadk: not in enabled drivers build config 00:06:24.426 crypto/virtio: not in enabled drivers build config 00:06:24.426 compress/isal: not in enabled drivers build config 00:06:24.426 compress/mlx5: not in enabled drivers build config 00:06:24.426 compress/octeontx: not in enabled drivers build config 00:06:24.426 compress/zlib: not in enabled drivers build config 00:06:24.426 regex/*: missing internal dependency, "regexdev" 00:06:24.426 ml/*: missing internal dependency, "mldev" 00:06:24.426 vdpa/ifc: not in enabled drivers build config 00:06:24.426 vdpa/mlx5: not in enabled drivers build config 00:06:24.426 vdpa/nfp: not in enabled drivers build config 00:06:24.426 vdpa/sfc: not in enabled drivers build config 00:06:24.426 event/*: missing internal dependency, 
"eventdev" 00:06:24.426 baseband/*: missing internal dependency, "bbdev" 00:06:24.426 gpu/*: missing internal dependency, "gpudev" 00:06:24.426 00:06:24.426 00:06:24.993 Build targets in project: 85 00:06:24.993 00:06:24.993 DPDK 23.11.0 00:06:24.993 00:06:24.993 User defined options 00:06:24.993 buildtype : debug 00:06:24.993 default_library : shared 00:06:24.993 libdir : lib 00:06:24.993 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:24.993 b_sanitize : address 00:06:24.993 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:06:24.993 c_link_args : 00:06:24.993 cpu_instruction_set: native 00:06:24.993 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:24.993 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:24.993 enable_docs : false 00:06:24.993 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:24.993 enable_kmods : false 00:06:24.993 tests : false 00:06:24.993 00:06:24.993 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:25.603 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:25.862 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:25.862 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:25.862 [3/265] Linking static target lib/librte_kvargs.a 00:06:25.862 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:25.862 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:25.862 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:26.120 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:26.120 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:26.120 [9/265] Linking static target lib/librte_log.a 00:06:26.120 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:26.688 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.946 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:27.205 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:27.205 [14/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.205 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:27.205 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:27.205 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:27.205 [18/265] Linking static target lib/librte_telemetry.a 00:06:27.205 [19/265] Linking target lib/librte_log.so.24.0 00:06:27.463 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:27.463 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:27.463 [22/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:06:27.720 [23/265] Linking target lib/librte_kvargs.so.24.0 00:06:27.720 [24/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:27.720 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:27.978 [26/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:06:27.978 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:28.236 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:28.236 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.236 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:28.236 [31/265] Linking target lib/librte_telemetry.so.24.0 00:06:28.236 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:28.495 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:28.495 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:28.495 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:28.495 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:06:28.753 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:28.753 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:28.753 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:29.011 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:29.012 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:29.012 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:29.012 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:29.270 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:29.270 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:29.270 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:29.530 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:29.530 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:29.788 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:29.788 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:30.046 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:30.046 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:30.046 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:30.046 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:30.304 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:30.304 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:30.304 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:30.304 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:30.565 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:30.565 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:30.565 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:30.565 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:30.565 
[63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:31.179 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:31.179 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:31.179 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:31.179 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:31.179 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:31.437 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:31.437 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:31.437 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:31.694 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:31.694 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:31.694 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:31.694 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:31.694 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:31.694 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:31.952 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:32.209 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:32.209 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:32.467 [81/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:32.467 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:32.467 [83/265] Linking static target lib/librte_ring.a 00:06:32.467 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:32.467 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:32.724 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:32.724 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:32.724 [88/265] Linking static target lib/librte_eal.a 00:06:32.982 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:32.982 [90/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.982 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:32.982 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:32.982 [93/265] Linking static target lib/librte_mempool.a 00:06:32.982 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:33.240 [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:33.240 [96/265] Linking static target lib/librte_rcu.a 00:06:33.240 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:33.498 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:33.498 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:33.498 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:33.755 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.755 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:33.755 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:34.012 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 
00:06:34.012 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:34.270 [106/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:34.270 [107/265] Linking static target lib/librte_meter.a 00:06:34.270 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:34.270 [109/265] Linking static target lib/librte_net.a 00:06:34.270 [110/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:34.270 [111/265] Linking static target lib/librte_mbuf.a 00:06:34.270 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.527 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:34.527 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:34.527 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:34.784 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.784 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.784 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:35.353 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:35.353 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:35.611 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:35.611 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:35.869 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:35.869 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:35.869 [125/265] Linking static target lib/librte_pci.a 00:06:36.127 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:36.384 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:36.384 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:36.384 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.384 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:36.384 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:36.384 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:36.384 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:36.384 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:36.640 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:36.640 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:36.640 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:36.640 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:36.640 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:36.640 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:36.640 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:36.897 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:36.897 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:36.897 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 
00:06:37.155 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:37.155 [146/265] Linking static target lib/librte_cmdline.a 00:06:37.416 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:37.416 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:37.673 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:37.673 [150/265] Linking static target lib/librte_timer.a 00:06:37.673 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:37.931 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:38.189 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:38.189 [154/265] Linking static target lib/librte_compressdev.a 00:06:38.189 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:38.189 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:38.189 [157/265] Linking static target lib/librte_ethdev.a 00:06:38.447 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:38.447 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:38.447 [160/265] Linking static target lib/librte_hash.a 00:06:38.447 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:38.447 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:38.705 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:38.706 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:38.706 [165/265] Linking static target lib/librte_dmadev.a 00:06:38.963 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:38.963 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:38.963 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:38.963 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:39.221 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.221 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:39.479 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.479 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.737 [174/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:39.737 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:39.737 [176/265] Linking static target lib/librte_cryptodev.a 00:06:39.737 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:39.737 [178/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:39.737 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:39.737 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:39.995 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:40.253 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:40.253 [183/265] Linking static target lib/librte_power.a 00:06:40.512 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:40.512 
[185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:40.512 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:40.512 [187/265] Linking static target lib/librte_reorder.a 00:06:40.512 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:40.512 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:40.512 [190/265] Linking static target lib/librte_security.a 00:06:40.771 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.030 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.030 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.287 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:41.546 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:41.546 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.546 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:41.803 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:41.803 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:41.803 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:42.062 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:42.062 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:42.320 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:42.320 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:42.320 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:42.320 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:42.320 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:42.320 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:42.578 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:42.578 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:42.578 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:42.578 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:42.578 [213/265] Linking static target drivers/librte_bus_pci.a 00:06:42.578 [214/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:42.578 [215/265] Linking static target drivers/librte_bus_vdev.a 00:06:42.578 [216/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:42.836 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:42.836 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:42.836 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.094 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:43.095 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:43.095 [222/265] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:43.095 [223/265] Linking static target drivers/librte_mempool_ring.a 00:06:43.353 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.920 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.920 [226/265] Linking target lib/librte_eal.so.24.0 00:06:43.920 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:06:44.179 [228/265] Linking target lib/librte_pci.so.24.0 00:06:44.179 [229/265] Linking target drivers/librte_bus_vdev.so.24.0 00:06:44.179 [230/265] Linking target lib/librte_meter.so.24.0 00:06:44.179 [231/265] Linking target lib/librte_ring.so.24.0 00:06:44.179 [232/265] Linking target lib/librte_dmadev.so.24.0 00:06:44.179 [233/265] Linking target lib/librte_timer.so.24.0 00:06:44.179 [234/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:44.179 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:06:44.179 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:06:44.179 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:06:44.179 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:06:44.179 [239/265] Linking target lib/librte_rcu.so.24.0 00:06:44.179 [240/265] Linking target lib/librte_mempool.so.24.0 00:06:44.179 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:06:44.179 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:06:44.438 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:06:44.438 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:06:44.438 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:06:44.438 [246/265] Linking target lib/librte_mbuf.so.24.0 00:06:44.696 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:06:44.696 [248/265] Linking target lib/librte_compressdev.so.24.0 00:06:44.696 [249/265] Linking target lib/librte_reorder.so.24.0 00:06:44.696 [250/265] Linking target lib/librte_net.so.24.0 00:06:44.696 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:06:44.954 [252/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:06:44.954 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:06:44.954 [254/265] Linking target lib/librte_security.so.24.0 00:06:44.954 [255/265] Linking target lib/librte_hash.so.24.0 00:06:44.954 [256/265] Linking target lib/librte_cmdline.so.24.0 00:06:45.211 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:06:45.470 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:45.470 [259/265] Linking target lib/librte_ethdev.so.24.0 00:06:45.729 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:06:45.729 [261/265] Linking target lib/librte_power.so.24.0 00:06:49.019 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:49.019 [263/265] Linking static target lib/librte_vhost.a 00:06:50.396 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.396 [265/265] Linking target 
lib/librte_vhost.so.24.0 00:06:50.396 INFO: autodetecting backend as ninja 00:06:50.396 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:51.773 CC lib/ut_mock/mock.o 00:06:51.774 CC lib/ut/ut.o 00:06:51.774 CC lib/log/log.o 00:06:51.774 CC lib/log/log_deprecated.o 00:06:51.774 CC lib/log/log_flags.o 00:06:51.774 LIB libspdk_ut_mock.a 00:06:51.774 SO libspdk_ut_mock.so.5.0 00:06:51.774 LIB libspdk_ut.a 00:06:51.774 LIB libspdk_log.a 00:06:51.774 SO libspdk_ut.so.1.0 00:06:51.774 SO libspdk_log.so.6.1 00:06:51.774 SYMLINK libspdk_ut_mock.so 00:06:51.774 SYMLINK libspdk_ut.so 00:06:51.774 SYMLINK libspdk_log.so 00:06:52.032 CC lib/util/base64.o 00:06:52.032 CC lib/util/cpuset.o 00:06:52.032 CC lib/util/bit_array.o 00:06:52.032 CC lib/util/crc32c.o 00:06:52.032 CC lib/util/crc16.o 00:06:52.032 CC lib/dma/dma.o 00:06:52.032 CXX lib/trace_parser/trace.o 00:06:52.032 CC lib/util/crc32.o 00:06:52.032 CC lib/ioat/ioat.o 00:06:52.032 CC lib/vfio_user/host/vfio_user_pci.o 00:06:52.291 CC lib/util/crc32_ieee.o 00:06:52.291 CC lib/vfio_user/host/vfio_user.o 00:06:52.291 CC lib/util/crc64.o 00:06:52.291 CC lib/util/dif.o 00:06:52.291 CC lib/util/fd.o 00:06:52.550 CC lib/util/file.o 00:06:52.550 CC lib/util/hexlify.o 00:06:52.550 LIB libspdk_dma.a 00:06:52.550 SO libspdk_dma.so.3.0 00:06:52.550 CC lib/util/iov.o 00:06:52.550 CC lib/util/math.o 00:06:52.550 SYMLINK libspdk_dma.so 00:06:52.550 CC lib/util/pipe.o 00:06:52.550 CC lib/util/strerror_tls.o 00:06:52.550 CC lib/util/string.o 00:06:52.550 CC lib/util/uuid.o 00:06:52.550 CC lib/util/fd_group.o 00:06:52.809 LIB libspdk_ioat.a 00:06:52.809 SO libspdk_ioat.so.6.0 00:06:52.809 LIB libspdk_vfio_user.a 00:06:52.809 CC lib/util/xor.o 00:06:52.809 SYMLINK libspdk_ioat.so 00:06:52.809 CC lib/util/zipf.o 00:06:52.809 SO libspdk_vfio_user.so.4.0 00:06:52.809 SYMLINK libspdk_vfio_user.so 00:06:53.067 LIB libspdk_util.a 00:06:53.325 SO libspdk_util.so.8.0 00:06:53.325 SYMLINK libspdk_util.so 00:06:53.584 CC lib/json/json_parse.o 00:06:53.584 CC lib/json/json_util.o 00:06:53.584 CC lib/json/json_write.o 00:06:53.584 CC lib/env_dpdk/env.o 00:06:53.584 CC lib/env_dpdk/memory.o 00:06:53.584 LIB libspdk_trace_parser.a 00:06:53.584 CC lib/vmd/vmd.o 00:06:53.584 CC lib/conf/conf.o 00:06:53.584 CC lib/rdma/common.o 00:06:53.584 CC lib/idxd/idxd.o 00:06:53.584 SO libspdk_trace_parser.so.4.0 00:06:53.842 SYMLINK libspdk_trace_parser.so 00:06:53.842 CC lib/idxd/idxd_user.o 00:06:53.842 CC lib/vmd/led.o 00:06:53.842 CC lib/env_dpdk/pci.o 00:06:53.842 LIB libspdk_conf.a 00:06:53.842 LIB libspdk_json.a 00:06:53.842 CC lib/rdma/rdma_verbs.o 00:06:53.842 SO libspdk_conf.so.5.0 00:06:53.842 SO libspdk_json.so.5.1 00:06:54.101 SYMLINK libspdk_conf.so 00:06:54.101 CC lib/idxd/idxd_kernel.o 00:06:54.101 CC lib/env_dpdk/init.o 00:06:54.101 SYMLINK libspdk_json.so 00:06:54.101 CC lib/env_dpdk/threads.o 00:06:54.101 CC lib/env_dpdk/pci_ioat.o 00:06:54.101 LIB libspdk_rdma.a 00:06:54.101 SO libspdk_rdma.so.5.0 00:06:54.101 CC lib/env_dpdk/pci_virtio.o 00:06:54.101 CC lib/env_dpdk/pci_vmd.o 00:06:54.359 SYMLINK libspdk_rdma.so 00:06:54.359 CC lib/env_dpdk/pci_idxd.o 00:06:54.359 CC lib/jsonrpc/jsonrpc_server.o 00:06:54.359 CC lib/env_dpdk/pci_event.o 00:06:54.359 LIB libspdk_idxd.a 00:06:54.359 CC lib/env_dpdk/sigbus_handler.o 00:06:54.359 CC lib/env_dpdk/pci_dpdk.o 00:06:54.359 SO libspdk_idxd.so.11.0 00:06:54.359 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:54.359 LIB libspdk_vmd.a 00:06:54.359 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:06:54.359 SO libspdk_vmd.so.5.0 00:06:54.359 SYMLINK libspdk_idxd.so 00:06:54.617 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:54.617 CC lib/jsonrpc/jsonrpc_client.o 00:06:54.617 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:54.617 SYMLINK libspdk_vmd.so 00:06:54.876 LIB libspdk_jsonrpc.a 00:06:54.876 SO libspdk_jsonrpc.so.5.1 00:06:54.876 SYMLINK libspdk_jsonrpc.so 00:06:55.154 CC lib/rpc/rpc.o 00:06:55.424 LIB libspdk_rpc.a 00:06:55.424 SO libspdk_rpc.so.5.0 00:06:55.424 SYMLINK libspdk_rpc.so 00:06:55.683 LIB libspdk_env_dpdk.a 00:06:55.683 CC lib/trace/trace.o 00:06:55.683 CC lib/trace/trace_flags.o 00:06:55.683 CC lib/trace/trace_rpc.o 00:06:55.683 CC lib/notify/notify.o 00:06:55.683 CC lib/notify/notify_rpc.o 00:06:55.683 CC lib/sock/sock.o 00:06:55.683 CC lib/sock/sock_rpc.o 00:06:55.683 SO libspdk_env_dpdk.so.13.0 00:06:55.683 LIB libspdk_notify.a 00:06:55.941 SO libspdk_notify.so.5.0 00:06:55.941 SYMLINK libspdk_env_dpdk.so 00:06:55.941 SYMLINK libspdk_notify.so 00:06:55.941 LIB libspdk_trace.a 00:06:55.941 SO libspdk_trace.so.9.0 00:06:55.941 SYMLINK libspdk_trace.so 00:06:56.199 LIB libspdk_sock.a 00:06:56.199 SO libspdk_sock.so.8.0 00:06:56.199 CC lib/thread/thread.o 00:06:56.200 CC lib/thread/iobuf.o 00:06:56.200 SYMLINK libspdk_sock.so 00:06:56.458 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:56.458 CC lib/nvme/nvme_ctrlr.o 00:06:56.458 CC lib/nvme/nvme_fabric.o 00:06:56.458 CC lib/nvme/nvme_ns_cmd.o 00:06:56.458 CC lib/nvme/nvme_ns.o 00:06:56.458 CC lib/nvme/nvme_pcie_common.o 00:06:56.458 CC lib/nvme/nvme_pcie.o 00:06:56.458 CC lib/nvme/nvme_qpair.o 00:06:56.716 CC lib/nvme/nvme.o 00:06:57.285 CC lib/nvme/nvme_quirks.o 00:06:57.285 CC lib/nvme/nvme_transport.o 00:06:57.544 CC lib/nvme/nvme_discovery.o 00:06:57.544 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:57.544 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:57.544 CC lib/nvme/nvme_tcp.o 00:06:57.802 CC lib/nvme/nvme_opal.o 00:06:57.802 CC lib/nvme/nvme_io_msg.o 00:06:57.802 CC lib/nvme/nvme_poll_group.o 00:06:58.061 CC lib/nvme/nvme_zns.o 00:06:58.061 CC lib/nvme/nvme_cuse.o 00:06:58.319 CC lib/nvme/nvme_vfio_user.o 00:06:58.319 LIB libspdk_thread.a 00:06:58.319 SO libspdk_thread.so.9.0 00:06:58.319 CC lib/nvme/nvme_rdma.o 00:06:58.319 SYMLINK libspdk_thread.so 00:06:58.577 CC lib/accel/accel.o 00:06:58.577 CC lib/blob/blobstore.o 00:06:58.577 CC lib/accel/accel_rpc.o 00:06:58.577 CC lib/accel/accel_sw.o 00:06:58.835 CC lib/blob/request.o 00:06:58.835 CC lib/init/json_config.o 00:06:58.835 CC lib/init/subsystem.o 00:06:58.835 CC lib/init/subsystem_rpc.o 00:06:59.093 CC lib/init/rpc.o 00:06:59.093 CC lib/blob/zeroes.o 00:06:59.093 CC lib/blob/blob_bs_dev.o 00:06:59.093 CC lib/virtio/virtio.o 00:06:59.352 LIB libspdk_init.a 00:06:59.352 CC lib/virtio/virtio_vhost_user.o 00:06:59.352 CC lib/virtio/virtio_vfio_user.o 00:06:59.352 SO libspdk_init.so.4.0 00:06:59.352 SYMLINK libspdk_init.so 00:06:59.352 CC lib/virtio/virtio_pci.o 00:06:59.609 CC lib/event/app.o 00:06:59.609 CC lib/event/log_rpc.o 00:06:59.609 CC lib/event/reactor.o 00:06:59.609 CC lib/event/app_rpc.o 00:06:59.609 CC lib/event/scheduler_static.o 00:06:59.867 LIB libspdk_virtio.a 00:06:59.867 SO libspdk_virtio.so.6.0 00:06:59.867 SYMLINK libspdk_virtio.so 00:07:00.190 LIB libspdk_nvme.a 00:07:00.190 LIB libspdk_accel.a 00:07:00.190 LIB libspdk_event.a 00:07:00.190 SO libspdk_accel.so.14.0 00:07:00.190 SO libspdk_event.so.12.0 00:07:00.448 SYMLINK libspdk_accel.so 00:07:00.448 SYMLINK libspdk_event.so 00:07:00.448 SO libspdk_nvme.so.12.0 00:07:00.448 CC 
lib/bdev/bdev_rpc.o 00:07:00.448 CC lib/bdev/bdev.o 00:07:00.448 CC lib/bdev/part.o 00:07:00.448 CC lib/bdev/bdev_zone.o 00:07:00.448 CC lib/bdev/scsi_nvme.o 00:07:00.706 SYMLINK libspdk_nvme.so 00:07:02.606 LIB libspdk_blob.a 00:07:02.606 SO libspdk_blob.so.10.1 00:07:02.868 SYMLINK libspdk_blob.so 00:07:03.127 CC lib/blobfs/tree.o 00:07:03.127 CC lib/blobfs/blobfs.o 00:07:03.127 CC lib/lvol/lvol.o 00:07:04.060 LIB libspdk_bdev.a 00:07:04.318 LIB libspdk_lvol.a 00:07:04.318 SO libspdk_bdev.so.14.0 00:07:04.318 LIB libspdk_blobfs.a 00:07:04.318 SO libspdk_lvol.so.9.1 00:07:04.318 SO libspdk_blobfs.so.9.0 00:07:04.318 SYMLINK libspdk_lvol.so 00:07:04.318 SYMLINK libspdk_blobfs.so 00:07:04.318 SYMLINK libspdk_bdev.so 00:07:04.576 CC lib/nvmf/ctrlr.o 00:07:04.577 CC lib/scsi/dev.o 00:07:04.577 CC lib/nbd/nbd.o 00:07:04.577 CC lib/scsi/lun.o 00:07:04.577 CC lib/nvmf/ctrlr_discovery.o 00:07:04.577 CC lib/nvmf/ctrlr_bdev.o 00:07:04.577 CC lib/scsi/port.o 00:07:04.577 CC lib/nbd/nbd_rpc.o 00:07:04.577 CC lib/ftl/ftl_core.o 00:07:04.577 CC lib/ublk/ublk.o 00:07:04.836 CC lib/ublk/ublk_rpc.o 00:07:04.836 CC lib/ftl/ftl_init.o 00:07:04.836 CC lib/ftl/ftl_layout.o 00:07:05.093 CC lib/ftl/ftl_debug.o 00:07:05.093 CC lib/scsi/scsi.o 00:07:05.093 CC lib/ftl/ftl_io.o 00:07:05.093 CC lib/ftl/ftl_sb.o 00:07:05.353 CC lib/ftl/ftl_l2p.o 00:07:05.353 CC lib/nvmf/subsystem.o 00:07:05.353 CC lib/scsi/scsi_bdev.o 00:07:05.353 LIB libspdk_nbd.a 00:07:05.353 SO libspdk_nbd.so.6.0 00:07:05.353 SYMLINK libspdk_nbd.so 00:07:05.353 CC lib/scsi/scsi_pr.o 00:07:05.353 CC lib/nvmf/nvmf.o 00:07:05.353 CC lib/nvmf/nvmf_rpc.o 00:07:05.353 CC lib/ftl/ftl_l2p_flat.o 00:07:05.611 CC lib/nvmf/transport.o 00:07:05.611 LIB libspdk_ublk.a 00:07:05.611 SO libspdk_ublk.so.2.0 00:07:05.611 SYMLINK libspdk_ublk.so 00:07:05.611 CC lib/ftl/ftl_nv_cache.o 00:07:05.611 CC lib/ftl/ftl_band.o 00:07:05.869 CC lib/scsi/scsi_rpc.o 00:07:05.869 CC lib/scsi/task.o 00:07:05.869 CC lib/nvmf/tcp.o 00:07:06.127 CC lib/nvmf/rdma.o 00:07:06.127 LIB libspdk_scsi.a 00:07:06.127 SO libspdk_scsi.so.8.0 00:07:06.127 CC lib/ftl/ftl_band_ops.o 00:07:06.386 SYMLINK libspdk_scsi.so 00:07:06.386 CC lib/ftl/ftl_writer.o 00:07:06.386 CC lib/ftl/ftl_rq.o 00:07:06.644 CC lib/iscsi/conn.o 00:07:06.644 CC lib/iscsi/init_grp.o 00:07:06.644 CC lib/iscsi/iscsi.o 00:07:06.644 CC lib/ftl/ftl_reloc.o 00:07:06.902 CC lib/iscsi/md5.o 00:07:06.902 CC lib/iscsi/param.o 00:07:07.160 CC lib/vhost/vhost.o 00:07:07.160 CC lib/iscsi/portal_grp.o 00:07:07.160 CC lib/vhost/vhost_rpc.o 00:07:07.160 CC lib/ftl/ftl_l2p_cache.o 00:07:07.160 CC lib/iscsi/tgt_node.o 00:07:07.418 CC lib/iscsi/iscsi_subsystem.o 00:07:07.418 CC lib/iscsi/iscsi_rpc.o 00:07:07.418 CC lib/ftl/ftl_p2l.o 00:07:07.984 CC lib/vhost/vhost_scsi.o 00:07:07.984 CC lib/iscsi/task.o 00:07:07.984 CC lib/ftl/mngt/ftl_mngt.o 00:07:07.984 CC lib/vhost/vhost_blk.o 00:07:07.984 CC lib/vhost/rte_vhost_user.o 00:07:07.984 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:07.984 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:08.243 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:08.243 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:08.243 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:08.243 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:08.243 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:08.502 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:08.502 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:08.502 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:08.761 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:08.761 LIB libspdk_iscsi.a 00:07:08.761 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:08.761 CC lib/ftl/utils/ftl_conf.o 
00:07:08.761 CC lib/ftl/utils/ftl_md.o 00:07:08.761 SO libspdk_iscsi.so.7.0 00:07:09.019 CC lib/ftl/utils/ftl_mempool.o 00:07:09.019 CC lib/ftl/utils/ftl_bitmap.o 00:07:09.019 CC lib/ftl/utils/ftl_property.o 00:07:09.019 SYMLINK libspdk_iscsi.so 00:07:09.019 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:09.019 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:09.277 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:09.277 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:09.277 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:09.277 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:09.277 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:09.277 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:09.277 LIB libspdk_vhost.a 00:07:09.535 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:09.535 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:09.535 CC lib/ftl/base/ftl_base_dev.o 00:07:09.535 CC lib/ftl/base/ftl_base_bdev.o 00:07:09.535 SO libspdk_vhost.so.7.1 00:07:09.535 CC lib/ftl/ftl_trace.o 00:07:09.535 SYMLINK libspdk_vhost.so 00:07:09.794 LIB libspdk_nvmf.a 00:07:09.794 LIB libspdk_ftl.a 00:07:09.794 SO libspdk_nvmf.so.17.0 00:07:10.051 SYMLINK libspdk_nvmf.so 00:07:10.051 SO libspdk_ftl.so.8.0 00:07:10.615 SYMLINK libspdk_ftl.so 00:07:10.615 CC module/env_dpdk/env_dpdk_rpc.o 00:07:10.874 CC module/accel/error/accel_error.o 00:07:10.874 CC module/sock/posix/posix.o 00:07:10.874 CC module/accel/ioat/accel_ioat.o 00:07:10.874 CC module/accel/dsa/accel_dsa.o 00:07:10.874 CC module/accel/iaa/accel_iaa.o 00:07:10.874 CC module/scheduler/gscheduler/gscheduler.o 00:07:10.874 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:10.874 CC module/blob/bdev/blob_bdev.o 00:07:10.874 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:10.874 LIB libspdk_env_dpdk_rpc.a 00:07:10.874 SO libspdk_env_dpdk_rpc.so.5.0 00:07:11.132 CC module/accel/iaa/accel_iaa_rpc.o 00:07:11.132 SYMLINK libspdk_env_dpdk_rpc.so 00:07:11.132 CC module/accel/ioat/accel_ioat_rpc.o 00:07:11.132 LIB libspdk_scheduler_dynamic.a 00:07:11.132 LIB libspdk_scheduler_gscheduler.a 00:07:11.132 LIB libspdk_scheduler_dpdk_governor.a 00:07:11.132 SO libspdk_scheduler_dynamic.so.3.0 00:07:11.132 SO libspdk_scheduler_gscheduler.so.3.0 00:07:11.132 SO libspdk_scheduler_dpdk_governor.so.3.0 00:07:11.132 CC module/accel/error/accel_error_rpc.o 00:07:11.132 CC module/accel/dsa/accel_dsa_rpc.o 00:07:11.132 SYMLINK libspdk_scheduler_dynamic.so 00:07:11.132 SYMLINK libspdk_scheduler_gscheduler.so 00:07:11.132 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:11.132 LIB libspdk_accel_iaa.a 00:07:11.132 LIB libspdk_blob_bdev.a 00:07:11.132 LIB libspdk_accel_ioat.a 00:07:11.132 SO libspdk_blob_bdev.so.10.1 00:07:11.132 SO libspdk_accel_ioat.so.5.0 00:07:11.390 SO libspdk_accel_iaa.so.2.0 00:07:11.390 LIB libspdk_accel_error.a 00:07:11.390 LIB libspdk_accel_dsa.a 00:07:11.390 SYMLINK libspdk_accel_ioat.so 00:07:11.390 SO libspdk_accel_error.so.1.0 00:07:11.390 SYMLINK libspdk_blob_bdev.so 00:07:11.390 SYMLINK libspdk_accel_iaa.so 00:07:11.390 SO libspdk_accel_dsa.so.4.0 00:07:11.390 SYMLINK libspdk_accel_error.so 00:07:11.390 SYMLINK libspdk_accel_dsa.so 00:07:11.648 CC module/blobfs/bdev/blobfs_bdev.o 00:07:11.648 CC module/bdev/nvme/bdev_nvme.o 00:07:11.648 CC module/bdev/lvol/vbdev_lvol.o 00:07:11.648 CC module/bdev/malloc/bdev_malloc.o 00:07:11.648 CC module/bdev/gpt/gpt.o 00:07:11.648 CC module/bdev/delay/vbdev_delay.o 00:07:11.648 CC module/bdev/error/vbdev_error.o 00:07:11.648 CC module/bdev/null/bdev_null.o 00:07:11.648 CC module/bdev/passthru/vbdev_passthru.o 00:07:11.906 LIB libspdk_sock_posix.a 00:07:11.906 SO 
libspdk_sock_posix.so.5.0 00:07:11.906 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:11.906 CC module/bdev/null/bdev_null_rpc.o 00:07:11.907 CC module/bdev/gpt/vbdev_gpt.o 00:07:11.907 SYMLINK libspdk_sock_posix.so 00:07:11.907 CC module/bdev/error/vbdev_error_rpc.o 00:07:11.907 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:11.907 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:12.164 LIB libspdk_bdev_null.a 00:07:12.164 LIB libspdk_bdev_error.a 00:07:12.164 SO libspdk_bdev_null.so.5.0 00:07:12.164 SO libspdk_bdev_error.so.5.0 00:07:12.164 LIB libspdk_bdev_delay.a 00:07:12.164 LIB libspdk_blobfs_bdev.a 00:07:12.164 SYMLINK libspdk_bdev_null.so 00:07:12.164 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:12.164 SYMLINK libspdk_bdev_error.so 00:07:12.164 SO libspdk_blobfs_bdev.so.5.0 00:07:12.164 SO libspdk_bdev_delay.so.5.0 00:07:12.422 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:12.422 SYMLINK libspdk_blobfs_bdev.so 00:07:12.422 SYMLINK libspdk_bdev_delay.so 00:07:12.422 CC module/bdev/split/vbdev_split.o 00:07:12.422 CC module/bdev/raid/bdev_raid.o 00:07:12.422 CC module/bdev/raid/bdev_raid_rpc.o 00:07:12.422 LIB libspdk_bdev_lvol.a 00:07:12.422 LIB libspdk_bdev_gpt.a 00:07:12.422 SO libspdk_bdev_lvol.so.5.0 00:07:12.422 LIB libspdk_bdev_passthru.a 00:07:12.422 CC module/bdev/aio/bdev_aio.o 00:07:12.422 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:12.422 SO libspdk_bdev_gpt.so.5.0 00:07:12.422 SO libspdk_bdev_passthru.so.5.0 00:07:12.422 LIB libspdk_bdev_malloc.a 00:07:12.422 SYMLINK libspdk_bdev_lvol.so 00:07:12.422 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:12.422 SO libspdk_bdev_malloc.so.5.0 00:07:12.680 SYMLINK libspdk_bdev_gpt.so 00:07:12.680 CC module/bdev/split/vbdev_split_rpc.o 00:07:12.680 SYMLINK libspdk_bdev_passthru.so 00:07:12.680 SYMLINK libspdk_bdev_malloc.so 00:07:12.680 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:12.680 CC module/bdev/nvme/nvme_rpc.o 00:07:12.680 CC module/bdev/ftl/bdev_ftl.o 00:07:12.680 CC module/bdev/raid/bdev_raid_sb.o 00:07:12.680 CC module/bdev/iscsi/bdev_iscsi.o 00:07:12.937 LIB libspdk_bdev_split.a 00:07:12.937 CC module/bdev/aio/bdev_aio_rpc.o 00:07:12.937 SO libspdk_bdev_split.so.5.0 00:07:12.937 LIB libspdk_bdev_zone_block.a 00:07:12.937 SYMLINK libspdk_bdev_split.so 00:07:12.937 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:12.937 SO libspdk_bdev_zone_block.so.5.0 00:07:12.937 LIB libspdk_bdev_aio.a 00:07:12.937 SYMLINK libspdk_bdev_zone_block.so 00:07:12.937 CC module/bdev/nvme/bdev_mdns_client.o 00:07:13.194 SO libspdk_bdev_aio.so.5.0 00:07:13.194 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:13.194 CC module/bdev/nvme/vbdev_opal.o 00:07:13.194 SYMLINK libspdk_bdev_aio.so 00:07:13.194 CC module/bdev/raid/raid1.o 00:07:13.194 CC module/bdev/raid/raid0.o 00:07:13.194 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:13.194 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:13.194 LIB libspdk_bdev_iscsi.a 00:07:13.194 SO libspdk_bdev_iscsi.so.5.0 00:07:13.451 SYMLINK libspdk_bdev_iscsi.so 00:07:13.451 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:13.451 LIB libspdk_bdev_ftl.a 00:07:13.451 SO libspdk_bdev_ftl.so.5.0 00:07:13.451 CC module/bdev/raid/concat.o 00:07:13.451 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:13.451 SYMLINK libspdk_bdev_ftl.so 00:07:13.451 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:13.710 LIB libspdk_bdev_raid.a 00:07:13.710 SO libspdk_bdev_raid.so.5.0 00:07:13.968 SYMLINK libspdk_bdev_raid.so 00:07:13.968 LIB libspdk_bdev_virtio.a 00:07:13.968 SO libspdk_bdev_virtio.so.5.0 00:07:14.227 SYMLINK libspdk_bdev_virtio.so 
00:07:15.163 LIB libspdk_bdev_nvme.a 00:07:15.447 SO libspdk_bdev_nvme.so.6.0 00:07:15.447 SYMLINK libspdk_bdev_nvme.so 00:07:15.705 CC module/event/subsystems/scheduler/scheduler.o 00:07:15.705 CC module/event/subsystems/iobuf/iobuf.o 00:07:15.705 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:15.705 CC module/event/subsystems/vmd/vmd.o 00:07:15.705 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:15.705 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:15.705 CC module/event/subsystems/sock/sock.o 00:07:15.963 LIB libspdk_event_sock.a 00:07:15.963 LIB libspdk_event_iobuf.a 00:07:15.963 LIB libspdk_event_vhost_blk.a 00:07:15.963 LIB libspdk_event_scheduler.a 00:07:15.963 LIB libspdk_event_vmd.a 00:07:15.963 SO libspdk_event_sock.so.4.0 00:07:15.963 SO libspdk_event_iobuf.so.2.0 00:07:15.963 SO libspdk_event_vhost_blk.so.2.0 00:07:15.963 SO libspdk_event_scheduler.so.3.0 00:07:15.963 SO libspdk_event_vmd.so.5.0 00:07:15.963 SYMLINK libspdk_event_sock.so 00:07:15.963 SYMLINK libspdk_event_scheduler.so 00:07:15.963 SYMLINK libspdk_event_vhost_blk.so 00:07:15.963 SYMLINK libspdk_event_vmd.so 00:07:15.963 SYMLINK libspdk_event_iobuf.so 00:07:16.221 CC module/event/subsystems/accel/accel.o 00:07:16.479 LIB libspdk_event_accel.a 00:07:16.479 SO libspdk_event_accel.so.5.0 00:07:16.479 SYMLINK libspdk_event_accel.so 00:07:16.737 CC module/event/subsystems/bdev/bdev.o 00:07:16.737 LIB libspdk_event_bdev.a 00:07:16.737 SO libspdk_event_bdev.so.5.0 00:07:16.994 SYMLINK libspdk_event_bdev.so 00:07:16.994 CC module/event/subsystems/nbd/nbd.o 00:07:16.994 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:16.994 CC module/event/subsystems/ublk/ublk.o 00:07:16.994 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:16.994 CC module/event/subsystems/scsi/scsi.o 00:07:17.252 LIB libspdk_event_nbd.a 00:07:17.252 LIB libspdk_event_scsi.a 00:07:17.252 SO libspdk_event_nbd.so.5.0 00:07:17.252 SO libspdk_event_scsi.so.5.0 00:07:17.252 LIB libspdk_event_ublk.a 00:07:17.252 SO libspdk_event_ublk.so.2.0 00:07:17.252 SYMLINK libspdk_event_scsi.so 00:07:17.252 SYMLINK libspdk_event_nbd.so 00:07:17.252 SYMLINK libspdk_event_ublk.so 00:07:17.510 LIB libspdk_event_nvmf.a 00:07:17.510 SO libspdk_event_nvmf.so.5.0 00:07:17.510 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:17.510 CC module/event/subsystems/iscsi/iscsi.o 00:07:17.510 SYMLINK libspdk_event_nvmf.so 00:07:17.510 LIB libspdk_event_vhost_scsi.a 00:07:17.768 LIB libspdk_event_iscsi.a 00:07:17.768 SO libspdk_event_vhost_scsi.so.2.0 00:07:17.768 SO libspdk_event_iscsi.so.5.0 00:07:17.768 SYMLINK libspdk_event_vhost_scsi.so 00:07:17.768 SYMLINK libspdk_event_iscsi.so 00:07:17.768 SO libspdk.so.5.0 00:07:17.768 SYMLINK libspdk.so 00:07:18.025 TEST_HEADER include/spdk/accel.h 00:07:18.025 TEST_HEADER include/spdk/accel_module.h 00:07:18.025 CXX app/trace/trace.o 00:07:18.025 TEST_HEADER include/spdk/assert.h 00:07:18.025 TEST_HEADER include/spdk/barrier.h 00:07:18.025 TEST_HEADER include/spdk/base64.h 00:07:18.025 TEST_HEADER include/spdk/bdev.h 00:07:18.025 TEST_HEADER include/spdk/bdev_module.h 00:07:18.025 TEST_HEADER include/spdk/bdev_zone.h 00:07:18.025 TEST_HEADER include/spdk/bit_array.h 00:07:18.025 TEST_HEADER include/spdk/bit_pool.h 00:07:18.025 TEST_HEADER include/spdk/blob_bdev.h 00:07:18.025 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:18.025 TEST_HEADER include/spdk/blobfs.h 00:07:18.025 TEST_HEADER include/spdk/blob.h 00:07:18.025 TEST_HEADER include/spdk/conf.h 00:07:18.025 TEST_HEADER include/spdk/config.h 00:07:18.025 TEST_HEADER 
include/spdk/cpuset.h 00:07:18.025 TEST_HEADER include/spdk/crc16.h 00:07:18.025 TEST_HEADER include/spdk/crc32.h 00:07:18.025 TEST_HEADER include/spdk/crc64.h 00:07:18.025 TEST_HEADER include/spdk/dif.h 00:07:18.025 TEST_HEADER include/spdk/dma.h 00:07:18.025 TEST_HEADER include/spdk/endian.h 00:07:18.025 TEST_HEADER include/spdk/env_dpdk.h 00:07:18.025 TEST_HEADER include/spdk/env.h 00:07:18.025 TEST_HEADER include/spdk/event.h 00:07:18.025 TEST_HEADER include/spdk/fd_group.h 00:07:18.025 TEST_HEADER include/spdk/fd.h 00:07:18.025 TEST_HEADER include/spdk/file.h 00:07:18.025 CC examples/accel/perf/accel_perf.o 00:07:18.025 TEST_HEADER include/spdk/ftl.h 00:07:18.025 TEST_HEADER include/spdk/gpt_spec.h 00:07:18.025 TEST_HEADER include/spdk/hexlify.h 00:07:18.025 TEST_HEADER include/spdk/histogram_data.h 00:07:18.025 TEST_HEADER include/spdk/idxd.h 00:07:18.025 CC test/event/event_perf/event_perf.o 00:07:18.025 TEST_HEADER include/spdk/idxd_spec.h 00:07:18.025 TEST_HEADER include/spdk/init.h 00:07:18.025 TEST_HEADER include/spdk/ioat.h 00:07:18.025 TEST_HEADER include/spdk/ioat_spec.h 00:07:18.025 CC test/accel/dif/dif.o 00:07:18.025 TEST_HEADER include/spdk/iscsi_spec.h 00:07:18.025 CC test/blobfs/mkfs/mkfs.o 00:07:18.025 TEST_HEADER include/spdk/json.h 00:07:18.025 TEST_HEADER include/spdk/jsonrpc.h 00:07:18.025 TEST_HEADER include/spdk/likely.h 00:07:18.284 TEST_HEADER include/spdk/log.h 00:07:18.284 TEST_HEADER include/spdk/lvol.h 00:07:18.284 TEST_HEADER include/spdk/memory.h 00:07:18.284 TEST_HEADER include/spdk/mmio.h 00:07:18.284 TEST_HEADER include/spdk/nbd.h 00:07:18.284 TEST_HEADER include/spdk/notify.h 00:07:18.284 TEST_HEADER include/spdk/nvme.h 00:07:18.284 CC test/app/bdev_svc/bdev_svc.o 00:07:18.284 TEST_HEADER include/spdk/nvme_intel.h 00:07:18.284 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:18.284 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:18.284 TEST_HEADER include/spdk/nvme_spec.h 00:07:18.284 CC test/dma/test_dma/test_dma.o 00:07:18.284 TEST_HEADER include/spdk/nvme_zns.h 00:07:18.284 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:18.284 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:18.284 TEST_HEADER include/spdk/nvmf.h 00:07:18.284 CC test/env/mem_callbacks/mem_callbacks.o 00:07:18.284 TEST_HEADER include/spdk/nvmf_spec.h 00:07:18.284 CC test/bdev/bdevio/bdevio.o 00:07:18.284 TEST_HEADER include/spdk/nvmf_transport.h 00:07:18.284 TEST_HEADER include/spdk/opal.h 00:07:18.284 TEST_HEADER include/spdk/opal_spec.h 00:07:18.284 TEST_HEADER include/spdk/pci_ids.h 00:07:18.284 TEST_HEADER include/spdk/pipe.h 00:07:18.284 TEST_HEADER include/spdk/queue.h 00:07:18.284 TEST_HEADER include/spdk/reduce.h 00:07:18.284 TEST_HEADER include/spdk/rpc.h 00:07:18.284 TEST_HEADER include/spdk/scheduler.h 00:07:18.284 TEST_HEADER include/spdk/scsi.h 00:07:18.284 TEST_HEADER include/spdk/scsi_spec.h 00:07:18.284 TEST_HEADER include/spdk/sock.h 00:07:18.284 TEST_HEADER include/spdk/stdinc.h 00:07:18.284 TEST_HEADER include/spdk/string.h 00:07:18.284 TEST_HEADER include/spdk/thread.h 00:07:18.284 TEST_HEADER include/spdk/trace.h 00:07:18.284 TEST_HEADER include/spdk/trace_parser.h 00:07:18.284 LINK event_perf 00:07:18.284 TEST_HEADER include/spdk/tree.h 00:07:18.284 TEST_HEADER include/spdk/ublk.h 00:07:18.284 TEST_HEADER include/spdk/util.h 00:07:18.284 TEST_HEADER include/spdk/uuid.h 00:07:18.284 TEST_HEADER include/spdk/version.h 00:07:18.284 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:18.284 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:18.284 TEST_HEADER include/spdk/vhost.h 
00:07:18.284 TEST_HEADER include/spdk/vmd.h 00:07:18.284 TEST_HEADER include/spdk/xor.h 00:07:18.284 TEST_HEADER include/spdk/zipf.h 00:07:18.284 CXX test/cpp_headers/accel.o 00:07:18.542 LINK mkfs 00:07:18.542 LINK bdev_svc 00:07:18.542 CC test/event/reactor/reactor.o 00:07:18.542 CXX test/cpp_headers/accel_module.o 00:07:18.542 LINK reactor 00:07:18.801 LINK bdevio 00:07:18.801 LINK dif 00:07:18.801 CXX test/cpp_headers/assert.o 00:07:18.801 LINK accel_perf 00:07:18.801 LINK spdk_trace 00:07:18.801 LINK test_dma 00:07:18.801 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:18.801 CC test/lvol/esnap/esnap.o 00:07:18.801 CC test/event/reactor_perf/reactor_perf.o 00:07:18.801 CXX test/cpp_headers/barrier.o 00:07:19.118 CC app/trace_record/trace_record.o 00:07:19.118 CC test/event/app_repeat/app_repeat.o 00:07:19.118 LINK reactor_perf 00:07:19.118 CXX test/cpp_headers/base64.o 00:07:19.118 CC examples/bdev/hello_world/hello_bdev.o 00:07:19.118 CC examples/bdev/bdevperf/bdevperf.o 00:07:19.118 CXX test/cpp_headers/bdev.o 00:07:19.118 LINK mem_callbacks 00:07:19.118 LINK app_repeat 00:07:19.375 LINK spdk_trace_record 00:07:19.375 CC test/nvme/aer/aer.o 00:07:19.375 CXX test/cpp_headers/bdev_module.o 00:07:19.375 CC test/nvme/reset/reset.o 00:07:19.375 CC test/env/vtophys/vtophys.o 00:07:19.375 LINK hello_bdev 00:07:19.375 LINK nvme_fuzz 00:07:19.633 CC test/event/scheduler/scheduler.o 00:07:19.633 CC app/nvmf_tgt/nvmf_main.o 00:07:19.633 LINK vtophys 00:07:19.633 CXX test/cpp_headers/bdev_zone.o 00:07:19.633 CC test/rpc_client/rpc_client_test.o 00:07:19.633 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:19.633 LINK reset 00:07:19.633 LINK aer 00:07:19.891 LINK nvmf_tgt 00:07:19.891 LINK scheduler 00:07:19.891 CXX test/cpp_headers/bit_array.o 00:07:19.891 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:19.891 CXX test/cpp_headers/bit_pool.o 00:07:19.891 LINK rpc_client_test 00:07:20.149 LINK env_dpdk_post_init 00:07:20.149 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:20.149 CC test/nvme/sgl/sgl.o 00:07:20.149 CC app/iscsi_tgt/iscsi_tgt.o 00:07:20.149 CC test/app/histogram_perf/histogram_perf.o 00:07:20.149 CC test/app/jsoncat/jsoncat.o 00:07:20.149 CXX test/cpp_headers/blob_bdev.o 00:07:20.149 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:20.407 LINK bdevperf 00:07:20.407 LINK histogram_perf 00:07:20.407 LINK jsoncat 00:07:20.407 CXX test/cpp_headers/blobfs_bdev.o 00:07:20.407 CC test/env/memory/memory_ut.o 00:07:20.407 LINK iscsi_tgt 00:07:20.407 CXX test/cpp_headers/blobfs.o 00:07:20.665 CXX test/cpp_headers/blob.o 00:07:20.665 LINK sgl 00:07:20.665 CC app/spdk_tgt/spdk_tgt.o 00:07:20.923 LINK vhost_fuzz 00:07:20.923 CXX test/cpp_headers/conf.o 00:07:20.923 CC examples/ioat/perf/perf.o 00:07:20.923 CC examples/blob/hello_world/hello_blob.o 00:07:20.923 CC examples/nvme/hello_world/hello_world.o 00:07:20.923 CC test/nvme/e2edp/nvme_dp.o 00:07:20.923 LINK spdk_tgt 00:07:20.923 CXX test/cpp_headers/config.o 00:07:20.923 CC test/nvme/overhead/overhead.o 00:07:21.181 CXX test/cpp_headers/cpuset.o 00:07:21.181 LINK ioat_perf 00:07:21.181 LINK hello_world 00:07:21.181 LINK hello_blob 00:07:21.181 CC app/spdk_lspci/spdk_lspci.o 00:07:21.439 CXX test/cpp_headers/crc16.o 00:07:21.439 LINK nvme_dp 00:07:21.439 CC examples/nvme/reconnect/reconnect.o 00:07:21.439 CC examples/ioat/verify/verify.o 00:07:21.439 LINK overhead 00:07:21.439 LINK spdk_lspci 00:07:21.439 CXX test/cpp_headers/crc32.o 00:07:21.439 CC examples/blob/cli/blobcli.o 00:07:21.697 CC examples/nvme/nvme_manage/nvme_manage.o 
00:07:21.697 CC test/nvme/err_injection/err_injection.o 00:07:21.697 LINK memory_ut 00:07:21.697 LINK verify 00:07:21.697 CXX test/cpp_headers/crc64.o 00:07:21.956 CC app/spdk_nvme_perf/perf.o 00:07:21.956 LINK reconnect 00:07:21.956 CXX test/cpp_headers/dif.o 00:07:21.956 LINK err_injection 00:07:22.214 CC test/env/pci/pci_ut.o 00:07:22.214 CC examples/sock/hello_world/hello_sock.o 00:07:22.214 LINK iscsi_fuzz 00:07:22.214 CXX test/cpp_headers/dma.o 00:07:22.214 CXX test/cpp_headers/endian.o 00:07:22.214 LINK blobcli 00:07:22.214 CC test/nvme/startup/startup.o 00:07:22.472 CXX test/cpp_headers/env_dpdk.o 00:07:22.472 CC examples/nvme/arbitration/arbitration.o 00:07:22.472 CC test/app/stub/stub.o 00:07:22.472 LINK nvme_manage 00:07:22.472 LINK hello_sock 00:07:22.472 LINK startup 00:07:22.472 CXX test/cpp_headers/env.o 00:07:22.472 CXX test/cpp_headers/event.o 00:07:22.731 LINK pci_ut 00:07:22.731 CXX test/cpp_headers/fd_group.o 00:07:22.731 CXX test/cpp_headers/fd.o 00:07:22.731 LINK stub 00:07:22.731 CC test/nvme/reserve/reserve.o 00:07:22.731 CC test/nvme/simple_copy/simple_copy.o 00:07:22.989 CXX test/cpp_headers/file.o 00:07:22.989 CXX test/cpp_headers/ftl.o 00:07:22.989 LINK arbitration 00:07:22.989 CC test/nvme/connect_stress/connect_stress.o 00:07:22.989 CC examples/nvme/hotplug/hotplug.o 00:07:22.989 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:22.989 LINK reserve 00:07:22.989 CXX test/cpp_headers/gpt_spec.o 00:07:22.989 LINK spdk_nvme_perf 00:07:23.275 LINK connect_stress 00:07:23.275 CC test/nvme/boot_partition/boot_partition.o 00:07:23.275 LINK simple_copy 00:07:23.275 CC test/nvme/compliance/nvme_compliance.o 00:07:23.275 LINK cmb_copy 00:07:23.275 LINK hotplug 00:07:23.275 CXX test/cpp_headers/hexlify.o 00:07:23.275 LINK boot_partition 00:07:23.275 CXX test/cpp_headers/histogram_data.o 00:07:23.275 CXX test/cpp_headers/idxd.o 00:07:23.534 CC app/spdk_nvme_discover/discovery_aer.o 00:07:23.534 CC app/spdk_nvme_identify/identify.o 00:07:23.534 CC app/spdk_top/spdk_top.o 00:07:23.534 CC examples/nvme/abort/abort.o 00:07:23.534 CXX test/cpp_headers/idxd_spec.o 00:07:23.534 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:23.534 CC test/nvme/fused_ordering/fused_ordering.o 00:07:23.534 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:23.534 LINK spdk_nvme_discover 00:07:23.534 LINK nvme_compliance 00:07:23.793 CXX test/cpp_headers/init.o 00:07:23.793 LINK pmr_persistence 00:07:23.793 LINK fused_ordering 00:07:23.793 CXX test/cpp_headers/ioat.o 00:07:23.793 LINK doorbell_aers 00:07:24.052 CC test/nvme/fdp/fdp.o 00:07:24.052 CC test/nvme/cuse/cuse.o 00:07:24.052 LINK abort 00:07:24.052 CXX test/cpp_headers/ioat_spec.o 00:07:24.052 CC app/vhost/vhost.o 00:07:24.052 CC app/spdk_dd/spdk_dd.o 00:07:24.052 CC app/fio/nvme/fio_plugin.o 00:07:24.312 CXX test/cpp_headers/iscsi_spec.o 00:07:24.312 LINK vhost 00:07:24.312 CC examples/vmd/lsvmd/lsvmd.o 00:07:24.312 LINK fdp 00:07:24.312 CXX test/cpp_headers/json.o 00:07:24.571 LINK spdk_nvme_identify 00:07:24.571 LINK lsvmd 00:07:24.571 CC app/fio/bdev/fio_plugin.o 00:07:24.571 LINK spdk_dd 00:07:24.571 LINK spdk_top 00:07:24.831 CXX test/cpp_headers/jsonrpc.o 00:07:24.831 CC test/thread/poller_perf/poller_perf.o 00:07:24.831 CC examples/vmd/led/led.o 00:07:24.831 CC examples/nvmf/nvmf/nvmf.o 00:07:24.831 CXX test/cpp_headers/likely.o 00:07:24.831 CXX test/cpp_headers/log.o 00:07:25.088 LINK led 00:07:25.088 CC examples/util/zipf/zipf.o 00:07:25.088 LINK poller_perf 00:07:25.088 CXX test/cpp_headers/lvol.o 00:07:25.088 LINK spdk_nvme 
00:07:25.088 CXX test/cpp_headers/memory.o 00:07:25.088 LINK zipf 00:07:25.347 LINK nvmf 00:07:25.347 CC examples/thread/thread/thread_ex.o 00:07:25.347 LINK cuse 00:07:25.347 LINK spdk_bdev 00:07:25.347 CC examples/idxd/perf/perf.o 00:07:25.347 CXX test/cpp_headers/mmio.o 00:07:25.347 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:25.347 CXX test/cpp_headers/nbd.o 00:07:25.347 CXX test/cpp_headers/notify.o 00:07:25.347 CXX test/cpp_headers/nvme.o 00:07:25.347 CXX test/cpp_headers/nvme_intel.o 00:07:25.605 CXX test/cpp_headers/nvme_ocssd.o 00:07:25.605 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:25.605 LINK thread 00:07:25.605 LINK interrupt_tgt 00:07:25.605 CXX test/cpp_headers/nvme_spec.o 00:07:25.605 CXX test/cpp_headers/nvme_zns.o 00:07:25.605 CXX test/cpp_headers/nvmf_cmd.o 00:07:25.863 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:25.863 LINK esnap 00:07:25.863 LINK idxd_perf 00:07:25.863 CXX test/cpp_headers/nvmf.o 00:07:25.863 CXX test/cpp_headers/nvmf_spec.o 00:07:25.863 CXX test/cpp_headers/nvmf_transport.o 00:07:25.863 CXX test/cpp_headers/opal.o 00:07:25.863 CXX test/cpp_headers/opal_spec.o 00:07:25.863 CXX test/cpp_headers/pci_ids.o 00:07:25.863 CXX test/cpp_headers/pipe.o 00:07:25.863 CXX test/cpp_headers/queue.o 00:07:25.863 CXX test/cpp_headers/reduce.o 00:07:26.121 CXX test/cpp_headers/rpc.o 00:07:26.121 CXX test/cpp_headers/scheduler.o 00:07:26.121 CXX test/cpp_headers/scsi.o 00:07:26.121 CXX test/cpp_headers/scsi_spec.o 00:07:26.121 CXX test/cpp_headers/sock.o 00:07:26.121 CXX test/cpp_headers/stdinc.o 00:07:26.121 CXX test/cpp_headers/string.o 00:07:26.121 CXX test/cpp_headers/thread.o 00:07:26.121 CXX test/cpp_headers/trace.o 00:07:26.121 CXX test/cpp_headers/trace_parser.o 00:07:26.121 CXX test/cpp_headers/tree.o 00:07:26.121 CXX test/cpp_headers/ublk.o 00:07:26.121 CXX test/cpp_headers/util.o 00:07:26.379 CXX test/cpp_headers/uuid.o 00:07:26.380 CXX test/cpp_headers/version.o 00:07:26.380 CXX test/cpp_headers/vfio_user_pci.o 00:07:26.380 CXX test/cpp_headers/vfio_user_spec.o 00:07:26.380 CXX test/cpp_headers/vhost.o 00:07:26.380 CXX test/cpp_headers/vmd.o 00:07:26.380 CXX test/cpp_headers/xor.o 00:07:26.380 CXX test/cpp_headers/zipf.o 00:07:26.637 00:07:26.637 real 1m17.631s 00:07:26.637 user 8m15.637s 00:07:26.637 sys 1m35.163s 00:07:26.638 12:27:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:07:26.638 12:27:08 -- common/autotest_common.sh@10 -- $ set +x 00:07:26.638 ************************************ 00:07:26.638 END TEST make 00:07:26.638 ************************************ 00:07:26.638 12:27:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.638 12:27:09 -- nvmf/common.sh@7 -- # uname -s 00:07:26.638 12:27:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.638 12:27:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.638 12:27:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.638 12:27:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.638 12:27:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.638 12:27:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.638 12:27:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.638 12:27:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.638 12:27:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.638 12:27:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.638 12:27:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4dd1456e-1657-4c37-b992-242c1af0be2c 00:07:26.638 12:27:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=4dd1456e-1657-4c37-b992-242c1af0be2c 00:07:26.638 12:27:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.638 12:27:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.638 12:27:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.638 12:27:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.638 12:27:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.638 12:27:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.638 12:27:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.638 12:27:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.638 12:27:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.638 12:27:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.638 12:27:09 -- paths/export.sh@5 -- # export PATH 00:07:26.638 12:27:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.638 12:27:09 -- nvmf/common.sh@46 -- # : 0 00:07:26.638 12:27:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.638 12:27:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.638 12:27:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.638 12:27:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.638 12:27:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.638 12:27:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:26.638 12:27:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.638 12:27:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.638 12:27:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:26.638 12:27:09 -- spdk/autotest.sh@32 -- # uname -s 00:07:26.638 12:27:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:26.638 12:27:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:26.638 12:27:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:26.638 12:27:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:26.638 12:27:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:26.638 12:27:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:26.896 12:27:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:26.896 12:27:09 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:26.896 12:27:09 -- spdk/autotest.sh@48 -- # udevadm_pid=47538 00:07:26.896 12:27:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:26.896 12:27:09 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:07:26.896 12:27:09 -- spdk/autotest.sh@54 -- # echo 47544 00:07:26.896 12:27:09 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:26.896 12:27:09 -- spdk/autotest.sh@56 -- # echo 47547 00:07:26.896 12:27:09 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:26.896 12:27:09 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:07:26.896 12:27:09 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:26.896 12:27:09 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:07:26.896 12:27:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:26.896 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:26.896 12:27:09 -- spdk/autotest.sh@70 -- # create_test_list 00:07:26.896 12:27:09 -- common/autotest_common.sh@736 -- # xtrace_disable 00:07:26.896 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:26.896 12:27:09 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:26.896 12:27:09 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:26.896 12:27:09 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:07:26.896 12:27:09 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:26.896 12:27:09 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:07:26.896 12:27:09 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:07:26.896 12:27:09 -- common/autotest_common.sh@1440 -- # uname 00:07:26.896 12:27:09 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:07:26.896 12:27:09 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:07:26.896 12:27:09 -- common/autotest_common.sh@1460 -- # uname 00:07:26.896 12:27:09 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:07:26.896 12:27:09 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:07:26.896 12:27:09 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:07:26.896 12:27:09 -- spdk/autotest.sh@83 -- # hash lcov 00:07:26.896 12:27:09 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:26.896 12:27:09 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:07:26.896 --rc lcov_branch_coverage=1 00:07:26.896 --rc lcov_function_coverage=1 00:07:26.896 --rc genhtml_branch_coverage=1 00:07:26.896 --rc genhtml_function_coverage=1 00:07:26.896 --rc genhtml_legend=1 00:07:26.896 --rc geninfo_all_blocks=1 00:07:26.896 ' 00:07:26.896 12:27:09 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:07:26.896 --rc lcov_branch_coverage=1 00:07:26.896 --rc lcov_function_coverage=1 00:07:26.896 --rc genhtml_branch_coverage=1 00:07:26.896 --rc genhtml_function_coverage=1 00:07:26.896 --rc genhtml_legend=1 00:07:26.896 --rc geninfo_all_blocks=1 00:07:26.896 ' 00:07:26.896 12:27:09 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:07:26.896 --rc lcov_branch_coverage=1 00:07:26.896 --rc lcov_function_coverage=1 00:07:26.896 --rc genhtml_branch_coverage=1 00:07:26.896 --rc genhtml_function_coverage=1 00:07:26.896 --rc genhtml_legend=1 00:07:26.896 --rc geninfo_all_blocks=1 00:07:26.896 --no-external' 00:07:26.896 
12:27:09 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:07:26.896 --rc lcov_branch_coverage=1 00:07:26.896 --rc lcov_function_coverage=1 00:07:26.896 --rc genhtml_branch_coverage=1 00:07:26.896 --rc genhtml_function_coverage=1 00:07:26.896 --rc genhtml_legend=1 00:07:26.896 --rc geninfo_all_blocks=1 00:07:26.896 --no-external' 00:07:26.896 12:27:09 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:07:26.896 lcov: LCOV version 1.15 00:07:26.896 12:27:09 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:36.960 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:36.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:36.960 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:36.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:36.960 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:36.960 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:07:58.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 
00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 
00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:58.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:58.905 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 
00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 
00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:58.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:58.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:59.165 12:27:41 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:07:59.165 12:27:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:59.165 12:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:59.165 12:27:41 -- spdk/autotest.sh@102 -- # rm -f 00:07:59.165 12:27:41 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:59.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:59.732 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:07:59.732 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:07:59.732 12:27:42 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:07:59.732 12:27:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:07:59.732 12:27:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:07:59.732 12:27:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:07:59.732 12:27:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:59.732 12:27:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:07:59.732 12:27:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:07:59.732 
12:27:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:59.732 12:27:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:59.732 12:27:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:59.732 12:27:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:07:59.732 12:27:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:07:59.732 12:27:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:59.732 12:27:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:59.733 12:27:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:59.733 12:27:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:07:59.733 12:27:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:07:59.733 12:27:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:59.733 12:27:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:59.733 12:27:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:07:59.733 12:27:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:07:59.733 12:27:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:07:59.733 12:27:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:59.733 12:27:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:07:59.733 12:27:42 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:07:59.733 12:27:42 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:07:59.733 12:27:42 -- spdk/autotest.sh@121 -- # grep -v p 00:07:59.733 12:27:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:07:59.733 12:27:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:07:59.733 12:27:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:07:59.733 12:27:42 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:07:59.733 12:27:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:00.013 No valid GPT data, bailing 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # pt= 00:08:00.013 12:27:42 -- scripts/common.sh@394 -- # return 1 00:08:00.013 12:27:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:00.013 1+0 records in 00:08:00.013 1+0 records out 00:08:00.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0032173 s, 326 MB/s 00:08:00.013 12:27:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:00.013 12:27:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:08:00.013 12:27:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:08:00.013 12:27:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:08:00.013 12:27:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:00.013 No valid GPT data, bailing 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # pt= 00:08:00.013 12:27:42 -- scripts/common.sh@394 -- # return 1 00:08:00.013 12:27:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:00.013 1+0 records in 00:08:00.013 1+0 records out 00:08:00.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00374474 s, 280 MB/s 00:08:00.013 12:27:42 
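get_zoned_devs above classifies every /sys/block/nvme* entry by its queue/zoned attribute; on this runner all four namespaces report none, so nothing is excluded. A standalone sketch of that check, with the way matches are recorded in the zoned_devs array simplified relative to the real helper:
  is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      # conventional (non-zoned) block devices report "none" here
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }
  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      is_block_zoned "${nvme##*/}" && zoned_devs["${nvme##*/}"]=1
  done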
-- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:00.013 12:27:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:08:00.013 12:27:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:08:00.013 12:27:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:08:00.013 12:27:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:00.013 No valid GPT data, bailing 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # pt= 00:08:00.013 12:27:42 -- scripts/common.sh@394 -- # return 1 00:08:00.013 12:27:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:00.013 1+0 records in 00:08:00.013 1+0 records out 00:08:00.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040096 s, 262 MB/s 00:08:00.013 12:27:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:00.013 12:27:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:08:00.013 12:27:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:08:00.013 12:27:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:08:00.013 12:27:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:00.013 No valid GPT data, bailing 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:00.013 12:27:42 -- scripts/common.sh@393 -- # pt= 00:08:00.013 12:27:42 -- scripts/common.sh@394 -- # return 1 00:08:00.013 12:27:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:00.013 1+0 records in 00:08:00.013 1+0 records out 00:08:00.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428892 s, 244 MB/s 00:08:00.013 12:27:42 -- spdk/autotest.sh@129 -- # sync 00:08:00.288 12:27:42 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:00.288 12:27:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:00.288 12:27:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:02.192 12:27:44 -- spdk/autotest.sh@135 -- # uname -s 00:08:02.192 12:27:44 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:08:02.192 12:27:44 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:02.192 12:27:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.192 12:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.192 12:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.192 ************************************ 00:08:02.192 START TEST setup.sh 00:08:02.193 ************************************ 00:08:02.193 12:27:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:02.193 * Looking for test storage... 
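Before the setup tests start, each namespace above is probed for an existing partition table and, if none is found, its first MiB is zeroed (the four dd runs report roughly 240-330 MB/s). The loop behaves roughly like the sketch below; treating a zero exit from spdk-gpt.py as "valid GPT present" and the exact skip/wipe control flow are assumptions, since only the traced commands are visible:
  for dev in $(ls /dev/nvme*n* | grep -v p || true); do
      # leave devices alone if they already carry GPT or any other partition table
      if "$rootdir/scripts/spdk-gpt.py" "$dev" || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
          continue
      fi
      # otherwise scrub the first MiB so stale metadata cannot influence the tests
      dd if=/dev/zero of="$dev" bs=1M count=1
  done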
00:08:02.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:02.193 12:27:44 -- setup/test-setup.sh@10 -- # uname -s 00:08:02.193 12:27:44 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:02.193 12:27:44 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:02.193 12:27:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.193 12:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.193 12:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.193 ************************************ 00:08:02.193 START TEST acl 00:08:02.193 ************************************ 00:08:02.193 12:27:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:02.193 * Looking for test storage... 00:08:02.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:02.193 12:27:44 -- setup/acl.sh@10 -- # get_zoned_devs 00:08:02.193 12:27:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:08:02.193 12:27:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:08:02.193 12:27:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:08:02.193 12:27:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:02.193 12:27:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:08:02.193 12:27:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:08:02.193 12:27:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:02.193 12:27:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:08:02.193 12:27:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:08:02.193 12:27:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:02.193 12:27:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:08:02.193 12:27:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:08:02.193 12:27:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:02.193 12:27:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:08:02.193 12:27:44 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:08:02.193 12:27:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:02.193 12:27:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:02.193 12:27:44 -- setup/acl.sh@12 -- # devs=() 00:08:02.193 12:27:44 -- setup/acl.sh@12 -- # declare -a devs 00:08:02.193 12:27:44 -- setup/acl.sh@13 -- # drivers=() 00:08:02.193 12:27:44 -- setup/acl.sh@13 -- # declare -A drivers 00:08:02.193 12:27:44 -- setup/acl.sh@51 -- # setup reset 00:08:02.193 12:27:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:02.193 12:27:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:02.759 12:27:45 -- setup/acl.sh@52 -- # collect_setup_devs 00:08:02.759 12:27:45 -- setup/acl.sh@16 -- # local dev driver 00:08:02.759 
12:27:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:02.759 12:27:45 -- setup/acl.sh@15 -- # setup output status 00:08:02.759 12:27:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:02.759 12:27:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:02.759 Hugepages 00:08:02.759 node hugesize free / total 00:08:02.759 12:27:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:02.759 12:27:45 -- setup/acl.sh@19 -- # continue 00:08:02.759 12:27:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:02.759 00:08:02.759 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:02.759 12:27:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:02.759 12:27:45 -- setup/acl.sh@19 -- # continue 00:08:02.759 12:27:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:03.017 12:27:45 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:03.017 12:27:45 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:03.017 12:27:45 -- setup/acl.sh@20 -- # continue 00:08:03.018 12:27:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:03.018 12:27:45 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:08:03.018 12:27:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:03.018 12:27:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:03.018 12:27:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:03.018 12:27:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:03.018 12:27:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:03.018 12:27:45 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:08:03.018 12:27:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:03.018 12:27:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:03.018 12:27:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:03.018 12:27:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:03.018 12:27:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:03.018 12:27:45 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:08:03.018 12:27:45 -- setup/acl.sh@54 -- # run_test denied denied 00:08:03.018 12:27:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.018 12:27:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.018 12:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:03.018 ************************************ 00:08:03.018 START TEST denied 00:08:03.018 ************************************ 00:08:03.018 12:27:45 -- common/autotest_common.sh@1104 -- # denied 00:08:03.018 12:27:45 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:08:03.018 12:27:45 -- setup/acl.sh@38 -- # setup output config 00:08:03.018 12:27:45 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:08:03.018 12:27:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:03.018 12:27:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:03.954 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:08:03.954 12:27:46 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:08:03.954 12:27:46 -- setup/acl.sh@28 -- # local dev driver 00:08:03.954 12:27:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:03.954 12:27:46 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:08:03.954 12:27:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:08:03.954 12:27:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:03.954 12:27:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 
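The denied test above exports PCI_BLOCKED=' 0000:00:06.0', reruns setup, and then verifies through sysfs that the blocked controller was left on the kernel nvme driver. A minimal sketch of that verification step; comparing only the trailing path component of the driver symlink is a simplification of the real check:
  verify() {
      local dev driver
      for dev in "$@"; do
          [[ -e /sys/bus/pci/devices/$dev ]] || return 1
          # the bound driver is exposed as a symlink under the device's sysfs node
          driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
          [[ ${driver##*/} == nvme ]] || return 1
      done
  }
  verify 0000:00:06.0   # still bound to nvme, so the block list was honoured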
00:08:03.954 12:27:46 -- setup/acl.sh@41 -- # setup reset 00:08:03.954 12:27:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:03.954 12:27:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:04.520 00:08:04.520 real 0m1.445s 00:08:04.520 user 0m0.609s 00:08:04.520 sys 0m0.784s 00:08:04.520 12:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.520 12:27:46 -- common/autotest_common.sh@10 -- # set +x 00:08:04.520 ************************************ 00:08:04.520 END TEST denied 00:08:04.520 ************************************ 00:08:04.520 12:27:46 -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:04.520 12:27:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.520 12:27:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.520 12:27:46 -- common/autotest_common.sh@10 -- # set +x 00:08:04.520 ************************************ 00:08:04.520 START TEST allowed 00:08:04.520 ************************************ 00:08:04.520 12:27:46 -- common/autotest_common.sh@1104 -- # allowed 00:08:04.520 12:27:46 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:08:04.520 12:27:46 -- setup/acl.sh@45 -- # setup output config 00:08:04.520 12:27:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:04.520 12:27:46 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:08:04.520 12:27:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:05.456 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:05.456 12:27:47 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:08:05.456 12:27:47 -- setup/acl.sh@28 -- # local dev driver 00:08:05.456 12:27:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:05.456 12:27:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:08:05.456 12:27:47 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:08:05.456 12:27:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:05.456 12:27:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:05.456 12:27:47 -- setup/acl.sh@48 -- # setup reset 00:08:05.456 12:27:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:05.456 12:27:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:06.023 00:08:06.023 real 0m1.463s 00:08:06.023 user 0m0.695s 00:08:06.023 sys 0m0.783s 00:08:06.023 12:27:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.023 12:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.023 ************************************ 00:08:06.023 END TEST allowed 00:08:06.023 ************************************ 00:08:06.023 ************************************ 00:08:06.023 END TEST acl 00:08:06.023 ************************************ 00:08:06.023 00:08:06.023 real 0m4.064s 00:08:06.023 user 0m1.829s 00:08:06.023 sys 0m2.229s 00:08:06.023 12:27:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.023 12:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.023 12:27:48 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:06.023 12:27:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.023 12:27:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.023 12:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.023 ************************************ 00:08:06.023 START TEST hugepages 00:08:06.023 ************************************ 00:08:06.023 12:27:48 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:06.023 * Looking for test storage... 00:08:06.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:06.283 12:27:48 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:06.283 12:27:48 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:06.283 12:27:48 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:06.283 12:27:48 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:06.283 12:27:48 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:06.283 12:27:48 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:06.283 12:27:48 -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:06.283 12:27:48 -- setup/common.sh@18 -- # local node= 00:08:06.283 12:27:48 -- setup/common.sh@19 -- # local var val 00:08:06.283 12:27:48 -- setup/common.sh@20 -- # local mem_f mem 00:08:06.283 12:27:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:06.283 12:27:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:06.283 12:27:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:06.283 12:27:48 -- setup/common.sh@28 -- # mapfile -t mem 00:08:06.283 12:27:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 5839316 kB' 'MemAvailable: 7362120 kB' 'Buffers: 2684 kB' 'Cached: 1736588 kB' 'SwapCached: 0 kB' 'Active: 440744 kB' 'Inactive: 1401428 kB' 'Active(anon): 113408 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 104580 kB' 'Mapped: 50820 kB' 'Shmem: 10508 kB' 'KReclaimable: 61972 kB' 'Slab: 155156 kB' 'SReclaimable: 61972 kB' 'SUnreclaim: 93184 kB' 'KernelStack: 6524 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 315668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 
12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.283 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.283 12:27:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- 
setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # continue 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # IFS=': ' 00:08:06.284 12:27:48 -- setup/common.sh@31 -- # read -r var val _ 00:08:06.284 12:27:48 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:06.284 12:27:48 -- setup/common.sh@33 -- # echo 2048 00:08:06.284 12:27:48 -- setup/common.sh@33 -- # return 0 00:08:06.284 12:27:48 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:06.284 12:27:48 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:06.284 12:27:48 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:06.284 12:27:48 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:06.284 12:27:48 
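get_meminfo above walks /proc/meminfo field by field until it reaches the requested key; here Hugepagesize resolves to 2048 (kB), which becomes the default hugepage size for the rest of the suite. A compact equivalent of that lookup, ignoring the per-node meminfo handling the real helper also supports:
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  get_meminfo Hugepagesize   # -> 2048 on this runner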
-- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:06.284 12:27:48 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:06.284 12:27:48 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:06.284 12:27:48 -- setup/hugepages.sh@207 -- # get_nodes 00:08:06.284 12:27:48 -- setup/hugepages.sh@27 -- # local node 00:08:06.284 12:27:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:06.284 12:27:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:06.284 12:27:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:06.284 12:27:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:06.284 12:27:48 -- setup/hugepages.sh@208 -- # clear_hp 00:08:06.284 12:27:48 -- setup/hugepages.sh@37 -- # local node hp 00:08:06.284 12:27:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:06.284 12:27:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:06.284 12:27:48 -- setup/hugepages.sh@41 -- # echo 0 00:08:06.284 12:27:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:06.284 12:27:48 -- setup/hugepages.sh@41 -- # echo 0 00:08:06.284 12:27:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:06.284 12:27:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:06.284 12:27:48 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:06.284 12:27:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.284 12:27:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.284 12:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.284 ************************************ 00:08:06.284 START TEST default_setup 00:08:06.284 ************************************ 00:08:06.284 12:27:48 -- common/autotest_common.sh@1104 -- # default_setup 00:08:06.284 12:27:48 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:06.284 12:27:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:06.284 12:27:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:06.284 12:27:48 -- setup/hugepages.sh@51 -- # shift 00:08:06.284 12:27:48 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:06.284 12:27:48 -- setup/hugepages.sh@52 -- # local node_ids 00:08:06.284 12:27:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:06.284 12:27:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:06.284 12:27:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:06.284 12:27:48 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:06.284 12:27:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:06.284 12:27:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:06.284 12:27:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:06.284 12:27:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:06.284 12:27:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:06.284 12:27:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:06.284 12:27:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:06.284 12:27:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:06.284 12:27:48 -- setup/hugepages.sh@73 -- # return 0 00:08:06.284 12:27:48 -- setup/hugepages.sh@137 -- # setup output 00:08:06.284 12:27:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:06.284 12:27:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:06.889 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:06.889 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:06.889 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:07.152 12:27:49 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:07.152 12:27:49 -- setup/hugepages.sh@89 -- # local node 00:08:07.152 12:27:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:07.152 12:27:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:07.152 12:27:49 -- setup/hugepages.sh@92 -- # local surp 00:08:07.152 12:27:49 -- setup/hugepages.sh@93 -- # local resv 00:08:07.152 12:27:49 -- setup/hugepages.sh@94 -- # local anon 00:08:07.152 12:27:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:07.152 12:27:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:07.152 12:27:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:07.152 12:27:49 -- setup/common.sh@18 -- # local node= 00:08:07.152 12:27:49 -- setup/common.sh@19 -- # local var val 00:08:07.152 12:27:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:07.152 12:27:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:07.152 12:27:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:07.152 12:27:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:07.152 12:27:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:07.152 12:27:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7945212 kB' 'MemAvailable: 9467816 kB' 'Buffers: 2684 kB' 'Cached: 1736576 kB' 'SwapCached: 0 kB' 'Active: 455872 kB' 'Inactive: 1401428 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 50960 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154908 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93336 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.152 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.152 12:27:49 -- setup/common.sh@31 
-- # read -r var val _ 00:08:07.152 12:27:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:07.153 12:27:49 -- setup/common.sh@33 -- # echo 0 00:08:07.153 
12:27:49 -- setup/common.sh@33 -- # return 0 00:08:07.153 12:27:49 -- setup/hugepages.sh@97 -- # anon=0 00:08:07.153 12:27:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:07.153 12:27:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:07.153 12:27:49 -- setup/common.sh@18 -- # local node= 00:08:07.153 12:27:49 -- setup/common.sh@19 -- # local var val 00:08:07.153 12:27:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:07.153 12:27:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:07.153 12:27:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:07.153 12:27:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:07.153 12:27:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:07.153 12:27:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7944964 kB' 'MemAvailable: 9467568 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455852 kB' 'Inactive: 1401428 kB' 'Active(anon): 128516 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119560 kB' 'Mapped: 50960 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154892 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93320 kB' 'KernelStack: 6480 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.153 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.153 12:27:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 
-- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.154 12:27:49 -- setup/common.sh@33 -- # echo 0 00:08:07.154 12:27:49 -- setup/common.sh@33 -- # return 0 00:08:07.154 12:27:49 -- setup/hugepages.sh@99 -- # surp=0 00:08:07.154 12:27:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:07.154 12:27:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:07.154 12:27:49 -- setup/common.sh@18 -- # local node= 00:08:07.154 12:27:49 -- setup/common.sh@19 -- # local var val 00:08:07.154 12:27:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:07.154 12:27:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:07.154 12:27:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:07.154 12:27:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:07.154 12:27:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:07.154 12:27:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7944964 kB' 'MemAvailable: 9467568 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455696 kB' 'Inactive: 1401428 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 50960 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154896 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93324 kB' 'KernelStack: 6464 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.154 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.154 12:27:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 
00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 
-- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.155 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.155 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.156 12:27:49 -- setup/common.sh@33 -- # echo 0 00:08:07.156 12:27:49 -- setup/common.sh@33 -- # return 0 00:08:07.156 12:27:49 -- setup/hugepages.sh@100 -- # resv=0 00:08:07.156 nr_hugepages=1024 00:08:07.156 12:27:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:07.156 resv_hugepages=0 00:08:07.156 12:27:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:07.156 surplus_hugepages=0 00:08:07.156 12:27:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:07.156 anon_hugepages=0 00:08:07.156 12:27:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:07.156 12:27:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:07.156 12:27:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:07.156 12:27:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:07.156 12:27:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:07.156 12:27:49 -- setup/common.sh@18 -- # local node= 00:08:07.156 12:27:49 -- setup/common.sh@19 -- # local var val 00:08:07.156 12:27:49 -- setup/common.sh@20 -- # local mem_f mem 00:08:07.156 12:27:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:07.156 12:27:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:07.156 12:27:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:07.156 12:27:49 -- setup/common.sh@28 -- # mapfile -t mem 00:08:07.156 12:27:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7944964 kB' 'MemAvailable: 9467572 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455696 kB' 'Inactive: 1401432 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401432 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119488 kB' 'Mapped: 50960 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154896 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93324 kB' 'KernelStack: 6464 kB' 'PageTables: 4464 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.156 12:27:49 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.156 12:27:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:07.156 12:27:49 -- setup/common.sh@32 -- # continue
[... xtrace: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle for each remaining /proc/meminfo field, from Inactive(file) through Unaccepted, none of which matches ...]
00:08:07.157 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:08:07.157 12:27:49 -- setup/common.sh@33 -- # echo 1024
00:08:07.157 12:27:49 -- setup/common.sh@33 -- # return 0
00:08:07.157 12:27:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:08:07.157 12:27:49 -- setup/hugepages.sh@112 -- # get_nodes
00:08:07.157 12:27:49 -- setup/hugepages.sh@27 -- # local node
00:08:07.157 12:27:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:08:07.157 12:27:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:08:07.157 12:27:49 -- setup/hugepages.sh@32 -- # no_nodes=1
00:08:07.157 12:27:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:08:07.157 12:27:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:08:07.157 12:27:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:08:07.157 12:27:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... xtrace: setup/common.sh@17-31 set get=HugePages_Surp and node=0, switch mem_f to /sys/devices/system/node/node0/meminfo, and read it into the mem array with the "Node 0 " prefixes stripped ...]
00:08:07.158 12:27:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7944964 kB' 'MemUsed: 4294148 kB' 'SwapCached: 0 kB' 'Active: 455696 kB' 'Inactive: 1401432 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401432 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1739264 kB' 'Mapped: 50960 kB' 'AnonPages: 119488 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61572 kB' 'Slab: 154896 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle walks these node0 fields from MemTotal through HugePages_Free without a match ...]
00:08:07.158 12:27:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:07.158 12:27:49 -- setup/common.sh@33 -- # echo 0
00:08:07.158 12:27:49 -- setup/common.sh@33 -- # return 0
00:08:07.158 12:27:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:08:07.158 12:27:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:08:07.158 12:27:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:08:07.158 12:27:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:08:07.158 node0=1024 expecting 1024
00:08:07.158 12:27:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:08:07.158 12:27:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:08:07.158
00:08:07.158 real 0m0.948s
00:08:07.158 user 0m0.429s
00:08:07.159 sys 0m0.468s
00:08:07.159 12:27:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:07.159 12:27:49 -- common/autotest_common.sh@10 -- # set +x
00:08:07.159 ************************************
00:08:07.159 END TEST default_setup
00:08:07.159 ************************************
00:08:07.159 12:27:49 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:08:07.159 12:27:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:08:07.159 12:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
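The get_meminfo lookups traced above all follow the same pattern: pick /proc/meminfo or, when a NUMA node is given, that node's own meminfo file, then walk it line by line until the requested field is found. Below is a minimal bash sketch of that pattern, assuming only what the xtrace itself shows; the real setup/common.sh helper uses mapfile and may differ in detail.

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # Per-node lookups read that NUMA node's own meminfo file, as in the
    # trace above (mem_f=/sys/devices/system/node/node0/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # Per-node files prefix every field with "Node N "; strip it first.
        line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # value only, e.g. 1024 or 0, as echoed in the trace
            return 0
        fi
    done < "$mem_f"
    return 1
}

# With the values in this run:
#   get_meminfo HugePages_Total    -> 1024 during default_setup, 512 after per_node_1G_alloc
#   get_meminfo HugePages_Surp 0   -> 0 on node0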
00:08:07.159 12:27:49 -- common/autotest_common.sh@10 -- # set +x
00:08:07.159 ************************************
00:08:07.159 START TEST per_node_1G_alloc
00:08:07.159 ************************************
00:08:07.159 12:27:49 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:08:07.159 12:27:49 -- setup/hugepages.sh@143 -- # local IFS=,
00:08:07.159 12:27:49 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:08:07.159 12:27:49 -- setup/hugepages.sh@49 -- # local size=1048576
00:08:07.159 12:27:49 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:08:07.159 12:27:49 -- setup/hugepages.sh@51 -- # shift
00:08:07.159 12:27:49 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:08:07.159 12:27:49 -- setup/hugepages.sh@52 -- # local node_ids
00:08:07.159 12:27:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:08:07.159 12:27:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:08:07.159 12:27:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:08:07.159 12:27:49 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:08:07.159 12:27:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:08:07.159 12:27:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:08:07.159 12:27:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:08:07.159 12:27:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:08:07.159 12:27:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:08:07.159 12:27:49 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:08:07.159 12:27:49 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:08:07.159 12:27:49 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:08:07.159 12:27:49 -- setup/hugepages.sh@73 -- # return 0
00:08:07.159 12:27:49 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:08:07.159 12:27:49 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:08:07.159 12:27:49 -- setup/hugepages.sh@146 -- # setup output
00:08:07.159 12:27:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:08:07.159 12:27:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:07.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:07.682 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:08:07.682 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:08:07.682 12:27:49 -- setup/hugepages.sh@147 -- # nr_hugepages=512
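The nr_hugepages=512 above is just the requested per-node size divided by the hugepage size reported in the meminfo dumps in this log (Hugepagesize: 2048 kB); a one-line check of that arithmetic, with illustrative variable names:

# 1 GiB requested on node 0, 2 MiB hugepages -> 512 pages (exported as NRHUGE, with HUGENODE=0)
size_kb=1048576
hugepage_kb=2048
echo $(( size_kb / hugepage_kb ))   # prints 512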
00:08:07.682 12:27:49 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:08:07.682 12:27:49 -- setup/hugepages.sh@89 -- # local node
00:08:07.682 12:27:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:08:07.682 12:27:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:08:07.682 12:27:49 -- setup/hugepages.sh@92 -- # local surp
00:08:07.682 12:27:49 -- setup/hugepages.sh@93 -- # local resv
00:08:07.682 12:27:49 -- setup/hugepages.sh@94 -- # local anon
00:08:07.682 12:27:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:08:07.682 12:27:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... xtrace: setup/common.sh@17-31 set get=AnonHugePages with no node argument, keep mem_f=/proc/meminfo, and read it into the mem array ...]
00:08:07.682 12:27:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8997608 kB' 'MemAvailable: 10520228 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456080 kB' 'Inactive: 1401444 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 50988 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154880 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93308 kB' 'KernelStack: 6456 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 331452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB'
[... xtrace: the IFS=': ' / read -r var val _ / [[ $var == AnonHugePages ]] / continue cycle walks these fields from MemTotal through HardwareCorrupted without a match ...]
00:08:07.683 12:27:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:08:07.683 12:27:49 -- setup/common.sh@33 -- # echo 0
00:08:07.683 12:27:49 -- setup/common.sh@33 -- # return 0
00:08:07.683 12:27:49 -- setup/hugepages.sh@97 -- # anon=0
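The HugePages_Surp and HugePages_Rsvd lookups that follow feed the same kind of count check already traced at setup/hugepages.sh@110 earlier in this log. A rough sketch of that accounting, reusing the hypothetical get_meminfo sketch above with this run's values in the comments (the real verify_nr_hugepages also does per-node bookkeeping):

nr_hugepages=512                        # requested for node 0 by this test
anon=$(get_meminfo AnonHugePages)       # 0 in this run; tracked separately by the verifier
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # looked up next in the trace
total=$(get_meminfo HugePages_Total)    # 512 after setup.sh allocated the pages
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage count OK"
else
    echo "hugepage count mismatch" >&2
fi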
00:08:07.683 12:27:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace: setup/common.sh@17-31 set get=HugePages_Surp with no node argument, keep mem_f=/proc/meminfo, and read it into the mem array ...]
00:08:07.683 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' [... the same /proc/meminfo snapshot as above, except 'Active: 455804 kB' 'Active(anon): 128468 kB' 'AnonPages: 119544 kB' 'Mapped: 50892 kB' 'KernelStack: 6416 kB' 'PageTables: 4320 kB' ...]
[... xtrace: the read/continue cycle walks the fields from MemTotal through HugePages_Rsvd without a match ...]
00:08:07.684 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:07.684 12:27:50 -- setup/common.sh@33 -- # echo 0
00:08:07.684 12:27:50 -- setup/common.sh@33 -- # return 0
00:08:07.684 12:27:50 -- setup/hugepages.sh@99 -- # surp=0
00:08:07.684 12:27:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace: setup/common.sh@17-31 repeat the same /proc/meminfo read for HugePages_Rsvd ...]
00:08:07.685 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' [... the same snapshot again, except 'Active: 455920 kB' 'Active(anon): 128584 kB' 'AnonPages: 119696 kB' 'Mapped: 50840 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' ...]
[... xtrace: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Rsvd ]] / continue cycle walks the snapshot fields again (MemTotal, MemFree, ... SUnreclaim, KernelStack, PageTables, and so on), still looking for HugePages_Rsvd ...]
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:07.686 12:27:50 -- setup/common.sh@33 -- # echo 0 00:08:07.686 12:27:50 -- setup/common.sh@33 -- # return 0 
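[editor's note] The trace above is setup/common.sh's get_meminfo resolving HugePages_Rsvd: it mapfiles /proc/meminfo, walks the entries with IFS=': ' read -r var val _, continues past every key that is not the one requested, and finally echoes 0. A minimal standalone sketch of that lookup pattern follows; the function name get_meminfo_sketch and its argument handling are illustrative only, not the actual setup/common.sh implementation.
# Sketch: resolve one key from /proc/meminfo or a per-node meminfo file,
# mirroring the read/continue loop traced above (assumed helper, not SPDK code).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node view exported by sysfs instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node lines carry a "Node <n> " prefix; drop it so both files
    # split the same way on "Key: value [kB]".
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; break; }
    done
}
# e.g. get_meminfo_sketch HugePages_Rsvd     -> 0 in this run
#      get_meminfo_sketch HugePages_Surp 0   -> 0 for node0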
00:08:07.686 12:27:50 -- setup/hugepages.sh@100 -- # resv=0 00:08:07.686 nr_hugepages=512 00:08:07.686 12:27:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:07.686 resv_hugepages=0 00:08:07.686 12:27:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:07.686 surplus_hugepages=0 00:08:07.686 12:27:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:07.686 anon_hugepages=0 00:08:07.686 12:27:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:07.686 12:27:50 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:07.686 12:27:50 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:07.686 12:27:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:07.686 12:27:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:07.686 12:27:50 -- setup/common.sh@18 -- # local node= 00:08:07.686 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:07.686 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:07.686 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:07.686 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:07.686 12:27:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:07.686 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:07.686 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.686 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.686 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8997608 kB' 'MemAvailable: 10520228 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455864 kB' 'Inactive: 1401444 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119684 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154868 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93296 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 331452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:07.686 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 
12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.687 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.687 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 
12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:07.688 12:27:50 -- setup/common.sh@33 -- # echo 512 00:08:07.688 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:07.688 12:27:50 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:07.688 12:27:50 -- setup/hugepages.sh@112 -- # get_nodes 00:08:07.688 12:27:50 -- setup/hugepages.sh@27 -- # local node 00:08:07.688 12:27:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:07.688 12:27:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:07.688 12:27:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:07.688 12:27:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:07.688 12:27:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:07.688 12:27:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:07.688 12:27:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:07.688 12:27:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:07.688 12:27:50 -- setup/common.sh@18 -- # local node=0 00:08:07.688 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:07.688 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:07.688 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
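[editor's note] At this point the trace switches to the per-node view: get_nodes has enumerated /sys/devices/system/node/node[0-9]*, recorded 512 pages for the single node, and get_meminfo is re-invoked with node=0, so mem_f is redirected from /proc/meminfo to /sys/devices/system/node/node0/meminfo just below. A rough, self-contained sketch of that per-node verification follows; the expected count of 512 matches this particular run and is otherwise an assumption.
# Sketch: per-node check in the spirit of the "node0=512 expecting 512" line
# printed further down in this trace (illustrative, not the hugepages.sh logic verbatim).
expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
    got=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node$node=$got expecting $expected"
    [[ $got -eq $expected ]] || echo "node$node mismatch" >&2
done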
00:08:07.688 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:07.688 12:27:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:07.688 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:07.688 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8997608 kB' 'MemUsed: 3241504 kB' 'SwapCached: 0 kB' 'Active: 455596 kB' 'Inactive: 1401444 kB' 'Active(anon): 128260 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1739264 kB' 'Mapped: 50840 kB' 'AnonPages: 119408 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61572 kB' 'Slab: 154860 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.688 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.688 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read 
-r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 
-- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # continue 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:07.689 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:07.689 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:07.689 12:27:50 -- setup/common.sh@33 -- # echo 0 00:08:07.689 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:07.689 12:27:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:07.689 12:27:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:07.689 12:27:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:07.689 12:27:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:07.689 node0=512 expecting 512 00:08:07.689 12:27:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:07.689 12:27:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:07.689 00:08:07.689 real 0m0.496s 00:08:07.689 user 0m0.258s 00:08:07.689 sys 0m0.269s 00:08:07.689 12:27:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.689 ************************************ 00:08:07.689 END TEST per_node_1G_alloc 00:08:07.689 ************************************ 00:08:07.689 12:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 12:27:50 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:07.689 12:27:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.689 12:27:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.689 12:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 ************************************ 00:08:07.689 START TEST even_2G_alloc 00:08:07.689 ************************************ 00:08:07.689 12:27:50 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:08:07.689 12:27:50 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:07.689 12:27:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:07.689 12:27:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:07.689 12:27:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:07.689 12:27:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:07.689 12:27:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:07.689 12:27:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:07.689 12:27:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:07.690 12:27:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:07.690 12:27:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:07.690 12:27:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:07.690 12:27:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:07.690 12:27:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:07.690 12:27:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:07.690 12:27:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:07.690 12:27:50 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=1024 00:08:07.690 12:27:50 -- setup/hugepages.sh@83 -- # : 0 00:08:07.690 12:27:50 -- setup/hugepages.sh@84 -- # : 0 00:08:07.690 12:27:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:07.690 12:27:50 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:07.690 12:27:50 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:07.690 12:27:50 -- setup/hugepages.sh@153 -- # setup output 00:08:07.690 12:27:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:07.690 12:27:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:07.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:08.211 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:08.211 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:08.211 12:27:50 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:08.211 12:27:50 -- setup/hugepages.sh@89 -- # local node 00:08:08.211 12:27:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:08.211 12:27:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:08.211 12:27:50 -- setup/hugepages.sh@92 -- # local surp 00:08:08.211 12:27:50 -- setup/hugepages.sh@93 -- # local resv 00:08:08.211 12:27:50 -- setup/hugepages.sh@94 -- # local anon 00:08:08.211 12:27:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:08.211 12:27:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:08.211 12:27:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:08.211 12:27:50 -- setup/common.sh@18 -- # local node= 00:08:08.211 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:08.211 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.211 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.211 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.211 12:27:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.211 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.211 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.211 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.211 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7947772 kB' 'MemAvailable: 9470392 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456340 kB' 'Inactive: 1401444 kB' 'Active(anon): 129004 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120112 kB' 'Mapped: 51160 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154916 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93344 kB' 'KernelStack: 6468 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 
'DirectMap1G: 9437184 kB' 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- 
setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.212 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.212 12:27:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.213 12:27:50 -- setup/common.sh@33 -- # echo 0 00:08:08.213 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:08.213 12:27:50 -- setup/hugepages.sh@97 -- # anon=0 00:08:08.213 12:27:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:08.213 12:27:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:08.213 12:27:50 -- setup/common.sh@18 -- # local node= 00:08:08.213 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:08.213 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.213 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.213 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.213 12:27:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.213 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.213 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.213 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7947772 kB' 'MemAvailable: 9470392 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455684 kB' 'Inactive: 1401444 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119720 kB' 'Mapped: 51068 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154920 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6404 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- 
setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.213 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.213 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 
00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.214 12:27:50 -- setup/common.sh@33 -- # echo 0 00:08:08.214 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:08.214 12:27:50 -- setup/hugepages.sh@99 -- # surp=0 00:08:08.214 12:27:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:08.214 12:27:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
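The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries above are the xtrace of setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it hits the requested key (the backslash-escaped right-hand side is simply how xtrace renders a quoted pattern). A minimal standalone sketch of that pattern, reconstructed from the traced commands -- the names mirror the trace, but the body is an assumption, not the real SPDK helper:

    # Sketch of the meminfo lookup traced above. Assumes `shopt -s extglob`;
    # this is a reconstruction, not the actual setup/common.sh implementation.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, prefer the per-node sysfs copy when it exists
        # (the trace shows the same check with node='' and node=0).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix ("Node 0 MemTotal: ..."); strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo HugePages_Surp     -> 0 in the run above
    #      get_meminfo HugePages_Surp 0   -> node 0's value, read from sysfs
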
00:08:08.214 12:27:50 -- setup/common.sh@18 -- # local node= 00:08:08.214 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:08.214 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.214 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.214 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.214 12:27:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.214 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.214 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.214 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7947772 kB' 'MemAvailable: 9470392 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455696 kB' 'Inactive: 1401444 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154924 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 
00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.214 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.214 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 
12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.215 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.215 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.215 12:27:50 -- setup/common.sh@33 -- # echo 0 00:08:08.216 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:08.216 12:27:50 -- setup/hugepages.sh@100 -- # resv=0 00:08:08.216 nr_hugepages=1024 00:08:08.216 12:27:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:08.216 resv_hugepages=0 00:08:08.216 12:27:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:08.216 surplus_hugepages=0 00:08:08.216 12:27:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:08.216 anon_hugepages=0 00:08:08.216 12:27:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:08.216 12:27:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:08.216 12:27:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:08.216 12:27:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:08.216 12:27:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:08.216 12:27:50 -- setup/common.sh@18 -- # local node= 00:08:08.216 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:08.216 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.216 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.216 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.216 12:27:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.216 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.216 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7947772 kB' 'MemAvailable: 9470392 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455692 kB' 'Inactive: 1401444 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154920 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 
12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.216 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.216 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 
12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.217 12:27:50 -- setup/common.sh@33 -- # echo 1024 00:08:08.217 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:08.217 12:27:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:08.217 12:27:50 -- setup/hugepages.sh@112 -- # get_nodes 00:08:08.217 12:27:50 -- setup/hugepages.sh@27 -- # local node 00:08:08.217 12:27:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:08.217 12:27:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:08.217 12:27:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:08.217 12:27:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:08.217 12:27:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:08.217 12:27:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:08.217 12:27:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:08.217 12:27:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:08.217 12:27:50 -- setup/common.sh@18 -- # local node=0 00:08:08.217 12:27:50 -- setup/common.sh@19 -- # local var val 00:08:08.217 12:27:50 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.217 12:27:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.217 12:27:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:08.217 12:27:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:08.217 12:27:50 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.217 12:27:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7948492 kB' 'MemUsed: 4290620 kB' 'SwapCached: 0 kB' 'Active: 455636 kB' 'Inactive: 1401444 kB' 'Active(anon): 128300 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1739264 kB' 'Mapped: 50840 kB' 'AnonPages: 119704 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61572 kB' 'Slab: 154920 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # 
continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.217 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.217 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # continue 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.218 12:27:50 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.218 12:27:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.218 12:27:50 -- setup/common.sh@33 -- # echo 0 00:08:08.218 12:27:50 -- setup/common.sh@33 -- # return 0 00:08:08.218 12:27:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:08.218 12:27:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:08.218 12:27:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:08.218 12:27:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:08.218 node0=1024 expecting 1024 00:08:08.218 12:27:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
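At this point the even_2G_alloc pass has every counter it needs (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total, plus the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary echoed earlier) and closes the loop: the global total must equal the requested count plus surplus and reserved pages, and each NUMA node must report the count it was assigned. A condensed, hypothetical rendition of that bookkeeping, reusing the get_meminfo sketch above -- variable names are assumptions; the real logic is the setup/hugepages.sh being traced here:

    # Hypothetical condensation of the checks traced above (hugepages.sh@107..@130).
    # $1 is the requested count -- 1024 for even_2G_alloc. Assumes `shopt -s extglob`.
    verify_nr_hugepages_sketch() {
        local nr_hugepages=$1 node
        local anon surp resv total
        local -a nodes_sys nodes_test

        anon=$(get_meminfo AnonHugePages)      # 0 kB in this run (checked separately)
        surp=$(get_meminfo HugePages_Surp)     # 0
        resv=$(get_meminfo HugePages_Rsvd)     # 0
        total=$(get_meminfo HugePages_Total)   # 1024

        # Global identity: here 1024 == 1024 + 0 + 0.
        (( total == nr_hugepages + surp + resv )) || return 1

        # Per-node identity: what sysfs reports vs. what the test expects.
        for node in /sys/devices/system/node/node+([0-9]); do
            node=${node##*node}
            nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
            nodes_test[node]=$(( nr_hugepages + resv + $(get_meminfo HugePages_Surp "$node") ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }

With the values in this run (total 1024, a single node, surplus and reserved both 0), node0=1024 matches the expectation, which is exactly the [[ 1024 == 1024 ]] comparison and the hand-off to odd_alloc (nr_hugepages=1025, HUGEMEM=2049) traced below.
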
00:08:08.218 12:27:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:08.218 00:08:08.218 real 0m0.514s 00:08:08.218 user 0m0.257s 00:08:08.218 sys 0m0.287s 00:08:08.218 12:27:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.218 12:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 ************************************ 00:08:08.218 END TEST even_2G_alloc 00:08:08.218 ************************************ 00:08:08.218 12:27:50 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:08.218 12:27:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.218 12:27:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.218 12:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:08.218 ************************************ 00:08:08.218 START TEST odd_alloc 00:08:08.218 ************************************ 00:08:08.218 12:27:50 -- common/autotest_common.sh@1104 -- # odd_alloc 00:08:08.218 12:27:50 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:08.218 12:27:50 -- setup/hugepages.sh@49 -- # local size=2098176 00:08:08.218 12:27:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:08.218 12:27:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:08.218 12:27:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:08.218 12:27:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:08.219 12:27:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:08.219 12:27:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:08.219 12:27:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:08.219 12:27:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:08.219 12:27:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:08.219 12:27:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:08.219 12:27:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:08.219 12:27:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:08.219 12:27:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:08.219 12:27:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:08.219 12:27:50 -- setup/hugepages.sh@83 -- # : 0 00:08:08.219 12:27:50 -- setup/hugepages.sh@84 -- # : 0 00:08:08.219 12:27:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:08.219 12:27:50 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:08.219 12:27:50 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:08.219 12:27:50 -- setup/hugepages.sh@160 -- # setup output 00:08:08.219 12:27:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:08.219 12:27:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:08.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:08.791 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:08.791 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:08.791 12:27:51 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:08.791 12:27:51 -- setup/hugepages.sh@89 -- # local node 00:08:08.791 12:27:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:08.791 12:27:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:08.791 12:27:51 -- setup/hugepages.sh@92 -- # local surp 00:08:08.791 12:27:51 -- setup/hugepages.sh@93 -- # local resv 00:08:08.791 12:27:51 -- setup/hugepages.sh@94 -- # local anon 00:08:08.791 12:27:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:08.791 12:27:51 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:08.791 12:27:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:08.791 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:08.791 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:08.791 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.791 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.791 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.791 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.791 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.791 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.791 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.791 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7948868 kB' 'MemAvailable: 9471488 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456136 kB' 'Inactive: 1401444 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 51244 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154912 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93340 kB' 'KernelStack: 6440 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 
-- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:08.792 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:08.792 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:08.792 12:27:51 -- setup/hugepages.sh@97 -- # anon=0 00:08:08.792 12:27:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:08.792 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:08.792 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:08.792 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:08.792 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.792 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.792 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.792 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.792 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.792 12:27:51 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7948868 kB' 'MemAvailable: 9471488 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455976 kB' 'Inactive: 1401444 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119728 kB' 'Mapped: 50892 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154936 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93364 kB' 'KernelStack: 6496 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
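Each long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue above is the same helper at work: get_meminfo prints a snapshot of /proc/meminfo (or of the per-node file when a node argument is given), then walks it field by field with IFS=': ' until the requested key matches, and returns that value. A self-contained sketch of the pattern follows; the real helper lives in test/setup/common.sh, so the body below is a simplification with illustrative names, not the script's code:

  # Hedged sketch: look one key up in /proc/meminfo or a per-node meminfo file.
  get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <n> "; strip it so the key
    # lands in the first field, then split on ':' and whitespace.
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"            # e.g. HugePages_Surp -> 0, Hugepagesize -> 2048
        return 0
      fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1                   # key not found (sketch behaviour only)
  }
  # Usage: get_meminfo_sketch HugePages_Surp 0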
00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 
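The snapshot printed just above already lets the headline numbers be cross-checked: with Hugepagesize at 2048 kB and HugePages_Total at 1025, the Hugetlb line should read 1025 * 2048 kB = 2099200 kB, which is exactly what the snapshot reports. The same sanity check can be run against a live /proc/meminfo with a one-off awk (field names as in the snapshot above):

  # Expect page count times page size to equal the Hugetlb total.
  awk '/^HugePages_Total/ {n=$2} /^Hugepagesize/ {sz=$2} /^Hugetlb/ {tot=$2}
       END {print n*sz, "kB computed vs", tot, "kB reported"}' /proc/meminfo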
00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.792 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.792 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.793 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:08.793 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:08.793 12:27:51 -- setup/hugepages.sh@99 -- # surp=0 00:08:08.793 12:27:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:08.793 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:08.793 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:08.793 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:08.793 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.793 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.793 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.793 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.793 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.793 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7948868 kB' 'MemAvailable: 9471488 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455664 kB' 'Inactive: 1401444 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119452 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154936 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93364 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 
'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 
12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # 
continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.793 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.793 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- 
# IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:08.794 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:08.794 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:08.794 nr_hugepages=1025 00:08:08.794 12:27:51 -- setup/hugepages.sh@100 -- # resv=0 00:08:08.794 12:27:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:08.794 resv_hugepages=0 00:08:08.794 12:27:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:08.794 surplus_hugepages=0 00:08:08.794 anon_hugepages=0 00:08:08.794 12:27:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:08.794 12:27:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:08.794 12:27:51 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:08.794 12:27:51 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:08.794 12:27:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:08.794 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:08.794 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:08.794 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:08.794 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.794 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.794 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:08.794 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:08.794 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.794 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7948868 kB' 'MemAvailable: 9471488 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455856 kB' 'Inactive: 1401444 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119608 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154932 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93360 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 
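At this point all three lookups have come back (surp=0, resv=0, anon=0) and the script asserts that the pool it configured is fully accounted for: the HugePages_Total read back from meminfo must equal nr_hugepages plus the surplus and reserved pages, and here must also equal the 1025 pages the odd_alloc test asked for. A condensed sketch of that accounting check (variable names mirror the echoes above; this is not the verbatim verify_nr_hugepages body):

  nr_hugepages=1025                                        # requested by the odd_alloc test
  surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"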
00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': 
' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.794 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:08.794 12:27:51 -- setup/common.sh@33 -- # echo 1025 00:08:08.794 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:08.794 12:27:51 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:08.794 12:27:51 -- setup/hugepages.sh@112 -- # get_nodes 00:08:08.794 12:27:51 -- setup/hugepages.sh@27 -- # local node 00:08:08.794 12:27:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:08.794 12:27:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:08:08.794 12:27:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:08.794 12:27:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
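The long run of "continue" entries above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time with IFS=': '; every key that is not the requested one (HugePages_Total here) is skipped until the matching key echoes its value, 1025. A minimal standalone sketch of that pattern, assuming a simplified helper named get_mem rather than the exact setup/common.sh source (which reads the whole file with mapfile before scanning it):

  get_mem() {
      local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
      # Per-node statistics live under /sys/devices/system/node/node<N>/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          # Per-node files prefix every entry with "Node <N> "; strip it first
          [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
          IFS=': ' read -r var val _ <<<"$line"
          # Non-matching keys are skipped, i.e. the "continue" lines in the trace
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done <"$mem_f"
      return 1
  }
  get_mem HugePages_Total 0   # would print 1025 on this host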
00:08:08.794 12:27:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:08.794 12:27:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:08.794 12:27:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:08.794 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:08.794 12:27:51 -- setup/common.sh@18 -- # local node=0 00:08:08.794 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:08.794 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:08.794 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:08.794 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:08.794 12:27:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:08.794 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:08.794 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.794 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7948868 kB' 'MemUsed: 4290244 kB' 'SwapCached: 0 kB' 'Active: 455660 kB' 'Inactive: 1401444 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1739264 kB' 'Mapped: 50840 kB' 'AnonPages: 119444 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61572 kB' 'Slab: 154932 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # 
continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # continue 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:08.795 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:08.795 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:08.795 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:08.795 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:08.795 12:27:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:08.795 12:27:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:08.795 12:27:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:08.795 node0=1025 expecting 1025 00:08:08.795 12:27:51 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:08.795 12:27:51 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:08.795 00:08:08.795 real 0m0.526s 00:08:08.795 user 0m0.259s 00:08:08.795 sys 0m0.284s 00:08:08.795 12:27:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.795 12:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:08.795 ************************************ 00:08:08.795 END TEST odd_alloc 00:08:08.795 ************************************ 00:08:08.795 12:27:51 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:08.795 12:27:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.795 12:27:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.795 12:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:08.795 ************************************ 00:08:08.795 START TEST custom_alloc 00:08:08.795 ************************************ 00:08:08.795 12:27:51 -- common/autotest_common.sh@1104 -- # custom_alloc 00:08:08.795 12:27:51 -- setup/hugepages.sh@167 -- # local IFS=, 00:08:08.795 12:27:51 -- setup/hugepages.sh@169 -- # local node 00:08:08.795 12:27:51 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:08.795 12:27:51 -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:08.795 12:27:51 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:08.795 12:27:51 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:08:08.795 12:27:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:08.795 12:27:51 
-- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:08.795 12:27:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:08.795 12:27:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:08.795 12:27:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:08.795 12:27:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:08.795 12:27:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:08.795 12:27:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:08.795 12:27:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:08.795 12:27:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:08.795 12:27:51 -- setup/hugepages.sh@83 -- # : 0 00:08:08.795 12:27:51 -- setup/hugepages.sh@84 -- # : 0 00:08:08.795 12:27:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:08.795 12:27:51 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:08.795 12:27:51 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:08.795 12:27:51 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:08.795 12:27:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:08.795 12:27:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:08.795 12:27:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:08.795 12:27:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:08.795 12:27:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:08.795 12:27:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:08.795 12:27:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:08.795 12:27:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:08.795 12:27:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:08.795 12:27:51 -- setup/hugepages.sh@78 -- # return 0 00:08:08.795 12:27:51 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:08.795 12:27:51 -- setup/hugepages.sh@187 -- # setup output 00:08:08.795 12:27:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:08.795 12:27:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:09.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:09.366 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:09.366 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:09.366 12:27:51 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:09.366 12:27:51 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:08:09.366 12:27:51 -- setup/hugepages.sh@89 -- # local node 00:08:09.366 12:27:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:09.366 12:27:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:09.366 12:27:51 -- setup/hugepages.sh@92 -- # local surp 00:08:09.366 12:27:51 -- setup/hugepages.sh@93 -- # local resv 00:08:09.366 12:27:51 -- setup/hugepages.sh@94 -- # local anon 
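In the custom_alloc prologue just above, get_test_nr_hugepages turns the requested 1048576 kB (1 GiB) into nr_hugepages=512 and, with a single NUMA node, hands the whole allocation to node 0, which is why setup.sh is invoked with HUGENODE='nodes_hp[0]=512'. A rough sketch of that arithmetic, assuming the 2048 kB default hugepage size reported later in the meminfo dump (not the exact hugepages.sh logic):

  size_kb=1048576                                            # requested test allocation, 1 GiB
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this host
  nr_hugepages=$(( size_kb / hp_kb ))                        # 1048576 / 2048 = 512 pages
  echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"                # single node -> all 512 pages on node 0

verify_nr_hugepages then re-reads HugePages_Total, HugePages_Rsvd and HugePages_Surp from meminfo to confirm the kernel actually granted that count, which is the scanning traced below.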
00:08:09.366 12:27:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:09.366 12:27:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:09.366 12:27:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:09.366 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:09.366 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:09.366 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.366 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.366 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.366 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.366 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.366 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9002756 kB' 'MemAvailable: 10525376 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456092 kB' 'Inactive: 1401444 kB' 'Active(anon): 128756 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 50976 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154924 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6480 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 
12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.366 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.366 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.367 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:09.367 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:09.367 12:27:51 -- setup/hugepages.sh@97 -- # anon=0 00:08:09.367 12:27:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:09.367 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:09.367 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:09.367 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:09.367 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.367 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.367 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.367 12:27:51 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.367 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.367 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9002508 kB' 'MemAvailable: 10525128 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455932 kB' 'Inactive: 1401444 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154924 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 
-- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 
00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.367 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.367 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 
00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.368 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:09.368 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:09.368 12:27:51 -- setup/hugepages.sh@99 -- # surp=0 00:08:09.368 12:27:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:09.368 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:09.368 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:09.368 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:09.368 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.368 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.368 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.368 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.368 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.368 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.368 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.368 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9002508 kB' 'MemAvailable: 10525128 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455960 kB' 'Inactive: 1401444 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119720 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154924 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6448 kB' 
'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 
[setup/common.sh@31-32 xtrace, 00:08:09.368-00:08:09.369: the IFS=': ' read/continue loop steps past every /proc/meminfo key that is not HugePages_Rsvd, from MemTotal through Unaccepted] 00:08:09.369 12:27:51 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.369 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.369 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.369 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.369 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.369 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.369 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.369 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.369 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.369 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:09.369 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:09.369 12:27:51 -- setup/hugepages.sh@100 -- # resv=0 00:08:09.369 nr_hugepages=512 00:08:09.369 12:27:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:09.369 resv_hugepages=0 00:08:09.369 12:27:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:09.369 surplus_hugepages=0 00:08:09.369 12:27:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:09.369 anon_hugepages=0 00:08:09.369 12:27:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:09.369 12:27:51 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:09.369 12:27:51 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:09.369 12:27:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:09.369 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:09.369 12:27:51 -- setup/common.sh@18 -- # local node= 00:08:09.369 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:09.369 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.369 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.369 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.369 12:27:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.369 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.369 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.369 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.369 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.369 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9003124 kB' 'MemAvailable: 10525744 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455880 kB' 'Inactive: 1401444 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119632 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154924 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 
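What this trace exercises is setup/common.sh's get_meminfo helper: it loads the chosen meminfo file into an array (stripping any leading "Node N " prefix for per-node files) and walks it with IFS=': ' read until the requested key matches, then echoes that key's value. A minimal stand-alone sketch of the same pattern, for the global /proc/meminfo case only; this is a simplified illustration, not the SPDK helper itself, and get_meminfo_sketch is a hypothetical name:
  get_meminfo_sketch() {
    # print the value of one /proc/meminfo key, mimicking the traced loop
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  # get_meminfo_sketch HugePages_Total  -> prints 512 on this runner, matching the dump above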
00:08:09.369 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [setup/common.sh@31-32 xtrace, 00:08:09.369-00:08:09.370: the same read/continue loop steps past every /proc/meminfo key that is not HugePages_Total, from MemTotal through VmallocChunk] 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.370 12:27:51 -- setup/common.sh@33 -- # echo 512 00:08:09.370 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:09.370 12:27:51 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:09.370 12:27:51 -- setup/hugepages.sh@112 -- # get_nodes 00:08:09.370 12:27:51 -- setup/hugepages.sh@27 -- # local node 00:08:09.370 12:27:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:09.370 12:27:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 
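The arithmetic hugepages.sh just ran (@107-@110) is the allocation bookkeeping for this test: the 512 pages requested must equal HugePages_Total as reported by the kernel once surplus and reserved pages are folded in, and get_nodes then credits the whole pool to the only NUMA node on this VM. A sketch of that check using the values echoed in this run (variable names follow the trace):
  nr_hugepages=512   # pages requested by the custom_alloc test
  resv=0             # HugePages_Rsvd reported by get_meminfo
  surp=0             # HugePages_Surp reported by get_meminfo
  total=512          # HugePages_Total reported by get_meminfo
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
  # single-node VM: nodes_sys[0]=512, which is why the test later prints "node0=512 expecting 512"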
00:08:09.370 12:27:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:09.370 12:27:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:09.370 12:27:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:09.370 12:27:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:09.370 12:27:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:09.370 12:27:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:09.370 12:27:51 -- setup/common.sh@18 -- # local node=0 00:08:09.370 12:27:51 -- setup/common.sh@19 -- # local var val 00:08:09.370 12:27:51 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.370 12:27:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.370 12:27:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:09.370 12:27:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:09.370 12:27:51 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.370 12:27:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9003480 kB' 'MemUsed: 3235632 kB' 'SwapCached: 0 kB' 'Active: 455696 kB' 'Inactive: 1401444 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1739264 kB' 'Mapped: 50840 kB' 'AnonPages: 119500 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61572 kB' 'Slab: 154924 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.370 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.370 12:27:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.370 
12:27:51 -- setup/common.sh@32 -- # continue [setup/common.sh@31-32 xtrace, 00:08:09.370-00:08:09.371: the per-node loop finishes skipping Inactive and steps past every remaining node0 meminfo key that is not HugePages_Surp, up through ShmemPmdMapped] 00:08:09.371 12:27:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # continue 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.371 12:27:51 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.371 12:27:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.371 12:27:51 -- setup/common.sh@33 -- # echo 0 00:08:09.371 12:27:51 -- setup/common.sh@33 -- # return 0 00:08:09.371 12:27:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:09.371 12:27:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:09.371 12:27:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:09.371 12:27:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:09.371 node0=512 expecting 512 00:08:09.371 12:27:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:09.371 12:27:51 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:09.371 00:08:09.371 real 0m0.521s 00:08:09.371 user 0m0.285s 00:08:09.371 sys 0m0.268s 00:08:09.371 12:27:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.371 12:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.371 ************************************ 00:08:09.371 END TEST custom_alloc 00:08:09.371 ************************************ 00:08:09.371 12:27:51 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:09.371 12:27:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.371 12:27:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.371 12:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.371 ************************************ 00:08:09.371 START TEST no_shrink_alloc 00:08:09.371 ************************************ 00:08:09.371 12:27:51 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:08:09.371 12:27:51 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:09.371 12:27:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:09.371 12:27:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:09.371 12:27:51 -- setup/hugepages.sh@51 -- # shift 00:08:09.371 12:27:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:09.371 12:27:51 -- setup/hugepages.sh@52 -- 
# local node_ids 00:08:09.371 12:27:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:09.371 12:27:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:09.371 12:27:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:09.371 12:27:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:09.371 12:27:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:09.371 12:27:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:09.371 12:27:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:09.371 12:27:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:09.371 12:27:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:09.371 12:27:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:09.371 12:27:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:09.371 12:27:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:09.371 12:27:51 -- setup/hugepages.sh@73 -- # return 0 00:08:09.371 12:27:51 -- setup/hugepages.sh@198 -- # setup output 00:08:09.371 12:27:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:09.371 12:27:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:09.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:09.892 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:09.892 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:09.892 12:27:52 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:09.892 12:27:52 -- setup/hugepages.sh@89 -- # local node 00:08:09.892 12:27:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:09.892 12:27:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:09.892 12:27:52 -- setup/hugepages.sh@92 -- # local surp 00:08:09.892 12:27:52 -- setup/hugepages.sh@93 -- # local resv 00:08:09.892 12:27:52 -- setup/hugepages.sh@94 -- # local anon 00:08:09.892 12:27:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:09.892 12:27:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:09.892 12:27:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:09.892 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:09.892 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:09.892 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.892 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.892 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.892 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.892 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.892 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.892 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.892 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.892 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7955072 kB' 'MemAvailable: 9477692 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456440 kB' 'Inactive: 1401444 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120244 kB' 'Mapped: 50968 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154940 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93368 kB' 'KernelStack: 6520 kB' 
'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:09.892 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
[setup/common.sh@31-32 xtrace, 00:08:09.892-00:08:09.893: the read/continue loop steps past every /proc/meminfo key that is not AnonHugePages, from MemTotal through VmallocTotal] 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # read 
-r var val _ 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:09.893 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:09.893 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:09.893 12:27:52 -- setup/hugepages.sh@97 -- # anon=0 00:08:09.893 12:27:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:09.893 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:09.893 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:09.893 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:09.893 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.893 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.893 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.893 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.893 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.893 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.893 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.893 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7955072 kB' 'MemAvailable: 9477692 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456044 kB' 'Inactive: 1401444 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154940 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93368 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 
9437184 kB' 00:08:09.893 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [setup/common.sh@31-32 xtrace, 00:08:09.893-00:08:09.894: the read/continue loop steps past every /proc/meminfo key that is not HugePages_Surp, from MemTotal through VmallocChunk] 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read
-r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 
-- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.894 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:09.894 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:09.894 12:27:52 -- setup/hugepages.sh@99 -- # surp=0 00:08:09.894 12:27:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:09.894 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:09.894 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:09.894 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:09.894 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.894 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.894 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.894 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.894 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.894 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.894 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7955072 kB' 'MemAvailable: 9477692 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 455996 kB' 'Inactive: 1401444 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154940 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93368 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.894 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.894 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # 
continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 
-- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.895 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.895 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- 
setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:09.896 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:09.896 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:09.896 12:27:52 -- setup/hugepages.sh@100 -- # resv=0 00:08:09.896 nr_hugepages=1024 00:08:09.896 12:27:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:09.896 resv_hugepages=0 00:08:09.896 12:27:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:09.896 surplus_hugepages=0 00:08:09.896 12:27:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:09.896 anon_hugepages=0 00:08:09.896 12:27:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:09.896 12:27:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:09.896 12:27:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:09.896 12:27:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:09.896 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:09.896 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:09.896 12:27:52 -- 
setup/common.sh@19 -- # local var val 00:08:09.896 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.896 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.896 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:09.896 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:09.896 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.896 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7955072 kB' 'MemAvailable: 9477692 kB' 'Buffers: 2684 kB' 'Cached: 1736580 kB' 'SwapCached: 0 kB' 'Active: 456040 kB' 'Inactive: 1401444 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154936 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93364 kB' 'KernelStack: 6480 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 
12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.896 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.896 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 
12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- 
setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.897 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:09.897 12:27:52 -- setup/common.sh@33 -- # echo 1024 00:08:09.897 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:09.897 12:27:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:09.897 12:27:52 -- setup/hugepages.sh@112 -- # get_nodes 00:08:09.897 12:27:52 -- setup/hugepages.sh@27 -- # local node 00:08:09.897 12:27:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:09.897 12:27:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:09.897 12:27:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:09.897 12:27:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:09.897 12:27:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:09.897 12:27:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:09.897 12:27:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:09.897 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:09.897 12:27:52 -- setup/common.sh@18 -- # local node=0 00:08:09.897 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:09.897 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:09.897 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:09.897 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:09.897 12:27:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:09.897 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:09.897 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:09.897 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7955072 kB' 'MemUsed: 4284040 kB' 'SwapCached: 0 kB' 'Active: 455944 kB' 'Inactive: 1401444 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1739264 kB' 'Mapped: 50840 kB' 'AnonPages: 119748 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61572 kB' 'Slab: 154936 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # continue 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:09.898 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:09.898 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:09.898 
12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:09.898 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:09.898 12:27:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:09.898 12:27:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:09.898 12:27:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:09.898 12:27:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:09.898 12:27:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:09.898 node0=1024 expecting 1024 00:08:09.899 12:27:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:09.899 12:27:52 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:09.899 12:27:52 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:09.899 12:27:52 -- setup/hugepages.sh@202 -- # setup output 00:08:09.899 12:27:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:09.899 12:27:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:10.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:10.420 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:10.420 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:10.420 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:10.420 12:27:52 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:10.420 12:27:52 -- setup/hugepages.sh@89 -- # local node 00:08:10.420 12:27:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:10.420 12:27:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:10.420 12:27:52 -- setup/hugepages.sh@92 -- # local surp 00:08:10.420 12:27:52 -- setup/hugepages.sh@93 -- # local resv 00:08:10.420 12:27:52 -- setup/hugepages.sh@94 -- # local anon 00:08:10.420 12:27:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:10.420 12:27:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:10.420 12:27:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:10.420 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:10.420 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:10.420 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:10.420 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:10.420 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:10.420 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:10.420 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:10.420 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7958020 kB' 'MemAvailable: 9480644 kB' 'Buffers: 2684 kB' 'Cached: 1736584 kB' 'SwapCached: 0 kB' 'Active: 456176 kB' 'Inactive: 1401448 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120032 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154896 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93324 kB' 'KernelStack: 6456 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 
'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.420 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.420 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
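A few trace lines back, scripts/setup.sh printed "INFO: Requested 512 hugepages but 1024 already allocated on node0" after being invoked with NRHUGE=512 and CLEAR_HUGE=no. A hedged illustration of the decision that message reports (this is not the actual scripts/setup.sh code, and the sysfs path assumes the 2048 kB hugepage size shown in the meminfo dumps):

  NRHUGE=512     # pages requested for this run
  CLEAR_HUGE=no  # keep whatever is already allocated
  nr_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  allocated=$(cat "$nr_path")
  if [[ $CLEAR_HUGE == no ]] && (( allocated >= NRHUGE )); then
      # enough pages already exist, so leave the pool untouched
      echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
  else
      echo "$NRHUGE" > "$nr_path"   # (re)size the pool; needs root
  fi

Because the existing 1024 pages are kept, the verification that follows expects 1024 rather than 512.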
00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.421 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.421 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:10.421 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:10.421 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:10.421 12:27:52 -- setup/hugepages.sh@97 -- # anon=0 00:08:10.421 12:27:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:10.421 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:10.421 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:10.422 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:10.422 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:10.422 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:10.422 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:10.422 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:10.422 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:10.422 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7958020 kB' 'MemAvailable: 9480644 kB' 'Buffers: 2684 kB' 'Cached: 1736584 kB' 'SwapCached: 0 kB' 'Active: 456088 kB' 'Inactive: 1401448 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154892 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93320 kB' 'KernelStack: 6440 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 
00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 
00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.422 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.422 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.423 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:10.423 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:10.423 12:27:52 -- setup/hugepages.sh@99 -- # surp=0 00:08:10.423 12:27:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:10.423 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:10.423 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:10.423 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:10.423 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:10.423 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:10.423 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:10.423 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:10.423 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:10.423 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.423 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7958020 kB' 'MemAvailable: 9480644 kB' 'Buffers: 2684 kB' 'Cached: 1736584 kB' 'SwapCached: 0 kB' 'Active: 456128 kB' 'Inactive: 1401448 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119900 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154908 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93336 kB' 'KernelStack: 6464 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.423 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.423 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # 
continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.424 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.424 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 
12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:10.425 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:10.425 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:10.425 12:27:52 -- setup/hugepages.sh@100 -- # resv=0 00:08:10.425 nr_hugepages=1024 00:08:10.425 12:27:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:10.425 resv_hugepages=0 00:08:10.425 12:27:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:10.425 surplus_hugepages=0 00:08:10.425 12:27:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:10.425 anon_hugepages=0 00:08:10.425 12:27:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:10.425 12:27:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:10.425 12:27:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:10.425 12:27:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:10.425 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:10.425 12:27:52 -- setup/common.sh@18 -- # local node= 00:08:10.425 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:10.425 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:10.425 
12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:10.425 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:10.425 12:27:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:10.425 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:10.425 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7958020 kB' 'MemAvailable: 9480644 kB' 'Buffers: 2684 kB' 'Cached: 1736584 kB' 'SwapCached: 0 kB' 'Active: 456028 kB' 'Inactive: 1401448 kB' 'Active(anon): 128692 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50840 kB' 'Shmem: 10484 kB' 'KReclaimable: 61572 kB' 'Slab: 154904 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93332 kB' 'KernelStack: 6464 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 331584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 5048320 kB' 'DirectMap1G: 9437184 kB' 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.425 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.425 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 
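At this point verify_nr_hugepages has already read AnonHugePages, HugePages_Surp and HugePages_Rsvd (all 0) and echoed its summary, and the scan running here is fetching HugePages_Total. A hedged recap of that arithmetic, using the values echoed in the trace (the helper calls mirror the get_meminfo sketch above and are a reconstruction, not the verbatim hugepages.sh source):

  anon=0          # get_meminfo AnonHugePages  -> 0
  surp=0          # get_meminfo HugePages_Surp -> 0
  resv=0          # get_meminfo HugePages_Rsvd -> 0
  nr_hugepages=1024
  total=1024      # get_meminfo HugePages_Total -> the "echo 1024" a little further on
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  # the pool checks out only if the kernel's total equals the expected count
  # plus any surplus and reserved pages:
  (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0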
12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 
12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.426 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.426 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:10.427 12:27:52 -- setup/common.sh@33 -- # echo 1024 00:08:10.427 12:27:52 -- setup/common.sh@33 -- # return 0 00:08:10.427 12:27:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:10.427 12:27:52 -- setup/hugepages.sh@112 -- # get_nodes 00:08:10.427 12:27:52 -- setup/hugepages.sh@27 -- # local node 00:08:10.427 12:27:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:10.427 12:27:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:10.427 12:27:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:10.427 12:27:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:10.427 12:27:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:10.427 12:27:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:10.427 12:27:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:10.427 12:27:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:10.427 12:27:52 -- setup/common.sh@18 -- # local node=0 00:08:10.427 12:27:52 -- setup/common.sh@19 -- # local var val 00:08:10.427 12:27:52 -- setup/common.sh@20 -- # local mem_f mem 00:08:10.427 12:27:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:10.427 12:27:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:10.427 12:27:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:10.427 12:27:52 -- setup/common.sh@28 -- # mapfile -t mem 00:08:10.427 12:27:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7958020 kB' 'MemUsed: 4281092 kB' 'SwapCached: 0 kB' 'Active: 455964 kB' 'Inactive: 1401448 kB' 'Active(anon): 128628 kB' 'Inactive(anon): 0 kB' 'Active(file): 327336 kB' 'Inactive(file): 1401448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1739268 kB' 'Mapped: 50840 kB' 'AnonPages: 119732 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 61572 kB' 'Slab: 154904 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 93332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 
12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.427 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.427 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # continue 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # IFS=': ' 00:08:10.428 12:27:52 -- setup/common.sh@31 -- # read -r var val _ 00:08:10.428 12:27:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:10.428 12:27:52 -- setup/common.sh@33 -- # echo 0 00:08:10.428 12:27:52 -- setup/common.sh@33 -- # return 0 
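
Editor's note: the long xtrace run above is setup/common.sh's get_meminfo helper walking /proc/meminfo (and, for node 0, the per-node copy under /sys/devices/system/node/node0/meminfo) one "key: value" pair at a time, skipping every field until it reaches the requested key — HugePages_Total first, then HugePages_Surp — and echoing its value (1024 and 0 in this run). The condensed sketch below shows the same idea; it is illustrative only, and the function name get_meminfo_sketch and the sed-based "Node <n>" prefix strip are assumptions of this sketch, not the SPDK implementation (which uses mapfile and an extglob expansion instead).

# Illustrative sketch only -- not the SPDK setup/common.sh helper.
# Mirrors the trace above: pick the per-node meminfo file when a node is
# given, then scan "key: value" pairs until the requested field matches.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node <n> " prefix; drop it so both
    # files parse the same way. Every non-matching field is skipped, which
    # is why the log shows one "continue" per meminfo line before the
    # HugePages_* entry is reached.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total     -> 1024 on this VM
#      get_meminfo_sketch HugePages_Surp 0    -> 0
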
00:08:10.428 12:27:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:10.428 12:27:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:10.428 12:27:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:10.428 12:27:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:10.428 12:27:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:10.428 node0=1024 expecting 1024 00:08:10.428 12:27:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:10.428 00:08:10.428 real 0m1.060s 00:08:10.428 user 0m0.530s 00:08:10.428 sys 0m0.572s 00:08:10.428 12:27:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.428 12:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:10.428 ************************************ 00:08:10.428 END TEST no_shrink_alloc 00:08:10.428 ************************************ 00:08:10.687 12:27:52 -- setup/hugepages.sh@217 -- # clear_hp 00:08:10.687 12:27:52 -- setup/hugepages.sh@37 -- # local node hp 00:08:10.687 12:27:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:10.687 12:27:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:10.687 12:27:52 -- setup/hugepages.sh@41 -- # echo 0 00:08:10.687 12:27:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:10.687 12:27:52 -- setup/hugepages.sh@41 -- # echo 0 00:08:10.687 12:27:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:10.687 12:27:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:10.687 00:08:10.687 real 0m4.480s 00:08:10.687 user 0m2.181s 00:08:10.687 sys 0m2.386s 00:08:10.687 12:27:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.687 12:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:10.687 ************************************ 00:08:10.687 END TEST hugepages 00:08:10.687 ************************************ 00:08:10.687 12:27:52 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:10.687 12:27:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.687 12:27:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.687 12:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:10.687 ************************************ 00:08:10.687 START TEST driver 00:08:10.687 ************************************ 00:08:10.687 12:27:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:10.687 * Looking for test storage... 
00:08:10.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:10.687 12:27:53 -- setup/driver.sh@68 -- # setup reset 00:08:10.687 12:27:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:10.687 12:27:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:11.255 12:27:53 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:11.255 12:27:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.255 12:27:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.255 12:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:11.255 ************************************ 00:08:11.255 START TEST guess_driver 00:08:11.255 ************************************ 00:08:11.255 12:27:53 -- common/autotest_common.sh@1104 -- # guess_driver 00:08:11.255 12:27:53 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:11.255 12:27:53 -- setup/driver.sh@47 -- # local fail=0 00:08:11.255 12:27:53 -- setup/driver.sh@49 -- # pick_driver 00:08:11.255 12:27:53 -- setup/driver.sh@36 -- # vfio 00:08:11.255 12:27:53 -- setup/driver.sh@21 -- # local iommu_grups 00:08:11.255 12:27:53 -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:11.255 12:27:53 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:11.255 12:27:53 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:08:11.255 12:27:53 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:11.255 12:27:53 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:08:11.255 12:27:53 -- setup/driver.sh@32 -- # return 1 00:08:11.255 12:27:53 -- setup/driver.sh@38 -- # uio 00:08:11.255 12:27:53 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:08:11.255 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:08:11.255 12:27:53 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:11.255 Looking for driver=uio_pci_generic 00:08:11.255 12:27:53 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:11.255 12:27:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:11.255 12:27:53 -- setup/driver.sh@45 -- # setup output config 00:08:11.255 12:27:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:11.255 12:27:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:11.862 12:27:54 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:11.863 12:27:54 -- setup/driver.sh@58 -- # continue 00:08:11.863 12:27:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:11.863 12:27:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:11.863 12:27:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:11.863 12:27:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:11.863 12:27:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:11.863 12:27:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:11.863 12:27:54 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:12.121 12:27:54 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:12.121 12:27:54 -- setup/driver.sh@65 -- # setup reset 00:08:12.121 12:27:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:12.121 12:27:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:12.687 00:08:12.687 real 0m1.347s 00:08:12.687 user 0m0.521s 00:08:12.687 sys 0m0.832s 00:08:12.687 12:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.687 12:27:54 -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 ************************************ 00:08:12.687 END TEST guess_driver 00:08:12.687 ************************************ 00:08:12.687 00:08:12.687 real 0m1.988s 00:08:12.687 user 0m0.747s 00:08:12.687 sys 0m1.304s 00:08:12.687 12:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.687 12:27:54 -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 ************************************ 00:08:12.687 END TEST driver 00:08:12.687 ************************************ 00:08:12.687 12:27:55 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:12.687 12:27:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:12.687 12:27:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.687 12:27:55 -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 ************************************ 00:08:12.687 START TEST devices 00:08:12.687 ************************************ 00:08:12.687 12:27:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:12.687 * Looking for test storage... 00:08:12.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:12.687 12:27:55 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:12.687 12:27:55 -- setup/devices.sh@192 -- # setup reset 00:08:12.687 12:27:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:12.687 12:27:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:13.253 12:27:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:08:13.253 12:27:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:08:13.253 12:27:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:08:13.253 12:27:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:08:13.253 12:27:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:13.253 12:27:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:08:13.253 12:27:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:08:13.253 12:27:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:13.253 12:27:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:08:13.253 12:27:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:08:13.253 12:27:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:13.253 12:27:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:08:13.253 12:27:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:08:13.253 12:27:55 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:08:13.253 12:27:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:08:13.253 12:27:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:08:13.253 12:27:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:13.253 12:27:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:08:13.253 12:27:55 -- setup/devices.sh@196 -- # blocks=() 00:08:13.253 12:27:55 -- setup/devices.sh@196 -- # declare -a blocks 00:08:13.253 12:27:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:13.253 12:27:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:13.253 12:27:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:08:13.253 12:27:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:13.253 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:13.253 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:13.253 12:27:55 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:08:13.253 12:27:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:13.253 12:27:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:13.253 12:27:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:08:13.253 12:27:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:13.512 No valid GPT data, bailing 00:08:13.512 12:27:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:13.512 12:27:55 -- scripts/common.sh@393 -- # pt= 00:08:13.512 12:27:55 -- scripts/common.sh@394 -- # return 1 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:13.512 12:27:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.512 12:27:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.512 12:27:55 -- setup/common.sh@80 -- # echo 5368709120 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:13.512 12:27:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:13.512 12:27:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:08:13.512 12:27:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:13.512 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:08:13.512 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:13.512 12:27:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:13.512 12:27:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:08:13.512 12:27:55 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:08:13.512 12:27:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:08:13.512 No valid GPT data, bailing 00:08:13.512 12:27:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:13.512 12:27:55 -- scripts/common.sh@393 -- # pt= 00:08:13.512 12:27:55 -- scripts/common.sh@394 -- # return 1 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:08:13.512 12:27:55 -- setup/common.sh@76 -- # local dev=nvme1n1 00:08:13.512 12:27:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:08:13.512 12:27:55 -- setup/common.sh@80 -- # echo 4294967296 00:08:13.512 12:27:55 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:13.512 12:27:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:13.512 12:27:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:13.512 12:27:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:13.512 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:08:13.512 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:13.512 12:27:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:13.512 12:27:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:08:13.512 12:27:55 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:08:13.512 12:27:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:08:13.512 No valid GPT data, bailing 00:08:13.512 12:27:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:13.512 12:27:55 -- scripts/common.sh@393 -- # pt= 00:08:13.512 12:27:55 -- scripts/common.sh@394 -- # return 1 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:08:13.512 12:27:55 -- setup/common.sh@76 -- # local dev=nvme1n2 00:08:13.512 12:27:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:08:13.512 12:27:55 -- setup/common.sh@80 -- # echo 4294967296 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:13.512 12:27:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:13.512 12:27:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:13.512 12:27:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:13.512 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:08:13.512 12:27:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:13.512 12:27:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:13.512 12:27:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:13.512 12:27:55 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:08:13.512 12:27:55 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:08:13.512 12:27:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:08:13.512 No valid GPT data, bailing 00:08:13.512 12:27:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:13.512 12:27:56 -- scripts/common.sh@393 -- # pt= 00:08:13.512 12:27:56 -- scripts/common.sh@394 -- # return 1 00:08:13.512 12:27:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:08:13.512 12:27:56 -- setup/common.sh@76 -- # local dev=nvme1n3 00:08:13.512 12:27:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:08:13.512 12:27:56 -- setup/common.sh@80 -- # echo 4294967296 00:08:13.512 12:27:56 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:13.770 12:27:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:13.771 12:27:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:13.771 12:27:56 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:08:13.771 12:27:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:13.771 12:27:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:13.771 12:27:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.771 12:27:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.771 12:27:56 -- common/autotest_common.sh@10 -- # set +x 00:08:13.771 
************************************ 00:08:13.771 START TEST nvme_mount 00:08:13.771 ************************************ 00:08:13.771 12:27:56 -- common/autotest_common.sh@1104 -- # nvme_mount 00:08:13.771 12:27:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:13.771 12:27:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:13.771 12:27:56 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:13.771 12:27:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:13.771 12:27:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:13.771 12:27:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:13.771 12:27:56 -- setup/common.sh@40 -- # local part_no=1 00:08:13.771 12:27:56 -- setup/common.sh@41 -- # local size=1073741824 00:08:13.771 12:27:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:13.771 12:27:56 -- setup/common.sh@44 -- # parts=() 00:08:13.771 12:27:56 -- setup/common.sh@44 -- # local parts 00:08:13.771 12:27:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:13.771 12:27:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:13.771 12:27:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:13.771 12:27:56 -- setup/common.sh@46 -- # (( part++ )) 00:08:13.771 12:27:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:13.771 12:27:56 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:13.771 12:27:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:13.771 12:27:56 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:14.709 Creating new GPT entries in memory. 00:08:14.709 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:14.709 other utilities. 00:08:14.709 12:27:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:14.709 12:27:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:14.709 12:27:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:14.709 12:27:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:14.709 12:27:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:15.646 Creating new GPT entries in memory. 00:08:15.646 The operation has completed successfully. 
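
Editor's note: the nvme_mount test above prepares the disk by zapping the GPT and creating a single partition with sgdisk while scripts/sync_dev_uevents.sh waits for the matching partition uevent; the arithmetic in the trace ((size /= 4096)) turns 1073741824 into 262144 sectors, which is why the new partition spans 2048:264191. A minimal sketch of that step follows, under stated assumptions: it is not the SPDK helper, the name partition_drive_sketch is made up, and udevadm settle stands in for the uevent synchronisation the real run performs.

# Illustrative sketch of the partitioning step traced above -- not the SPDK
# setup/common.sh code, just the same shape: wipe the GPT, then create the
# requested number of partitions back to back, holding an exclusive lock on
# the disk node while sgdisk rewrites the table.
partition_drive_sketch() {
    local disk=$1 part_no=${2:-1}
    local size=$(( 1073741824 / 4096 ))   # 262144 sectors, as in the trace
    local part part_start=0 part_end=0

    sgdisk "$disk" --zap-all

    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # --new=<num>:<first>:<last>; the first pass reproduces the logged
        # command: sgdisk /dev/nvme0n1 --new=1:2048:264191
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done

    # Let the kernel emit the partition uevents before ${disk}p1 is used;
    # the real test blocks on them via sync_dev_uevents.sh instead.
    udevadm settle
}

# e.g. partition_drive_sketch /dev/nvme0n1 1
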
00:08:15.646 12:27:58 -- setup/common.sh@57 -- # (( part++ )) 00:08:15.646 12:27:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:15.646 12:27:58 -- setup/common.sh@62 -- # wait 51672 00:08:15.646 12:27:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:15.646 12:27:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:15.646 12:27:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:15.646 12:27:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:15.646 12:27:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:15.646 12:27:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:15.646 12:27:58 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:15.646 12:27:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:15.646 12:27:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:15.646 12:27:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:15.646 12:27:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:15.646 12:27:58 -- setup/devices.sh@53 -- # local found=0 00:08:15.646 12:27:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:15.646 12:27:58 -- setup/devices.sh@56 -- # : 00:08:15.646 12:27:58 -- setup/devices.sh@59 -- # local pci status 00:08:15.905 12:27:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:15.905 12:27:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:15.905 12:27:58 -- setup/devices.sh@47 -- # setup output config 00:08:15.905 12:27:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:15.905 12:27:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:15.905 12:27:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:15.905 12:27:58 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:15.905 12:27:58 -- setup/devices.sh@63 -- # found=1 00:08:15.905 12:27:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:15.905 12:27:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:15.905 12:27:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:16.164 12:27:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:16.164 12:27:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:16.424 12:27:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:16.424 12:27:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:16.424 12:27:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:16.424 12:27:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:16.424 12:27:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:16.424 12:27:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:16.424 12:27:58 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:16.424 12:27:58 -- setup/devices.sh@110 -- # cleanup_nvme 00:08:16.424 12:27:58 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:16.424 12:27:58 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:16.424 12:27:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:16.424 12:27:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:16.424 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:16.424 12:27:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:16.424 12:27:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:16.684 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.684 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.684 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:16.684 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:16.684 12:27:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:16.684 12:27:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:16.684 12:27:59 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:16.684 12:27:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:16.684 12:27:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:16.684 12:27:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:16.684 12:27:59 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:16.684 12:27:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:16.684 12:27:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:16.684 12:27:59 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:16.684 12:27:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:16.684 12:27:59 -- setup/devices.sh@53 -- # local found=0 00:08:16.684 12:27:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:16.684 12:27:59 -- setup/devices.sh@56 -- # : 00:08:16.684 12:27:59 -- setup/devices.sh@59 -- # local pci status 00:08:16.684 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:16.684 12:27:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:16.684 12:27:59 -- setup/devices.sh@47 -- # setup output config 00:08:16.684 12:27:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:16.684 12:27:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:16.943 12:27:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:16.943 12:27:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:16.943 12:27:59 -- setup/devices.sh@63 -- # found=1 00:08:16.943 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:16.943 12:27:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:16.943 
12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.202 12:27:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:17.202 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.202 12:27:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:17.202 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.202 12:27:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:17.202 12:27:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:17.202 12:27:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:17.202 12:27:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:17.202 12:27:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:17.461 12:27:59 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:17.461 12:27:59 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:08:17.461 12:27:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:17.461 12:27:59 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:17.461 12:27:59 -- setup/devices.sh@50 -- # local mount_point= 00:08:17.461 12:27:59 -- setup/devices.sh@51 -- # local test_file= 00:08:17.461 12:27:59 -- setup/devices.sh@53 -- # local found=0 00:08:17.461 12:27:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:17.461 12:27:59 -- setup/devices.sh@59 -- # local pci status 00:08:17.461 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.461 12:27:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:17.461 12:27:59 -- setup/devices.sh@47 -- # setup output config 00:08:17.461 12:27:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:17.461 12:27:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:17.461 12:27:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:17.461 12:27:59 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:17.461 12:27:59 -- setup/devices.sh@63 -- # found=1 00:08:17.461 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.461 12:27:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:17.461 12:27:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:18.028 12:28:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:18.028 12:28:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:18.028 12:28:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:18.028 12:28:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:18.028 12:28:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:18.028 12:28:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:18.028 12:28:00 -- setup/devices.sh@68 -- # return 0 00:08:18.029 12:28:00 -- setup/devices.sh@128 -- # cleanup_nvme 00:08:18.029 12:28:00 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:18.029 12:28:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:18.029 12:28:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:18.029 12:28:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:18.029 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:08:18.029 00:08:18.029 real 0m4.407s 00:08:18.029 user 0m1.043s 00:08:18.029 sys 0m1.080s 00:08:18.029 12:28:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.029 ************************************ 00:08:18.029 END TEST nvme_mount 00:08:18.029 ************************************ 00:08:18.029 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 12:28:00 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:18.029 12:28:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.029 12:28:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.029 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 ************************************ 00:08:18.029 START TEST dm_mount 00:08:18.029 ************************************ 00:08:18.029 12:28:00 -- common/autotest_common.sh@1104 -- # dm_mount 00:08:18.029 12:28:00 -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:18.029 12:28:00 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:18.029 12:28:00 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:18.029 12:28:00 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:18.029 12:28:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:18.029 12:28:00 -- setup/common.sh@40 -- # local part_no=2 00:08:18.029 12:28:00 -- setup/common.sh@41 -- # local size=1073741824 00:08:18.029 12:28:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:18.029 12:28:00 -- setup/common.sh@44 -- # parts=() 00:08:18.029 12:28:00 -- setup/common.sh@44 -- # local parts 00:08:18.029 12:28:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:18.029 12:28:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:18.029 12:28:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:18.029 12:28:00 -- setup/common.sh@46 -- # (( part++ )) 00:08:18.029 12:28:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:18.029 12:28:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:18.029 12:28:00 -- setup/common.sh@46 -- # (( part++ )) 00:08:18.029 12:28:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:18.029 12:28:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:18.029 12:28:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:18.029 12:28:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:19.407 Creating new GPT entries in memory. 00:08:19.407 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:19.407 other utilities. 00:08:19.407 12:28:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:19.407 12:28:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:19.407 12:28:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:19.407 12:28:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:19.407 12:28:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:20.344 Creating new GPT entries in memory. 00:08:20.344 The operation has completed successfully. 00:08:20.344 12:28:02 -- setup/common.sh@57 -- # (( part++ )) 00:08:20.344 12:28:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:20.344 12:28:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:08:20.344 12:28:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:20.344 12:28:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:21.280 The operation has completed successfully. 00:08:21.280 12:28:03 -- setup/common.sh@57 -- # (( part++ )) 00:08:21.280 12:28:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:21.280 12:28:03 -- setup/common.sh@62 -- # wait 52126 00:08:21.280 12:28:03 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:21.280 12:28:03 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.280 12:28:03 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:21.280 12:28:03 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:21.280 12:28:03 -- setup/devices.sh@160 -- # for t in {1..5} 00:08:21.280 12:28:03 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:21.280 12:28:03 -- setup/devices.sh@161 -- # break 00:08:21.280 12:28:03 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:21.280 12:28:03 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:21.280 12:28:03 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:21.280 12:28:03 -- setup/devices.sh@166 -- # dm=dm-0 00:08:21.280 12:28:03 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:21.280 12:28:03 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:21.280 12:28:03 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.280 12:28:03 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:21.280 12:28:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.280 12:28:03 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:21.280 12:28:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:21.280 12:28:03 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.280 12:28:03 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:21.280 12:28:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:21.280 12:28:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:21.280 12:28:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.280 12:28:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:21.280 12:28:03 -- setup/devices.sh@53 -- # local found=0 00:08:21.280 12:28:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:21.280 12:28:03 -- setup/devices.sh@56 -- # : 00:08:21.280 12:28:03 -- setup/devices.sh@59 -- # local pci status 00:08:21.280 12:28:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:21.280 12:28:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:21.280 12:28:03 -- setup/devices.sh@47 -- # setup output config 00:08:21.280 12:28:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:21.280 12:28:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:21.280 12:28:03 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:21.280 12:28:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:21.280 12:28:03 -- setup/devices.sh@63 -- # found=1 00:08:21.280 12:28:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:21.539 12:28:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:21.539 12:28:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:21.797 12:28:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:21.797 12:28:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:21.797 12:28:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:21.797 12:28:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:21.797 12:28:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:21.797 12:28:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:21.797 12:28:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.797 12:28:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:21.797 12:28:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:21.797 12:28:04 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:21.797 12:28:04 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:21.797 12:28:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:21.797 12:28:04 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:21.798 12:28:04 -- setup/devices.sh@50 -- # local mount_point= 00:08:21.798 12:28:04 -- setup/devices.sh@51 -- # local test_file= 00:08:21.798 12:28:04 -- setup/devices.sh@53 -- # local found=0 00:08:21.798 12:28:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:21.798 12:28:04 -- setup/devices.sh@59 -- # local pci status 00:08:21.798 12:28:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:21.798 12:28:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:21.798 12:28:04 -- setup/devices.sh@47 -- # setup output config 00:08:21.798 12:28:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:21.798 12:28:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:22.056 12:28:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:22.056 12:28:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:22.056 12:28:04 -- setup/devices.sh@63 -- # found=1 00:08:22.056 12:28:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:22.056 12:28:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:22.056 12:28:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:22.314 12:28:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:22.314 12:28:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:22.314 12:28:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:22.314 12:28:04 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:22.314 12:28:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:22.314 12:28:04 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:22.314 12:28:04 -- setup/devices.sh@68 -- # return 0 00:08:22.314 12:28:04 -- setup/devices.sh@187 -- # cleanup_dm 00:08:22.314 12:28:04 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:22.314 12:28:04 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:22.314 12:28:04 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:22.314 12:28:04 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:22.314 12:28:04 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:22.314 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:22.314 12:28:04 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:22.314 12:28:04 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:22.572 00:08:22.572 real 0m4.353s 00:08:22.572 user 0m0.615s 00:08:22.572 sys 0m0.677s 00:08:22.572 12:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.572 12:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:22.572 ************************************ 00:08:22.572 END TEST dm_mount 00:08:22.572 ************************************ 00:08:22.572 12:28:04 -- setup/devices.sh@1 -- # cleanup 00:08:22.572 12:28:04 -- setup/devices.sh@11 -- # cleanup_nvme 00:08:22.572 12:28:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:22.572 12:28:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:22.572 12:28:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:22.572 12:28:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:22.572 12:28:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:22.831 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:22.831 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:22.831 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:22.831 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:22.831 12:28:05 -- setup/devices.sh@12 -- # cleanup_dm 00:08:22.831 12:28:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:22.831 12:28:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:22.831 12:28:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:22.831 12:28:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:22.831 12:28:05 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:22.831 12:28:05 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:22.831 00:08:22.831 real 0m10.150s 00:08:22.831 user 0m2.250s 00:08:22.831 sys 0m2.271s 00:08:22.831 12:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.831 ************************************ 00:08:22.831 12:28:05 -- common/autotest_common.sh@10 -- # set +x 00:08:22.831 END TEST devices 00:08:22.831 ************************************ 00:08:22.831 00:08:22.831 real 0m20.936s 00:08:22.831 user 0m7.088s 00:08:22.831 sys 0m8.353s 00:08:22.831 12:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.831 12:28:05 -- common/autotest_common.sh@10 -- # set +x 00:08:22.831 ************************************ 00:08:22.831 END TEST setup.sh 00:08:22.831 ************************************ 00:08:22.831 12:28:05 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:23.089 Hugepages 00:08:23.089 node hugesize free / total 00:08:23.089 node0 1048576kB 0 / 0 00:08:23.089 node0 2048kB 2048 / 2048 00:08:23.089 00:08:23.089 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:23.089 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:23.089 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:23.089 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:23.089 12:28:05 -- spdk/autotest.sh@141 -- # uname -s 00:08:23.089 12:28:05 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:08:23.089 12:28:05 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:08:23.089 12:28:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:23.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.914 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.914 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.914 12:28:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:25.289 12:28:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:25.289 12:28:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:25.289 12:28:07 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:08:25.289 12:28:07 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:08:25.289 12:28:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:25.289 12:28:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:25.289 12:28:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:25.289 12:28:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:25.289 12:28:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:25.289 12:28:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:25.289 12:28:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:25.289 12:28:07 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:25.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:25.289 Waiting for block devices as requested 00:08:25.289 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:25.547 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:25.547 12:28:07 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:08:25.547 12:28:07 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:08:25.547 12:28:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:08:25.547 12:28:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:08:25.547 12:28:07 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # grep oacs 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:08:25.547 12:28:07 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:08:25.547 12:28:07 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:08:25.547 12:28:07 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:08:25.547 12:28:07 -- common/autotest_common.sh@1542 -- # continue 00:08:25.547 12:28:07 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:08:25.547 12:28:07 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:08:25.547 12:28:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:08:25.547 12:28:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:08:25.547 12:28:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # grep oacs 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:08:25.547 12:28:07 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:08:25.547 12:28:07 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:08:25.547 12:28:07 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:08:25.547 12:28:07 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:08:25.547 12:28:08 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:08:25.547 12:28:08 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:08:25.547 12:28:08 -- common/autotest_common.sh@1542 -- # continue 00:08:25.547 12:28:08 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:08:25.547 12:28:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:25.547 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:25.547 12:28:08 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:08:25.547 12:28:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:25.547 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:25.547 12:28:08 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:26.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:26.483 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.483 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:08:26.483 12:28:08 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:08:26.483 12:28:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:26.483 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:26.483 12:28:08 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:08:26.483 12:28:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:26.483 12:28:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:26.483 12:28:08 -- common/autotest_common.sh@1562 -- # bdfs=() 00:08:26.483 12:28:08 -- common/autotest_common.sh@1562 -- # local bdfs 00:08:26.483 12:28:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:26.483 12:28:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:26.483 12:28:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:26.483 12:28:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:26.483 12:28:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:26.483 12:28:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:26.483 12:28:08 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:26.483 12:28:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:26.483 12:28:08 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:08:26.483 12:28:08 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:08:26.483 12:28:08 -- common/autotest_common.sh@1565 -- # device=0x0010 00:08:26.483 12:28:08 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:26.483 12:28:08 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:08:26.483 12:28:08 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:08:26.483 12:28:08 -- common/autotest_common.sh@1565 -- # device=0x0010 00:08:26.483 12:28:08 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:26.483 12:28:08 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:08:26.483 12:28:08 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:26.483 12:28:08 -- common/autotest_common.sh@1578 -- # return 0 00:08:26.483 12:28:08 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:08:26.483 12:28:08 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:26.483 12:28:08 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:26.483 12:28:08 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:26.483 12:28:08 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:26.483 12:28:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:26.483 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:26.483 12:28:08 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:26.483 12:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.483 12:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.483 12:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:26.483 ************************************ 00:08:26.483 START TEST env 00:08:26.483 ************************************ 00:08:26.483 12:28:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:26.740 * Looking for test storage... 
00:08:26.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:26.740 12:28:09 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:26.740 12:28:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.740 12:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.740 12:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:26.740 ************************************ 00:08:26.740 START TEST env_memory 00:08:26.740 ************************************ 00:08:26.740 12:28:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:26.740 00:08:26.740 00:08:26.740 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.740 http://cunit.sourceforge.net/ 00:08:26.740 00:08:26.740 00:08:26.740 Suite: memory 00:08:26.740 Test: alloc and free memory map ...[2024-10-01 12:28:09.118715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:26.740 passed 00:08:26.740 Test: mem map translation ...[2024-10-01 12:28:09.168165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:26.740 [2024-10-01 12:28:09.168270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:26.740 [2024-10-01 12:28:09.168380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:26.740 [2024-10-01 12:28:09.168420] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:26.740 passed 00:08:26.740 Test: mem map registration ...[2024-10-01 12:28:09.247712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:26.740 [2024-10-01 12:28:09.247803] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:26.996 passed 00:08:26.996 Test: mem map adjacent registrations ...passed 00:08:26.996 00:08:26.996 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.996 suites 1 1 n/a 0 0 00:08:26.996 tests 4 4 4 0 0 00:08:26.996 asserts 152 152 152 0 n/a 00:08:26.996 00:08:26.996 Elapsed time = 0.278 seconds 00:08:26.996 00:08:26.996 real 0m0.311s 00:08:26.996 user 0m0.289s 00:08:26.996 sys 0m0.018s 00:08:26.996 12:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.996 12:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:26.996 ************************************ 00:08:26.996 END TEST env_memory 00:08:26.996 ************************************ 00:08:26.996 12:28:09 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:26.996 12:28:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.996 12:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.996 12:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:26.996 ************************************ 00:08:26.996 START TEST env_vtophys 00:08:26.996 ************************************ 00:08:26.996 12:28:09 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:26.996 EAL: lib.eal log level changed from notice to debug 00:08:26.996 EAL: Detected lcore 0 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 1 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 2 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 3 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 4 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 5 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 6 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 7 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 8 as core 0 on socket 0 00:08:26.996 EAL: Detected lcore 9 as core 0 on socket 0 00:08:26.996 EAL: Maximum logical cores by configuration: 128 00:08:26.996 EAL: Detected CPU lcores: 10 00:08:26.996 EAL: Detected NUMA nodes: 1 00:08:26.996 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:26.996 EAL: Detected shared linkage of DPDK 00:08:27.255 EAL: No shared files mode enabled, IPC will be disabled 00:08:27.255 EAL: Selected IOVA mode 'PA' 00:08:27.255 EAL: Probing VFIO support... 00:08:27.255 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:27.255 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:27.255 EAL: Ask a virtual area of 0x2e000 bytes 00:08:27.255 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:27.255 EAL: Setting up physically contiguous memory... 00:08:27.255 EAL: Setting maximum number of open files to 524288 00:08:27.255 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:27.255 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:27.255 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.255 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:27.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.255 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.255 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:27.255 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:27.255 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.255 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:27.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.255 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.255 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:27.255 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:27.255 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.255 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:27.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.255 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.255 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:27.255 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:27.255 EAL: Ask a virtual area of 0x61000 bytes 00:08:27.255 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:27.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:27.255 EAL: Ask a virtual area of 0x400000000 bytes 00:08:27.255 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:27.255 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:27.255 EAL: Hugepages will be freed exactly as allocated. 
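The EAL bring-up above reserves four 0x400000000-byte virtual areas for 2 MB-hugepage memseg lists; whether that reservation can ever be backed depends on the hugepages configured on the node (reported earlier in this log as "node0 2048kB 2048 / 2048"). A minimal sketch for checking that configuration by hand, using standard Linux procfs/sysfs interfaces rather than anything taken from this log:

  # Sketch, assuming the usual Linux hugepage paths:
  grep -i huge /proc/meminfo                                    # HugePages_Total/Free and Hugepagesize
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # 2 MiB pages currently reserved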
00:08:27.255 EAL: No shared files mode enabled, IPC is disabled 00:08:27.255 EAL: No shared files mode enabled, IPC is disabled 00:08:27.255 EAL: TSC frequency is ~2200000 KHz 00:08:27.255 EAL: Main lcore 0 is ready (tid=7fe3afe11a40;cpuset=[0]) 00:08:27.255 EAL: Trying to obtain current memory policy. 00:08:27.255 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.255 EAL: Restoring previous memory policy: 0 00:08:27.255 EAL: request: mp_malloc_sync 00:08:27.255 EAL: No shared files mode enabled, IPC is disabled 00:08:27.255 EAL: Heap on socket 0 was expanded by 2MB 00:08:27.255 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:27.255 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:27.255 EAL: Mem event callback 'spdk:(nil)' registered 00:08:27.255 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:27.255 00:08:27.255 00:08:27.255 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.255 http://cunit.sourceforge.net/ 00:08:27.255 00:08:27.255 00:08:27.255 Suite: components_suite 00:08:27.820 Test: vtophys_malloc_test ...passed 00:08:27.820 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:27.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.820 EAL: Restoring previous memory policy: 4 00:08:27.820 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.820 EAL: request: mp_malloc_sync 00:08:27.820 EAL: No shared files mode enabled, IPC is disabled 00:08:27.820 EAL: Heap on socket 0 was expanded by 4MB 00:08:27.820 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.820 EAL: request: mp_malloc_sync 00:08:27.820 EAL: No shared files mode enabled, IPC is disabled 00:08:27.820 EAL: Heap on socket 0 was shrunk by 4MB 00:08:27.820 EAL: Trying to obtain current memory policy. 00:08:27.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.820 EAL: Restoring previous memory policy: 4 00:08:27.820 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.820 EAL: request: mp_malloc_sync 00:08:27.820 EAL: No shared files mode enabled, IPC is disabled 00:08:27.820 EAL: Heap on socket 0 was expanded by 6MB 00:08:27.820 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.820 EAL: request: mp_malloc_sync 00:08:27.820 EAL: No shared files mode enabled, IPC is disabled 00:08:27.820 EAL: Heap on socket 0 was shrunk by 6MB 00:08:27.820 EAL: Trying to obtain current memory policy. 00:08:27.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.820 EAL: Restoring previous memory policy: 4 00:08:27.820 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.820 EAL: request: mp_malloc_sync 00:08:27.820 EAL: No shared files mode enabled, IPC is disabled 00:08:27.820 EAL: Heap on socket 0 was expanded by 10MB 00:08:27.820 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.820 EAL: request: mp_malloc_sync 00:08:27.820 EAL: No shared files mode enabled, IPC is disabled 00:08:27.820 EAL: Heap on socket 0 was shrunk by 10MB 00:08:27.820 EAL: Trying to obtain current memory policy. 
00:08:27.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.821 EAL: Restoring previous memory policy: 4 00:08:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.821 EAL: request: mp_malloc_sync 00:08:27.821 EAL: No shared files mode enabled, IPC is disabled 00:08:27.821 EAL: Heap on socket 0 was expanded by 18MB 00:08:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.821 EAL: request: mp_malloc_sync 00:08:27.821 EAL: No shared files mode enabled, IPC is disabled 00:08:27.821 EAL: Heap on socket 0 was shrunk by 18MB 00:08:27.821 EAL: Trying to obtain current memory policy. 00:08:27.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.821 EAL: Restoring previous memory policy: 4 00:08:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.821 EAL: request: mp_malloc_sync 00:08:27.821 EAL: No shared files mode enabled, IPC is disabled 00:08:27.821 EAL: Heap on socket 0 was expanded by 34MB 00:08:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.821 EAL: request: mp_malloc_sync 00:08:27.821 EAL: No shared files mode enabled, IPC is disabled 00:08:27.821 EAL: Heap on socket 0 was shrunk by 34MB 00:08:27.821 EAL: Trying to obtain current memory policy. 00:08:27.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:27.821 EAL: Restoring previous memory policy: 4 00:08:27.821 EAL: Calling mem event callback 'spdk:(nil)' 00:08:27.821 EAL: request: mp_malloc_sync 00:08:27.821 EAL: No shared files mode enabled, IPC is disabled 00:08:27.821 EAL: Heap on socket 0 was expanded by 66MB 00:08:28.077 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.077 EAL: request: mp_malloc_sync 00:08:28.077 EAL: No shared files mode enabled, IPC is disabled 00:08:28.077 EAL: Heap on socket 0 was shrunk by 66MB 00:08:28.077 EAL: Trying to obtain current memory policy. 00:08:28.077 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.077 EAL: Restoring previous memory policy: 4 00:08:28.077 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.077 EAL: request: mp_malloc_sync 00:08:28.077 EAL: No shared files mode enabled, IPC is disabled 00:08:28.077 EAL: Heap on socket 0 was expanded by 130MB 00:08:28.334 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.334 EAL: request: mp_malloc_sync 00:08:28.334 EAL: No shared files mode enabled, IPC is disabled 00:08:28.334 EAL: Heap on socket 0 was shrunk by 130MB 00:08:28.592 EAL: Trying to obtain current memory policy. 00:08:28.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.592 EAL: Restoring previous memory policy: 4 00:08:28.592 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.592 EAL: request: mp_malloc_sync 00:08:28.592 EAL: No shared files mode enabled, IPC is disabled 00:08:28.592 EAL: Heap on socket 0 was expanded by 258MB 00:08:29.155 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.155 EAL: request: mp_malloc_sync 00:08:29.155 EAL: No shared files mode enabled, IPC is disabled 00:08:29.155 EAL: Heap on socket 0 was shrunk by 258MB 00:08:29.412 EAL: Trying to obtain current memory policy. 
00:08:29.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:29.412 EAL: Restoring previous memory policy: 4 00:08:29.412 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.412 EAL: request: mp_malloc_sync 00:08:29.412 EAL: No shared files mode enabled, IPC is disabled 00:08:29.412 EAL: Heap on socket 0 was expanded by 514MB 00:08:30.342 EAL: Calling mem event callback 'spdk:(nil)' 00:08:30.342 EAL: request: mp_malloc_sync 00:08:30.342 EAL: No shared files mode enabled, IPC is disabled 00:08:30.342 EAL: Heap on socket 0 was shrunk by 514MB 00:08:30.908 EAL: Trying to obtain current memory policy. 00:08:30.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.166 EAL: Restoring previous memory policy: 4 00:08:31.166 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.166 EAL: request: mp_malloc_sync 00:08:31.166 EAL: No shared files mode enabled, IPC is disabled 00:08:31.166 EAL: Heap on socket 0 was expanded by 1026MB 00:08:33.065 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.065 EAL: request: mp_malloc_sync 00:08:33.065 EAL: No shared files mode enabled, IPC is disabled 00:08:33.065 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:34.449 passed 00:08:34.449 00:08:34.449 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.449 suites 1 1 n/a 0 0 00:08:34.449 tests 2 2 2 0 0 00:08:34.449 asserts 5334 5334 5334 0 n/a 00:08:34.449 00:08:34.449 Elapsed time = 6.950 seconds 00:08:34.449 EAL: Calling mem event callback 'spdk:(nil)' 00:08:34.449 EAL: request: mp_malloc_sync 00:08:34.449 EAL: No shared files mode enabled, IPC is disabled 00:08:34.449 EAL: Heap on socket 0 was shrunk by 2MB 00:08:34.449 EAL: No shared files mode enabled, IPC is disabled 00:08:34.449 EAL: No shared files mode enabled, IPC is disabled 00:08:34.449 EAL: No shared files mode enabled, IPC is disabled 00:08:34.449 00:08:34.449 real 0m7.304s 00:08:34.449 user 0m6.395s 00:08:34.449 sys 0m0.742s 00:08:34.449 12:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.449 12:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:34.449 ************************************ 00:08:34.449 END TEST env_vtophys 00:08:34.449 ************************************ 00:08:34.449 12:28:16 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:34.449 12:28:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.449 12:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.449 12:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:34.449 ************************************ 00:08:34.449 START TEST env_pci 00:08:34.449 ************************************ 00:08:34.449 12:28:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:34.449 00:08:34.449 00:08:34.449 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.449 http://cunit.sourceforge.net/ 00:08:34.449 00:08:34.449 00:08:34.449 Suite: pci 00:08:34.449 Test: pci_hook ...[2024-10-01 12:28:16.787302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53319 has claimed it 00:08:34.449 passed 00:08:34.449 00:08:34.449 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.449 suites 1 1 n/a 0 0 00:08:34.449 tests 1 1 1 0 0 00:08:34.449 asserts 25 25 25 0 n/a 00:08:34.449 00:08:34.449 Elapsed time = 0.007 seconds 00:08:34.449 EAL: Cannot find device (10000:00:01.0) 00:08:34.449 EAL: Failed to attach device 
on primary process 00:08:34.449 00:08:34.449 real 0m0.074s 00:08:34.449 user 0m0.035s 00:08:34.449 sys 0m0.038s 00:08:34.449 12:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.449 12:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:34.449 ************************************ 00:08:34.449 END TEST env_pci 00:08:34.449 ************************************ 00:08:34.449 12:28:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:34.449 12:28:16 -- env/env.sh@15 -- # uname 00:08:34.449 12:28:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:34.449 12:28:16 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:34.449 12:28:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:34.449 12:28:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:34.449 12:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.449 12:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:34.449 ************************************ 00:08:34.449 START TEST env_dpdk_post_init 00:08:34.449 ************************************ 00:08:34.449 12:28:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:34.449 EAL: Detected CPU lcores: 10 00:08:34.449 EAL: Detected NUMA nodes: 1 00:08:34.449 EAL: Detected shared linkage of DPDK 00:08:34.449 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:34.449 EAL: Selected IOVA mode 'PA' 00:08:34.708 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:34.708 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:34.708 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:08:34.708 Starting DPDK initialization... 00:08:34.708 Starting SPDK post initialization... 00:08:34.708 SPDK NVMe probe 00:08:34.708 Attaching to 0000:00:06.0 00:08:34.708 Attaching to 0000:00:07.0 00:08:34.708 Attached to 0000:00:06.0 00:08:34.708 Attached to 0000:00:07.0 00:08:34.708 Cleaning up... 
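The env_dpdk_post_init run above probes both emulated NVMe controllers (0000:00:06.0 and 0000:00:07.0) through the spdk_nvme driver and detaches cleanly. A sketch of reproducing the same step by hand, using only the paths and flags that appear verbatim in this trace (sudo for the driver rebind is an assumption):

  # Rebind the NVMe controllers to a userspace driver, then run the post-init test:
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000   # same core mask and base virtual address as run_test used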
00:08:34.708 00:08:34.708 real 0m0.291s 00:08:34.708 user 0m0.098s 00:08:34.708 sys 0m0.092s 00:08:34.708 12:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.708 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:34.708 ************************************ 00:08:34.708 END TEST env_dpdk_post_init 00:08:34.708 ************************************ 00:08:34.708 12:28:17 -- env/env.sh@26 -- # uname 00:08:34.708 12:28:17 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:34.708 12:28:17 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:34.708 12:28:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.708 12:28:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.708 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:34.708 ************************************ 00:08:34.708 START TEST env_mem_callbacks 00:08:34.708 ************************************ 00:08:34.708 12:28:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:34.966 EAL: Detected CPU lcores: 10 00:08:34.966 EAL: Detected NUMA nodes: 1 00:08:34.966 EAL: Detected shared linkage of DPDK 00:08:34.966 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:34.966 EAL: Selected IOVA mode 'PA' 00:08:34.966 00:08:34.966 00:08:34.966 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.966 http://cunit.sourceforge.net/ 00:08:34.966 00:08:34.966 00:08:34.966 Suite: memory 00:08:34.966 Test: test ... 00:08:34.966 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:34.966 register 0x200000200000 2097152 00:08:34.966 malloc 3145728 00:08:34.966 register 0x200000400000 4194304 00:08:34.966 buf 0x2000004fffc0 len 3145728 PASSED 00:08:34.966 malloc 64 00:08:34.966 buf 0x2000004ffec0 len 64 PASSED 00:08:34.966 malloc 4194304 00:08:34.966 register 0x200000800000 6291456 00:08:34.966 buf 0x2000009fffc0 len 4194304 PASSED 00:08:34.966 free 0x2000004fffc0 3145728 00:08:34.966 free 0x2000004ffec0 64 00:08:34.966 unregister 0x200000400000 4194304 PASSED 00:08:34.966 free 0x2000009fffc0 4194304 00:08:34.966 unregister 0x200000800000 6291456 PASSED 00:08:34.966 malloc 8388608 00:08:34.966 register 0x200000400000 10485760 00:08:34.966 buf 0x2000005fffc0 len 8388608 PASSED 00:08:34.966 free 0x2000005fffc0 8388608 00:08:34.966 unregister 0x200000400000 10485760 PASSED 00:08:34.966 passed 00:08:34.966 00:08:34.966 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.966 suites 1 1 n/a 0 0 00:08:34.966 tests 1 1 1 0 0 00:08:34.966 asserts 15 15 15 0 n/a 00:08:34.966 00:08:34.966 Elapsed time = 0.070 seconds 00:08:34.966 00:08:34.966 real 0m0.258s 00:08:34.966 user 0m0.098s 00:08:34.966 sys 0m0.058s 00:08:34.966 12:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.966 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:34.966 ************************************ 00:08:34.966 END TEST env_mem_callbacks 00:08:34.966 ************************************ 00:08:35.224 00:08:35.224 real 0m8.517s 00:08:35.224 user 0m7.031s 00:08:35.224 sys 0m1.108s 00:08:35.224 12:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.224 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:35.224 ************************************ 00:08:35.224 END TEST env 00:08:35.224 ************************************ 00:08:35.224 12:28:17 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:08:35.224 12:28:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:35.224 12:28:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.224 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:35.224 ************************************ 00:08:35.224 START TEST rpc 00:08:35.224 ************************************ 00:08:35.224 12:28:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:35.224 * Looking for test storage... 00:08:35.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:35.224 12:28:17 -- rpc/rpc.sh@65 -- # spdk_pid=53437 00:08:35.224 12:28:17 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:35.224 12:28:17 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:35.224 12:28:17 -- rpc/rpc.sh@67 -- # waitforlisten 53437 00:08:35.224 12:28:17 -- common/autotest_common.sh@819 -- # '[' -z 53437 ']' 00:08:35.224 12:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.224 12:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:35.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.224 12:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.224 12:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:35.224 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:08:35.224 [2024-10-01 12:28:17.718088] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:35.224 [2024-10-01 12:28:17.718256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53437 ] 00:08:35.482 [2024-10-01 12:28:17.889742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.739 [2024-10-01 12:28:18.112507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.740 [2024-10-01 12:28:18.112773] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:35.740 [2024-10-01 12:28:18.112816] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53437' to capture a snapshot of events at runtime. 00:08:35.740 [2024-10-01 12:28:18.112832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53437 for offline analysis/debug. 
00:08:35.740 [2024-10-01 12:28:18.112905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.113 12:28:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.113 12:28:19 -- common/autotest_common.sh@852 -- # return 0 00:08:37.113 12:28:19 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:37.113 12:28:19 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:37.113 12:28:19 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:37.113 12:28:19 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:37.113 12:28:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.113 12:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.113 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 ************************************ 00:08:37.113 START TEST rpc_integrity 00:08:37.113 ************************************ 00:08:37.113 12:28:19 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:37.113 12:28:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:37.113 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.113 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.113 12:28:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:37.113 12:28:19 -- rpc/rpc.sh@13 -- # jq length 00:08:37.113 12:28:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:37.113 12:28:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:37.113 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.113 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.113 12:28:19 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:37.113 12:28:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:37.114 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.114 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.114 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.114 12:28:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:37.114 { 00:08:37.114 "name": "Malloc0", 00:08:37.114 "aliases": [ 00:08:37.114 "69e29319-eba5-4766-bfeb-793bb907998b" 00:08:37.114 ], 00:08:37.114 "product_name": "Malloc disk", 00:08:37.114 "block_size": 512, 00:08:37.114 "num_blocks": 16384, 00:08:37.114 "uuid": "69e29319-eba5-4766-bfeb-793bb907998b", 00:08:37.114 "assigned_rate_limits": { 00:08:37.114 "rw_ios_per_sec": 0, 00:08:37.114 "rw_mbytes_per_sec": 0, 00:08:37.114 "r_mbytes_per_sec": 0, 00:08:37.114 "w_mbytes_per_sec": 0 00:08:37.114 }, 00:08:37.114 "claimed": false, 00:08:37.114 "zoned": false, 00:08:37.114 "supported_io_types": { 00:08:37.114 "read": true, 00:08:37.114 "write": true, 00:08:37.114 "unmap": true, 00:08:37.114 "write_zeroes": true, 00:08:37.114 "flush": true, 00:08:37.114 "reset": true, 00:08:37.114 "compare": false, 00:08:37.114 "compare_and_write": false, 00:08:37.114 "abort": true, 00:08:37.114 "nvme_admin": false, 00:08:37.114 "nvme_io": false 00:08:37.114 }, 00:08:37.114 "memory_domains": [ 00:08:37.114 { 00:08:37.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.114 
"dma_device_type": 2 00:08:37.114 } 00:08:37.114 ], 00:08:37.114 "driver_specific": {} 00:08:37.114 } 00:08:37.114 ]' 00:08:37.114 12:28:19 -- rpc/rpc.sh@17 -- # jq length 00:08:37.114 12:28:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:37.114 12:28:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:37.114 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.114 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.114 [2024-10-01 12:28:19.610736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:37.114 [2024-10-01 12:28:19.610825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.114 [2024-10-01 12:28:19.610870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:37.114 [2024-10-01 12:28:19.610896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.114 [2024-10-01 12:28:19.613891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.114 [2024-10-01 12:28:19.613940] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:37.114 Passthru0 00:08:37.114 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.114 12:28:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:37.114 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.114 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.114 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.114 12:28:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:37.114 { 00:08:37.114 "name": "Malloc0", 00:08:37.114 "aliases": [ 00:08:37.114 "69e29319-eba5-4766-bfeb-793bb907998b" 00:08:37.114 ], 00:08:37.114 "product_name": "Malloc disk", 00:08:37.114 "block_size": 512, 00:08:37.114 "num_blocks": 16384, 00:08:37.114 "uuid": "69e29319-eba5-4766-bfeb-793bb907998b", 00:08:37.114 "assigned_rate_limits": { 00:08:37.114 "rw_ios_per_sec": 0, 00:08:37.114 "rw_mbytes_per_sec": 0, 00:08:37.114 "r_mbytes_per_sec": 0, 00:08:37.114 "w_mbytes_per_sec": 0 00:08:37.114 }, 00:08:37.114 "claimed": true, 00:08:37.114 "claim_type": "exclusive_write", 00:08:37.114 "zoned": false, 00:08:37.114 "supported_io_types": { 00:08:37.114 "read": true, 00:08:37.114 "write": true, 00:08:37.114 "unmap": true, 00:08:37.114 "write_zeroes": true, 00:08:37.114 "flush": true, 00:08:37.114 "reset": true, 00:08:37.114 "compare": false, 00:08:37.114 "compare_and_write": false, 00:08:37.114 "abort": true, 00:08:37.114 "nvme_admin": false, 00:08:37.114 "nvme_io": false 00:08:37.114 }, 00:08:37.114 "memory_domains": [ 00:08:37.114 { 00:08:37.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.114 "dma_device_type": 2 00:08:37.114 } 00:08:37.114 ], 00:08:37.114 "driver_specific": {} 00:08:37.114 }, 00:08:37.114 { 00:08:37.114 "name": "Passthru0", 00:08:37.114 "aliases": [ 00:08:37.114 "b418fc0c-424c-5254-ba06-30cd8aa31285" 00:08:37.114 ], 00:08:37.114 "product_name": "passthru", 00:08:37.114 "block_size": 512, 00:08:37.114 "num_blocks": 16384, 00:08:37.114 "uuid": "b418fc0c-424c-5254-ba06-30cd8aa31285", 00:08:37.114 "assigned_rate_limits": { 00:08:37.114 "rw_ios_per_sec": 0, 00:08:37.114 "rw_mbytes_per_sec": 0, 00:08:37.114 "r_mbytes_per_sec": 0, 00:08:37.114 "w_mbytes_per_sec": 0 00:08:37.114 }, 00:08:37.114 "claimed": false, 00:08:37.114 "zoned": false, 00:08:37.114 "supported_io_types": { 00:08:37.114 "read": true, 00:08:37.114 "write": true, 00:08:37.114 "unmap": true, 00:08:37.114 
"write_zeroes": true, 00:08:37.114 "flush": true, 00:08:37.114 "reset": true, 00:08:37.114 "compare": false, 00:08:37.114 "compare_and_write": false, 00:08:37.114 "abort": true, 00:08:37.114 "nvme_admin": false, 00:08:37.114 "nvme_io": false 00:08:37.114 }, 00:08:37.114 "memory_domains": [ 00:08:37.114 { 00:08:37.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.114 "dma_device_type": 2 00:08:37.114 } 00:08:37.114 ], 00:08:37.114 "driver_specific": { 00:08:37.114 "passthru": { 00:08:37.114 "name": "Passthru0", 00:08:37.114 "base_bdev_name": "Malloc0" 00:08:37.114 } 00:08:37.114 } 00:08:37.114 } 00:08:37.114 ]' 00:08:37.114 12:28:19 -- rpc/rpc.sh@21 -- # jq length 00:08:37.374 12:28:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:37.374 12:28:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:37.374 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.374 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.374 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.374 12:28:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:37.374 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.374 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.374 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.374 12:28:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:37.374 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.374 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.374 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.374 12:28:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:37.374 12:28:19 -- rpc/rpc.sh@26 -- # jq length 00:08:37.374 12:28:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:37.374 00:08:37.374 real 0m0.308s 00:08:37.374 user 0m0.191s 00:08:37.374 sys 0m0.029s 00:08:37.374 12:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.374 ************************************ 00:08:37.374 END TEST rpc_integrity 00:08:37.374 ************************************ 00:08:37.374 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.374 12:28:19 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:37.374 12:28:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.374 12:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.375 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.375 ************************************ 00:08:37.375 START TEST rpc_plugins 00:08:37.375 ************************************ 00:08:37.375 12:28:19 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:37.375 12:28:19 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:37.375 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.375 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.375 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.375 12:28:19 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:37.375 12:28:19 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:37.375 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.375 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.375 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.375 12:28:19 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:37.375 { 00:08:37.375 "name": "Malloc1", 00:08:37.375 "aliases": [ 00:08:37.375 "f5f99005-3630-409e-a420-bedaca931964" 00:08:37.375 ], 00:08:37.375 "product_name": "Malloc disk", 00:08:37.375 
"block_size": 4096, 00:08:37.375 "num_blocks": 256, 00:08:37.375 "uuid": "f5f99005-3630-409e-a420-bedaca931964", 00:08:37.375 "assigned_rate_limits": { 00:08:37.375 "rw_ios_per_sec": 0, 00:08:37.375 "rw_mbytes_per_sec": 0, 00:08:37.375 "r_mbytes_per_sec": 0, 00:08:37.375 "w_mbytes_per_sec": 0 00:08:37.375 }, 00:08:37.375 "claimed": false, 00:08:37.375 "zoned": false, 00:08:37.375 "supported_io_types": { 00:08:37.375 "read": true, 00:08:37.375 "write": true, 00:08:37.375 "unmap": true, 00:08:37.375 "write_zeroes": true, 00:08:37.375 "flush": true, 00:08:37.375 "reset": true, 00:08:37.375 "compare": false, 00:08:37.375 "compare_and_write": false, 00:08:37.375 "abort": true, 00:08:37.375 "nvme_admin": false, 00:08:37.375 "nvme_io": false 00:08:37.375 }, 00:08:37.375 "memory_domains": [ 00:08:37.375 { 00:08:37.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.375 "dma_device_type": 2 00:08:37.375 } 00:08:37.375 ], 00:08:37.375 "driver_specific": {} 00:08:37.375 } 00:08:37.375 ]' 00:08:37.375 12:28:19 -- rpc/rpc.sh@32 -- # jq length 00:08:37.375 12:28:19 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:37.375 12:28:19 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:37.375 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.375 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.634 12:28:19 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:37.634 12:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.634 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 12:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.634 12:28:19 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:37.634 12:28:19 -- rpc/rpc.sh@36 -- # jq length 00:08:37.634 ************************************ 00:08:37.634 END TEST rpc_plugins 00:08:37.634 ************************************ 00:08:37.634 12:28:19 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:37.634 00:08:37.634 real 0m0.145s 00:08:37.634 user 0m0.090s 00:08:37.634 sys 0m0.021s 00:08:37.634 12:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.634 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 12:28:19 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:37.634 12:28:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.634 12:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.634 12:28:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 ************************************ 00:08:37.634 START TEST rpc_trace_cmd_test 00:08:37.634 ************************************ 00:08:37.634 12:28:20 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:37.634 12:28:20 -- rpc/rpc.sh@40 -- # local info 00:08:37.634 12:28:20 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:37.634 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.634 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.634 12:28:20 -- rpc/rpc.sh@42 -- # info='{ 00:08:37.634 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53437", 00:08:37.634 "tpoint_group_mask": "0x8", 00:08:37.634 "iscsi_conn": { 00:08:37.634 "mask": "0x2", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "scsi": { 00:08:37.634 "mask": "0x4", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "bdev": { 00:08:37.634 "mask": "0x8", 00:08:37.634 "tpoint_mask": 
"0xffffffffffffffff" 00:08:37.634 }, 00:08:37.634 "nvmf_rdma": { 00:08:37.634 "mask": "0x10", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "nvmf_tcp": { 00:08:37.634 "mask": "0x20", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "ftl": { 00:08:37.634 "mask": "0x40", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "blobfs": { 00:08:37.634 "mask": "0x80", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "dsa": { 00:08:37.634 "mask": "0x200", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "thread": { 00:08:37.634 "mask": "0x400", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "nvme_pcie": { 00:08:37.634 "mask": "0x800", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "iaa": { 00:08:37.634 "mask": "0x1000", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "nvme_tcp": { 00:08:37.634 "mask": "0x2000", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 }, 00:08:37.634 "bdev_nvme": { 00:08:37.634 "mask": "0x4000", 00:08:37.634 "tpoint_mask": "0x0" 00:08:37.634 } 00:08:37.634 }' 00:08:37.634 12:28:20 -- rpc/rpc.sh@43 -- # jq length 00:08:37.634 12:28:20 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:37.634 12:28:20 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:37.634 12:28:20 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:37.634 12:28:20 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:37.892 12:28:20 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:37.892 12:28:20 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:37.892 12:28:20 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:37.892 12:28:20 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:37.892 ************************************ 00:08:37.892 END TEST rpc_trace_cmd_test 00:08:37.892 ************************************ 00:08:37.892 12:28:20 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:37.892 00:08:37.892 real 0m0.246s 00:08:37.892 user 0m0.218s 00:08:37.892 sys 0m0.020s 00:08:37.892 12:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.892 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.892 12:28:20 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:37.892 12:28:20 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:37.892 12:28:20 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:37.892 12:28:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:37.892 12:28:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.892 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.892 ************************************ 00:08:37.892 START TEST rpc_daemon_integrity 00:08:37.892 ************************************ 00:08:37.892 12:28:20 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:37.892 12:28:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:37.892 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.892 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.892 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.892 12:28:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:37.892 12:28:20 -- rpc/rpc.sh@13 -- # jq length 00:08:37.892 12:28:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:37.892 12:28:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:37.892 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.892 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.892 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.892 12:28:20 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:37.892 12:28:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:37.893 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.893 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:37.893 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.893 12:28:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:37.893 { 00:08:37.893 "name": "Malloc2", 00:08:37.893 "aliases": [ 00:08:37.893 "b15dc675-a233-4313-9dcf-2ae9256b8bbe" 00:08:37.893 ], 00:08:37.893 "product_name": "Malloc disk", 00:08:37.893 "block_size": 512, 00:08:37.893 "num_blocks": 16384, 00:08:37.893 "uuid": "b15dc675-a233-4313-9dcf-2ae9256b8bbe", 00:08:37.893 "assigned_rate_limits": { 00:08:37.893 "rw_ios_per_sec": 0, 00:08:37.893 "rw_mbytes_per_sec": 0, 00:08:37.893 "r_mbytes_per_sec": 0, 00:08:37.893 "w_mbytes_per_sec": 0 00:08:37.893 }, 00:08:37.893 "claimed": false, 00:08:37.893 "zoned": false, 00:08:37.893 "supported_io_types": { 00:08:37.893 "read": true, 00:08:37.893 "write": true, 00:08:37.893 "unmap": true, 00:08:37.893 "write_zeroes": true, 00:08:37.893 "flush": true, 00:08:37.893 "reset": true, 00:08:37.893 "compare": false, 00:08:37.893 "compare_and_write": false, 00:08:37.893 "abort": true, 00:08:37.893 "nvme_admin": false, 00:08:37.893 "nvme_io": false 00:08:37.893 }, 00:08:37.893 "memory_domains": [ 00:08:37.893 { 00:08:37.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.893 "dma_device_type": 2 00:08:37.893 } 00:08:37.893 ], 00:08:37.893 "driver_specific": {} 00:08:37.893 } 00:08:37.893 ]' 00:08:37.893 12:28:20 -- rpc/rpc.sh@17 -- # jq length 00:08:38.235 12:28:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:38.235 12:28:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:38.235 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.235 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.235 [2024-10-01 12:28:20.428596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:38.235 [2024-10-01 12:28:20.428707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.235 [2024-10-01 12:28:20.428742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:38.235 [2024-10-01 12:28:20.428766] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.235 [2024-10-01 12:28:20.431540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.235 [2024-10-01 12:28:20.431594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:38.235 Passthru0 00:08:38.235 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.235 12:28:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:38.235 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.235 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.235 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.235 12:28:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:38.235 { 00:08:38.235 "name": "Malloc2", 00:08:38.235 "aliases": [ 00:08:38.235 "b15dc675-a233-4313-9dcf-2ae9256b8bbe" 00:08:38.235 ], 00:08:38.235 "product_name": "Malloc disk", 00:08:38.235 "block_size": 512, 00:08:38.235 "num_blocks": 16384, 00:08:38.235 "uuid": "b15dc675-a233-4313-9dcf-2ae9256b8bbe", 00:08:38.235 "assigned_rate_limits": { 00:08:38.235 "rw_ios_per_sec": 0, 00:08:38.235 "rw_mbytes_per_sec": 0, 00:08:38.235 "r_mbytes_per_sec": 0, 00:08:38.235 
"w_mbytes_per_sec": 0 00:08:38.235 }, 00:08:38.235 "claimed": true, 00:08:38.235 "claim_type": "exclusive_write", 00:08:38.235 "zoned": false, 00:08:38.235 "supported_io_types": { 00:08:38.235 "read": true, 00:08:38.235 "write": true, 00:08:38.235 "unmap": true, 00:08:38.235 "write_zeroes": true, 00:08:38.236 "flush": true, 00:08:38.236 "reset": true, 00:08:38.236 "compare": false, 00:08:38.236 "compare_and_write": false, 00:08:38.236 "abort": true, 00:08:38.236 "nvme_admin": false, 00:08:38.236 "nvme_io": false 00:08:38.236 }, 00:08:38.236 "memory_domains": [ 00:08:38.236 { 00:08:38.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.236 "dma_device_type": 2 00:08:38.236 } 00:08:38.236 ], 00:08:38.236 "driver_specific": {} 00:08:38.236 }, 00:08:38.236 { 00:08:38.236 "name": "Passthru0", 00:08:38.236 "aliases": [ 00:08:38.236 "55fc60d3-bc38-5c63-8fcd-dc14cbff117a" 00:08:38.236 ], 00:08:38.236 "product_name": "passthru", 00:08:38.236 "block_size": 512, 00:08:38.236 "num_blocks": 16384, 00:08:38.236 "uuid": "55fc60d3-bc38-5c63-8fcd-dc14cbff117a", 00:08:38.236 "assigned_rate_limits": { 00:08:38.236 "rw_ios_per_sec": 0, 00:08:38.236 "rw_mbytes_per_sec": 0, 00:08:38.236 "r_mbytes_per_sec": 0, 00:08:38.236 "w_mbytes_per_sec": 0 00:08:38.236 }, 00:08:38.236 "claimed": false, 00:08:38.236 "zoned": false, 00:08:38.236 "supported_io_types": { 00:08:38.236 "read": true, 00:08:38.236 "write": true, 00:08:38.236 "unmap": true, 00:08:38.236 "write_zeroes": true, 00:08:38.236 "flush": true, 00:08:38.236 "reset": true, 00:08:38.236 "compare": false, 00:08:38.236 "compare_and_write": false, 00:08:38.236 "abort": true, 00:08:38.236 "nvme_admin": false, 00:08:38.236 "nvme_io": false 00:08:38.236 }, 00:08:38.236 "memory_domains": [ 00:08:38.236 { 00:08:38.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.236 "dma_device_type": 2 00:08:38.236 } 00:08:38.236 ], 00:08:38.236 "driver_specific": { 00:08:38.236 "passthru": { 00:08:38.236 "name": "Passthru0", 00:08:38.236 "base_bdev_name": "Malloc2" 00:08:38.236 } 00:08:38.236 } 00:08:38.236 } 00:08:38.236 ]' 00:08:38.236 12:28:20 -- rpc/rpc.sh@21 -- # jq length 00:08:38.236 12:28:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:38.236 12:28:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:38.236 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.236 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.236 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.236 12:28:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:38.236 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.236 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.236 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.236 12:28:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:38.236 12:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:38.236 12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.236 12:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:38.236 12:28:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:38.236 12:28:20 -- rpc/rpc.sh@26 -- # jq length 00:08:38.236 ************************************ 00:08:38.236 END TEST rpc_daemon_integrity 00:08:38.236 ************************************ 00:08:38.236 12:28:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:38.236 00:08:38.236 real 0m0.300s 00:08:38.236 user 0m0.184s 00:08:38.236 sys 0m0.037s 00:08:38.236 12:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.236 
12:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.236 12:28:20 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:38.236 12:28:20 -- rpc/rpc.sh@84 -- # killprocess 53437 00:08:38.236 12:28:20 -- common/autotest_common.sh@926 -- # '[' -z 53437 ']' 00:08:38.236 12:28:20 -- common/autotest_common.sh@930 -- # kill -0 53437 00:08:38.236 12:28:20 -- common/autotest_common.sh@931 -- # uname 00:08:38.236 12:28:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:38.236 12:28:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53437 00:08:38.236 killing process with pid 53437 00:08:38.236 12:28:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:38.236 12:28:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:38.236 12:28:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53437' 00:08:38.236 12:28:20 -- common/autotest_common.sh@945 -- # kill 53437 00:08:38.236 12:28:20 -- common/autotest_common.sh@950 -- # wait 53437 00:08:40.769 00:08:40.769 real 0m5.187s 00:08:40.769 user 0m6.188s 00:08:40.769 sys 0m0.687s 00:08:40.769 12:28:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.769 12:28:22 -- common/autotest_common.sh@10 -- # set +x 00:08:40.769 ************************************ 00:08:40.769 END TEST rpc 00:08:40.769 ************************************ 00:08:40.769 12:28:22 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:40.769 12:28:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:40.769 12:28:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.769 12:28:22 -- common/autotest_common.sh@10 -- # set +x 00:08:40.769 ************************************ 00:08:40.769 START TEST rpc_client 00:08:40.769 ************************************ 00:08:40.769 12:28:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:40.769 * Looking for test storage... 
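The rpc_daemon_integrity teardown above removes the passthru bdev and its Malloc2 base, then checks that bdev_get_bdevs returns an empty list before killing the target. A minimal manual equivalent against a running spdk_tgt might look like the sketch below (it assumes the same spdk_repo layout as this run and the default /var/tmp/spdk.sock RPC socket):
# Hedged sketch: manual equivalent of the teardown checks above (assumes a
# running spdk_tgt listening on the default /var/tmp/spdk.sock socket).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_passthru_delete Passthru0       # remove the passthru bdev first
$rpc bdev_malloc_delete Malloc2           # then its Malloc2 base bdev
count=$($rpc bdev_get_bdevs | jq length)  # the test expects an empty list
[ "$count" -eq 0 ] && echo 'all bdevs cleaned up'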
00:08:40.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:40.769 12:28:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:40.769 OK 00:08:40.770 12:28:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:40.770 00:08:40.770 real 0m0.126s 00:08:40.770 user 0m0.054s 00:08:40.770 sys 0m0.077s 00:08:40.770 12:28:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.770 12:28:22 -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 ************************************ 00:08:40.770 END TEST rpc_client 00:08:40.770 ************************************ 00:08:40.770 12:28:22 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:40.770 12:28:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:40.770 12:28:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.770 12:28:22 -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 ************************************ 00:08:40.770 START TEST json_config 00:08:40.770 ************************************ 00:08:40.770 12:28:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:40.770 12:28:22 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.770 12:28:22 -- nvmf/common.sh@7 -- # uname -s 00:08:40.770 12:28:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.770 12:28:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.770 12:28:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.770 12:28:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.770 12:28:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.770 12:28:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.770 12:28:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.770 12:28:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.770 12:28:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.770 12:28:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.770 12:28:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4dd1456e-1657-4c37-b992-242c1af0be2c 00:08:40.770 12:28:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=4dd1456e-1657-4c37-b992-242c1af0be2c 00:08:40.770 12:28:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.770 12:28:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.770 12:28:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:40.770 12:28:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.770 12:28:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.770 12:28:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.770 12:28:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.770 12:28:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- paths/export.sh@5 -- # export PATH 00:08:40.770 12:28:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- nvmf/common.sh@46 -- # : 0 00:08:40.770 12:28:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:40.770 12:28:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:40.770 12:28:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:40.770 12:28:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.770 12:28:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.770 12:28:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:40.770 12:28:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:40.770 12:28:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:40.770 12:28:23 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:40.770 12:28:23 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:40.770 12:28:23 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:40.770 12:28:23 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:40.770 12:28:23 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:40.770 WARNING: No tests are enabled so not running JSON configuration tests 00:08:40.770 12:28:23 -- json_config/json_config.sh@27 -- # exit 0 00:08:40.770 00:08:40.770 real 0m0.074s 00:08:40.770 user 0m0.038s 00:08:40.770 sys 0m0.034s 00:08:40.770 12:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.770 12:28:23 -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 ************************************ 00:08:40.770 END TEST json_config 00:08:40.770 ************************************ 00:08:40.770 12:28:23 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:40.770 12:28:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:40.770 12:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.770 12:28:23 -- common/autotest_common.sh@10 -- # set +x 00:08:40.770 ************************************ 00:08:40.770 START TEST json_config_extra_key 00:08:40.770 
************************************ 00:08:40.770 12:28:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.770 12:28:23 -- nvmf/common.sh@7 -- # uname -s 00:08:40.770 12:28:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.770 12:28:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.770 12:28:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.770 12:28:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.770 12:28:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.770 12:28:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.770 12:28:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.770 12:28:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.770 12:28:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.770 12:28:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.770 12:28:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4dd1456e-1657-4c37-b992-242c1af0be2c 00:08:40.770 12:28:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=4dd1456e-1657-4c37-b992-242c1af0be2c 00:08:40.770 12:28:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.770 12:28:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.770 12:28:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:40.770 12:28:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.770 12:28:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.770 12:28:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.770 12:28:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.770 12:28:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- paths/export.sh@5 -- # export PATH 00:08:40.770 12:28:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.770 12:28:23 -- nvmf/common.sh@46 -- # : 0 00:08:40.770 12:28:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:40.770 12:28:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:40.770 12:28:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:40.770 12:28:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.770 12:28:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.770 12:28:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:40.770 12:28:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:40.770 12:28:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:40.770 12:28:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:40.771 INFO: launching applications... 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=53731 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:40.771 Waiting for target to run... 
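At this point json_config_extra_key has built its app_params (-m 0x1 -s 1024), RPC socket path and extra_key.json config path, and is about to launch spdk_tgt with exactly those flags and wait for it to listen. A hedged sketch of that launch-and-wait pattern follows; the polling loop only approximates the waitforlisten helper and is not the helper itself:
# Hedged sketch of the launch-and-wait step (mirrors the spdk_tgt invocation
# below; the until-loop stands in for the test's waitforlisten helper).
spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json $spdk/test/json_config/extra_key.json &
pid=$!
# Wait until the target answers RPCs on its UNIX-domain socket.
until $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "spdk_tgt ($pid) is up"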
00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:40.771 12:28:23 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 53731 /var/tmp/spdk_tgt.sock 00:08:40.771 12:28:23 -- common/autotest_common.sh@819 -- # '[' -z 53731 ']' 00:08:40.771 12:28:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:40.771 12:28:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:40.771 12:28:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:40.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:40.771 12:28:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:40.771 12:28:23 -- common/autotest_common.sh@10 -- # set +x 00:08:40.771 [2024-10-01 12:28:23.217614] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:40.771 [2024-10-01 12:28:23.217969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53731 ] 00:08:41.029 [2024-10-01 12:28:23.522869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.289 [2024-10-01 12:28:23.698040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.289 [2024-10-01 12:28:23.698302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.665 00:08:42.665 INFO: shutting down applications... 00:08:42.665 12:28:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:42.665 12:28:24 -- common/autotest_common.sh@852 -- # return 0 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
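The shutdown path that follows sends SIGINT to the target and then polls it with kill -0 for up to 30 half-second intervals until the process exits. The same pattern, written out as a standalone sketch (pid 53731 and the 30 x 0.5 s budget are the values from this particular run):
# Hedged sketch of the SIGINT-and-poll shutdown used below.
pid=53731
kill -SIGINT $pid
for ((i = 0; i < 30; i++)); do
    kill -0 $pid 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
done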
00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 53731 ]] 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 53731 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 53731 00:08:42.665 12:28:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:43.234 12:28:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:43.234 12:28:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:43.234 12:28:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 53731 00:08:43.234 12:28:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:43.499 12:28:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:43.499 12:28:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:43.499 12:28:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 53731 00:08:43.499 12:28:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:44.066 12:28:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:44.066 12:28:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:44.066 12:28:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 53731 00:08:44.066 12:28:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:44.633 12:28:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:44.633 12:28:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:44.633 12:28:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 53731 00:08:44.633 12:28:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 53731 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:45.200 SPDK target shutdown done 00:08:45.200 Success 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:45.200 12:28:27 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:45.200 00:08:45.200 real 0m4.423s 00:08:45.200 user 0m4.535s 00:08:45.200 sys 0m0.445s 00:08:45.200 ************************************ 00:08:45.200 END TEST json_config_extra_key 00:08:45.200 ************************************ 00:08:45.200 12:28:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.200 12:28:27 -- common/autotest_common.sh@10 -- # set +x 00:08:45.200 12:28:27 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:45.200 12:28:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.200 12:28:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.200 12:28:27 -- common/autotest_common.sh@10 -- # 
set +x 00:08:45.200 ************************************ 00:08:45.200 START TEST alias_rpc 00:08:45.200 ************************************ 00:08:45.200 12:28:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:45.200 * Looking for test storage... 00:08:45.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:45.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.200 12:28:27 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:45.200 12:28:27 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=53835 00:08:45.200 12:28:27 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:45.200 12:28:27 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 53835 00:08:45.200 12:28:27 -- common/autotest_common.sh@819 -- # '[' -z 53835 ']' 00:08:45.200 12:28:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.200 12:28:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.200 12:28:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.200 12:28:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.200 12:28:27 -- common/autotest_common.sh@10 -- # set +x 00:08:45.200 [2024-10-01 12:28:27.715392] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:45.200 [2024-10-01 12:28:27.715778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53835 ] 00:08:45.459 [2024-10-01 12:28:27.886028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.718 [2024-10-01 12:28:28.112747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.718 [2024-10-01 12:28:28.113223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.091 12:28:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:47.091 12:28:29 -- common/autotest_common.sh@852 -- # return 0 00:08:47.091 12:28:29 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:47.349 12:28:29 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 53835 00:08:47.349 12:28:29 -- common/autotest_common.sh@926 -- # '[' -z 53835 ']' 00:08:47.349 12:28:29 -- common/autotest_common.sh@930 -- # kill -0 53835 00:08:47.349 12:28:29 -- common/autotest_common.sh@931 -- # uname 00:08:47.349 12:28:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.349 12:28:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53835 00:08:47.349 killing process with pid 53835 00:08:47.349 12:28:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.349 12:28:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.349 12:28:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53835' 00:08:47.349 12:28:29 -- common/autotest_common.sh@945 -- # kill 53835 00:08:47.349 12:28:29 -- common/autotest_common.sh@950 -- # wait 53835 00:08:49.248 ************************************ 00:08:49.248 END TEST alias_rpc 00:08:49.248 ************************************ 00:08:49.248 00:08:49.248 real 0m4.219s 00:08:49.248 user 0m4.628s 
00:08:49.248 sys 0m0.474s 00:08:49.248 12:28:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.248 12:28:31 -- common/autotest_common.sh@10 -- # set +x 00:08:49.506 12:28:31 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:49.506 12:28:31 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:49.506 12:28:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:49.506 12:28:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.506 12:28:31 -- common/autotest_common.sh@10 -- # set +x 00:08:49.506 ************************************ 00:08:49.506 START TEST spdkcli_tcp 00:08:49.506 ************************************ 00:08:49.506 12:28:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:49.506 * Looking for test storage... 00:08:49.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:49.506 12:28:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:49.506 12:28:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:49.506 12:28:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.506 12:28:31 -- common/autotest_common.sh@10 -- # set +x 00:08:49.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=53940 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 53940 00:08:49.506 12:28:31 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:49.506 12:28:31 -- common/autotest_common.sh@819 -- # '[' -z 53940 ']' 00:08:49.506 12:28:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.507 12:28:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.507 12:28:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.507 12:28:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.507 12:28:31 -- common/autotest_common.sh@10 -- # set +x 00:08:49.507 [2024-10-01 12:28:31.997036] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:49.507 [2024-10-01 12:28:31.997196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53940 ] 00:08:49.766 [2024-10-01 12:28:32.167506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.025 [2024-10-01 12:28:32.389650] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.025 [2024-10-01 12:28:32.390172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.025 [2024-10-01 12:28:32.390181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.401 12:28:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.401 12:28:33 -- common/autotest_common.sh@852 -- # return 0 00:08:51.401 12:28:33 -- spdkcli/tcp.sh@31 -- # socat_pid=53965 00:08:51.401 12:28:33 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:51.401 12:28:33 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:51.660 [ 00:08:51.660 "bdev_malloc_delete", 00:08:51.660 "bdev_malloc_create", 00:08:51.660 "bdev_null_resize", 00:08:51.660 "bdev_null_delete", 00:08:51.660 "bdev_null_create", 00:08:51.660 "bdev_nvme_cuse_unregister", 00:08:51.660 "bdev_nvme_cuse_register", 00:08:51.660 "bdev_opal_new_user", 00:08:51.660 "bdev_opal_set_lock_state", 00:08:51.660 "bdev_opal_delete", 00:08:51.660 "bdev_opal_get_info", 00:08:51.660 "bdev_opal_create", 00:08:51.660 "bdev_nvme_opal_revert", 00:08:51.660 "bdev_nvme_opal_init", 00:08:51.660 "bdev_nvme_send_cmd", 00:08:51.660 "bdev_nvme_get_path_iostat", 00:08:51.660 "bdev_nvme_get_mdns_discovery_info", 00:08:51.660 "bdev_nvme_stop_mdns_discovery", 00:08:51.660 "bdev_nvme_start_mdns_discovery", 00:08:51.660 "bdev_nvme_set_multipath_policy", 00:08:51.660 "bdev_nvme_set_preferred_path", 00:08:51.660 "bdev_nvme_get_io_paths", 00:08:51.660 "bdev_nvme_remove_error_injection", 00:08:51.660 "bdev_nvme_add_error_injection", 00:08:51.660 "bdev_nvme_get_discovery_info", 00:08:51.660 "bdev_nvme_stop_discovery", 00:08:51.660 "bdev_nvme_start_discovery", 00:08:51.660 "bdev_nvme_get_controller_health_info", 00:08:51.660 "bdev_nvme_disable_controller", 00:08:51.660 "bdev_nvme_enable_controller", 00:08:51.660 "bdev_nvme_reset_controller", 00:08:51.660 "bdev_nvme_get_transport_statistics", 00:08:51.660 "bdev_nvme_apply_firmware", 00:08:51.660 "bdev_nvme_detach_controller", 00:08:51.660 "bdev_nvme_get_controllers", 00:08:51.660 "bdev_nvme_attach_controller", 00:08:51.660 "bdev_nvme_set_hotplug", 00:08:51.660 "bdev_nvme_set_options", 00:08:51.660 "bdev_passthru_delete", 00:08:51.660 "bdev_passthru_create", 00:08:51.660 "bdev_lvol_grow_lvstore", 00:08:51.660 "bdev_lvol_get_lvols", 00:08:51.660 "bdev_lvol_get_lvstores", 00:08:51.660 "bdev_lvol_delete", 00:08:51.660 "bdev_lvol_set_read_only", 00:08:51.660 "bdev_lvol_resize", 00:08:51.660 "bdev_lvol_decouple_parent", 00:08:51.660 "bdev_lvol_inflate", 00:08:51.660 "bdev_lvol_rename", 00:08:51.660 "bdev_lvol_clone_bdev", 00:08:51.660 "bdev_lvol_clone", 00:08:51.660 "bdev_lvol_snapshot", 00:08:51.660 "bdev_lvol_create", 00:08:51.660 "bdev_lvol_delete_lvstore", 00:08:51.660 "bdev_lvol_rename_lvstore", 00:08:51.660 "bdev_lvol_create_lvstore", 00:08:51.660 "bdev_raid_set_options", 00:08:51.660 "bdev_raid_remove_base_bdev", 00:08:51.660 "bdev_raid_add_base_bdev", 
00:08:51.660 "bdev_raid_delete", 00:08:51.660 "bdev_raid_create", 00:08:51.660 "bdev_raid_get_bdevs", 00:08:51.660 "bdev_error_inject_error", 00:08:51.660 "bdev_error_delete", 00:08:51.660 "bdev_error_create", 00:08:51.660 "bdev_split_delete", 00:08:51.660 "bdev_split_create", 00:08:51.660 "bdev_delay_delete", 00:08:51.660 "bdev_delay_create", 00:08:51.660 "bdev_delay_update_latency", 00:08:51.660 "bdev_zone_block_delete", 00:08:51.660 "bdev_zone_block_create", 00:08:51.660 "blobfs_create", 00:08:51.660 "blobfs_detect", 00:08:51.660 "blobfs_set_cache_size", 00:08:51.660 "bdev_aio_delete", 00:08:51.660 "bdev_aio_rescan", 00:08:51.660 "bdev_aio_create", 00:08:51.660 "bdev_ftl_set_property", 00:08:51.660 "bdev_ftl_get_properties", 00:08:51.660 "bdev_ftl_get_stats", 00:08:51.660 "bdev_ftl_unmap", 00:08:51.660 "bdev_ftl_unload", 00:08:51.660 "bdev_ftl_delete", 00:08:51.660 "bdev_ftl_load", 00:08:51.660 "bdev_ftl_create", 00:08:51.660 "bdev_virtio_attach_controller", 00:08:51.660 "bdev_virtio_scsi_get_devices", 00:08:51.660 "bdev_virtio_detach_controller", 00:08:51.660 "bdev_virtio_blk_set_hotplug", 00:08:51.660 "bdev_iscsi_delete", 00:08:51.660 "bdev_iscsi_create", 00:08:51.660 "bdev_iscsi_set_options", 00:08:51.660 "accel_error_inject_error", 00:08:51.660 "ioat_scan_accel_module", 00:08:51.660 "dsa_scan_accel_module", 00:08:51.660 "iaa_scan_accel_module", 00:08:51.660 "iscsi_set_options", 00:08:51.660 "iscsi_get_auth_groups", 00:08:51.660 "iscsi_auth_group_remove_secret", 00:08:51.660 "iscsi_auth_group_add_secret", 00:08:51.660 "iscsi_delete_auth_group", 00:08:51.660 "iscsi_create_auth_group", 00:08:51.660 "iscsi_set_discovery_auth", 00:08:51.660 "iscsi_get_options", 00:08:51.660 "iscsi_target_node_request_logout", 00:08:51.660 "iscsi_target_node_set_redirect", 00:08:51.660 "iscsi_target_node_set_auth", 00:08:51.660 "iscsi_target_node_add_lun", 00:08:51.660 "iscsi_get_connections", 00:08:51.660 "iscsi_portal_group_set_auth", 00:08:51.660 "iscsi_start_portal_group", 00:08:51.660 "iscsi_delete_portal_group", 00:08:51.660 "iscsi_create_portal_group", 00:08:51.660 "iscsi_get_portal_groups", 00:08:51.660 "iscsi_delete_target_node", 00:08:51.660 "iscsi_target_node_remove_pg_ig_maps", 00:08:51.660 "iscsi_target_node_add_pg_ig_maps", 00:08:51.660 "iscsi_create_target_node", 00:08:51.660 "iscsi_get_target_nodes", 00:08:51.660 "iscsi_delete_initiator_group", 00:08:51.660 "iscsi_initiator_group_remove_initiators", 00:08:51.660 "iscsi_initiator_group_add_initiators", 00:08:51.660 "iscsi_create_initiator_group", 00:08:51.660 "iscsi_get_initiator_groups", 00:08:51.660 "nvmf_set_crdt", 00:08:51.660 "nvmf_set_config", 00:08:51.661 "nvmf_set_max_subsystems", 00:08:51.661 "nvmf_subsystem_get_listeners", 00:08:51.661 "nvmf_subsystem_get_qpairs", 00:08:51.661 "nvmf_subsystem_get_controllers", 00:08:51.661 "nvmf_get_stats", 00:08:51.661 "nvmf_get_transports", 00:08:51.661 "nvmf_create_transport", 00:08:51.661 "nvmf_get_targets", 00:08:51.661 "nvmf_delete_target", 00:08:51.661 "nvmf_create_target", 00:08:51.661 "nvmf_subsystem_allow_any_host", 00:08:51.661 "nvmf_subsystem_remove_host", 00:08:51.661 "nvmf_subsystem_add_host", 00:08:51.661 "nvmf_subsystem_remove_ns", 00:08:51.661 "nvmf_subsystem_add_ns", 00:08:51.661 "nvmf_subsystem_listener_set_ana_state", 00:08:51.661 "nvmf_discovery_get_referrals", 00:08:51.661 "nvmf_discovery_remove_referral", 00:08:51.661 "nvmf_discovery_add_referral", 00:08:51.661 "nvmf_subsystem_remove_listener", 00:08:51.661 "nvmf_subsystem_add_listener", 00:08:51.661 "nvmf_delete_subsystem", 
00:08:51.661 "nvmf_create_subsystem", 00:08:51.661 "nvmf_get_subsystems", 00:08:51.661 "env_dpdk_get_mem_stats", 00:08:51.661 "nbd_get_disks", 00:08:51.661 "nbd_stop_disk", 00:08:51.661 "nbd_start_disk", 00:08:51.661 "ublk_recover_disk", 00:08:51.661 "ublk_get_disks", 00:08:51.661 "ublk_stop_disk", 00:08:51.661 "ublk_start_disk", 00:08:51.661 "ublk_destroy_target", 00:08:51.661 "ublk_create_target", 00:08:51.661 "virtio_blk_create_transport", 00:08:51.661 "virtio_blk_get_transports", 00:08:51.661 "vhost_controller_set_coalescing", 00:08:51.661 "vhost_get_controllers", 00:08:51.661 "vhost_delete_controller", 00:08:51.661 "vhost_create_blk_controller", 00:08:51.661 "vhost_scsi_controller_remove_target", 00:08:51.661 "vhost_scsi_controller_add_target", 00:08:51.661 "vhost_start_scsi_controller", 00:08:51.661 "vhost_create_scsi_controller", 00:08:51.661 "thread_set_cpumask", 00:08:51.661 "framework_get_scheduler", 00:08:51.661 "framework_set_scheduler", 00:08:51.661 "framework_get_reactors", 00:08:51.661 "thread_get_io_channels", 00:08:51.661 "thread_get_pollers", 00:08:51.661 "thread_get_stats", 00:08:51.661 "framework_monitor_context_switch", 00:08:51.661 "spdk_kill_instance", 00:08:51.661 "log_enable_timestamps", 00:08:51.661 "log_get_flags", 00:08:51.661 "log_clear_flag", 00:08:51.661 "log_set_flag", 00:08:51.661 "log_get_level", 00:08:51.661 "log_set_level", 00:08:51.661 "log_get_print_level", 00:08:51.661 "log_set_print_level", 00:08:51.661 "framework_enable_cpumask_locks", 00:08:51.661 "framework_disable_cpumask_locks", 00:08:51.661 "framework_wait_init", 00:08:51.661 "framework_start_init", 00:08:51.661 "scsi_get_devices", 00:08:51.661 "bdev_get_histogram", 00:08:51.661 "bdev_enable_histogram", 00:08:51.661 "bdev_set_qos_limit", 00:08:51.661 "bdev_set_qd_sampling_period", 00:08:51.661 "bdev_get_bdevs", 00:08:51.661 "bdev_reset_iostat", 00:08:51.661 "bdev_get_iostat", 00:08:51.661 "bdev_examine", 00:08:51.661 "bdev_wait_for_examine", 00:08:51.661 "bdev_set_options", 00:08:51.661 "notify_get_notifications", 00:08:51.661 "notify_get_types", 00:08:51.661 "accel_get_stats", 00:08:51.661 "accel_set_options", 00:08:51.661 "accel_set_driver", 00:08:51.661 "accel_crypto_key_destroy", 00:08:51.661 "accel_crypto_keys_get", 00:08:51.661 "accel_crypto_key_create", 00:08:51.661 "accel_assign_opc", 00:08:51.661 "accel_get_module_info", 00:08:51.661 "accel_get_opc_assignments", 00:08:51.661 "vmd_rescan", 00:08:51.661 "vmd_remove_device", 00:08:51.661 "vmd_enable", 00:08:51.661 "sock_set_default_impl", 00:08:51.661 "sock_impl_set_options", 00:08:51.661 "sock_impl_get_options", 00:08:51.661 "iobuf_get_stats", 00:08:51.661 "iobuf_set_options", 00:08:51.661 "framework_get_pci_devices", 00:08:51.661 "framework_get_config", 00:08:51.661 "framework_get_subsystems", 00:08:51.661 "trace_get_info", 00:08:51.661 "trace_get_tpoint_group_mask", 00:08:51.661 "trace_disable_tpoint_group", 00:08:51.661 "trace_enable_tpoint_group", 00:08:51.661 "trace_clear_tpoint_mask", 00:08:51.661 "trace_set_tpoint_mask", 00:08:51.661 "spdk_get_version", 00:08:51.661 "rpc_get_methods" 00:08:51.661 ] 00:08:51.661 12:28:34 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:51.661 12:28:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:51.661 12:28:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.661 12:28:34 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:51.661 12:28:34 -- spdkcli/tcp.sh@38 -- # killprocess 53940 00:08:51.661 12:28:34 -- common/autotest_common.sh@926 -- # '[' -z 53940 ']' 
00:08:51.661 12:28:34 -- common/autotest_common.sh@930 -- # kill -0 53940 00:08:51.661 12:28:34 -- common/autotest_common.sh@931 -- # uname 00:08:51.661 12:28:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:51.661 12:28:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53940 00:08:51.661 killing process with pid 53940 00:08:51.661 12:28:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:51.661 12:28:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:51.661 12:28:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53940' 00:08:51.661 12:28:34 -- common/autotest_common.sh@945 -- # kill 53940 00:08:51.661 12:28:34 -- common/autotest_common.sh@950 -- # wait 53940 00:08:54.200 ************************************ 00:08:54.200 END TEST spdkcli_tcp 00:08:54.200 ************************************ 00:08:54.200 00:08:54.200 real 0m4.338s 00:08:54.200 user 0m8.222s 00:08:54.200 sys 0m0.503s 00:08:54.200 12:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.200 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.200 12:28:36 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:54.200 12:28:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.200 12:28:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.200 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.200 ************************************ 00:08:54.200 START TEST dpdk_mem_utility 00:08:54.200 ************************************ 00:08:54.200 12:28:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:54.200 * Looking for test storage... 00:08:54.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:54.200 12:28:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:54.200 12:28:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54055 00:08:54.200 12:28:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54055 00:08:54.200 12:28:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:54.200 12:28:36 -- common/autotest_common.sh@819 -- # '[' -z 54055 ']' 00:08:54.200 12:28:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.200 12:28:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.200 12:28:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.200 12:28:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.200 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.200 [2024-10-01 12:28:36.360190] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
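The dpdk_mem_utility flow that follows works in two steps: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump, first overall and then per heap via the -m 0 form seen below. As a compact sketch with the same script paths as this run:
# Hedged sketch of the dpdk_mem_utility steps shown below.
spdk=/home/vagrant/spdk_repo/spdk
$spdk/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
$spdk/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
$spdk/scripts/dpdk_mem_info.py -m 0           # detailed element dump for heap id 0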
00:08:54.200 [2024-10-01 12:28:36.360682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54055 ] 00:08:54.200 [2024-10-01 12:28:36.530862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.458 [2024-10-01 12:28:36.754473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.458 [2024-10-01 12:28:36.754787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.836 12:28:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:55.836 12:28:38 -- common/autotest_common.sh@852 -- # return 0 00:08:55.836 12:28:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:55.836 12:28:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:55.836 12:28:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.836 12:28:38 -- common/autotest_common.sh@10 -- # set +x 00:08:55.836 { 00:08:55.836 "filename": "/tmp/spdk_mem_dump.txt" 00:08:55.836 } 00:08:55.836 12:28:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.836 12:28:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:55.836 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:55.836 1 heaps totaling size 820.000000 MiB 00:08:55.836 size: 820.000000 MiB heap id: 0 00:08:55.836 end heaps---------- 00:08:55.836 8 mempools totaling size 598.116089 MiB 00:08:55.836 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:55.836 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:55.836 size: 84.521057 MiB name: bdev_io_54055 00:08:55.836 size: 51.011292 MiB name: evtpool_54055 00:08:55.836 size: 50.003479 MiB name: msgpool_54055 00:08:55.836 size: 21.763794 MiB name: PDU_Pool 00:08:55.836 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:55.836 size: 0.026123 MiB name: Session_Pool 00:08:55.836 end mempools------- 00:08:55.836 6 memzones totaling size 4.142822 MiB 00:08:55.836 size: 1.000366 MiB name: RG_ring_0_54055 00:08:55.836 size: 1.000366 MiB name: RG_ring_1_54055 00:08:55.836 size: 1.000366 MiB name: RG_ring_4_54055 00:08:55.836 size: 1.000366 MiB name: RG_ring_5_54055 00:08:55.836 size: 0.125366 MiB name: RG_ring_2_54055 00:08:55.836 size: 0.015991 MiB name: RG_ring_3_54055 00:08:55.836 end memzones------- 00:08:55.836 12:28:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:55.836 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:08:55.836 list of free elements. 
size: 18.451294 MiB 00:08:55.836 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:55.836 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:55.836 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:55.836 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:55.836 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:55.836 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:55.836 element at address: 0x200019600000 with size: 0.999084 MiB 00:08:55.836 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:55.836 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:55.836 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:55.836 element at address: 0x200019900040 with size: 0.936401 MiB 00:08:55.836 element at address: 0x200000200000 with size: 0.829224 MiB 00:08:55.836 element at address: 0x20001b000000 with size: 0.564880 MiB 00:08:55.836 element at address: 0x200019200000 with size: 0.487976 MiB 00:08:55.836 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:55.836 element at address: 0x200013800000 with size: 0.467651 MiB 00:08:55.836 element at address: 0x200028400000 with size: 0.390442 MiB 00:08:55.836 element at address: 0x200003a00000 with size: 0.351990 MiB 00:08:55.836 list of standard malloc elements. size: 199.284302 MiB 00:08:55.836 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:55.836 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:55.836 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:55.836 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:55.836 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:55.836 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:55.836 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:55.836 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:55.836 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:08:55.836 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:08:55.836 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:55.836 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:08:55.836 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:55.836 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:08:55.837 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013877b80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013877c80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:08:55.837 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200028463f40 with size: 0.000244 MiB 00:08:55.837 element at address: 0x200028464040 with size: 0.000244 MiB 00:08:55.837 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846af80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b080 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b180 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b280 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b380 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b480 with size: 0.000244 MiB 00:08:55.838 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b680 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b780 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b880 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846b980 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846be80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c080 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c180 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c280 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c380 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c480 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c580 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c680 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c780 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c880 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846c980 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d080 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d180 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:55.838 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:55.838 list of memzone associated elements. 
size: 602.264404 MiB 00:08:55.838 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:55.838 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:55.838 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:55.838 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:55.838 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:55.838 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54055_0 00:08:55.838 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:55.838 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54055_0 00:08:55.838 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:55.838 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54055_0 00:08:55.838 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:55.838 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:55.838 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:55.838 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:55.838 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:55.838 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54055 00:08:55.838 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:55.838 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54055 00:08:55.838 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:55.838 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54055 00:08:55.838 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:55.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:55.838 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:55.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:55.838 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:55.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:55.838 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:55.838 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:55.838 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:55.838 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54055 00:08:55.838 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:55.838 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54055 00:08:55.838 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:55.838 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54055 00:08:55.838 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:55.838 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54055 00:08:55.838 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:55.838 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54055 00:08:55.838 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:55.838 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:55.838 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:55.838 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:55.838 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:55.838 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:55.838 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:55.838 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_54055 00:08:55.838 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:55.838 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:55.838 element at address: 0x200028464140 with size: 0.023804 MiB 00:08:55.838 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:55.838 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:55.838 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54055 00:08:55.838 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:08:55.838 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:55.838 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:08:55.838 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54055 00:08:55.838 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:55.838 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54055 00:08:55.838 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:08:55.838 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:55.838 12:28:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:55.838 12:28:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54055 00:08:55.838 12:28:38 -- common/autotest_common.sh@926 -- # '[' -z 54055 ']' 00:08:55.838 12:28:38 -- common/autotest_common.sh@930 -- # kill -0 54055 00:08:55.838 12:28:38 -- common/autotest_common.sh@931 -- # uname 00:08:55.838 12:28:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:55.838 12:28:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54055 00:08:55.838 killing process with pid 54055 00:08:55.838 12:28:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:55.838 12:28:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:55.838 12:28:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54055' 00:08:55.839 12:28:38 -- common/autotest_common.sh@945 -- # kill 54055 00:08:55.839 12:28:38 -- common/autotest_common.sh@950 -- # wait 54055 00:08:58.372 ************************************ 00:08:58.372 END TEST dpdk_mem_utility 00:08:58.372 ************************************ 00:08:58.372 00:08:58.372 real 0m4.090s 00:08:58.372 user 0m4.439s 00:08:58.372 sys 0m0.468s 00:08:58.372 12:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.372 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.372 12:28:40 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:58.372 12:28:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.372 12:28:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.372 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.372 ************************************ 00:08:58.372 START TEST event 00:08:58.372 ************************************ 00:08:58.372 12:28:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:58.372 * Looking for test storage... 
00:08:58.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:58.372 12:28:40 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:58.372 12:28:40 -- bdev/nbd_common.sh@6 -- # set -e 00:08:58.372 12:28:40 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:58.372 12:28:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:58.372 12:28:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.372 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.372 ************************************ 00:08:58.372 START TEST event_perf 00:08:58.372 ************************************ 00:08:58.372 12:28:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:58.372 Running I/O for 1 seconds...[2024-10-01 12:28:40.433204] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:58.372 [2024-10-01 12:28:40.433386] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54156 ] 00:08:58.372 [2024-10-01 12:28:40.616398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.372 [2024-10-01 12:28:40.814939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.372 [2024-10-01 12:28:40.815015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.372 [2024-10-01 12:28:40.815086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.372 [2024-10-01 12:28:40.815102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.748 Running I/O for 1 seconds... 00:08:59.748 lcore 0: 186451 00:08:59.748 lcore 1: 186450 00:08:59.748 lcore 2: 186450 00:08:59.748 lcore 3: 186451 00:08:59.748 done. 00:08:59.748 00:08:59.748 real 0m1.793s 00:08:59.748 user 0m4.559s 00:08:59.748 sys 0m0.108s 00:08:59.748 12:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.748 12:28:42 -- common/autotest_common.sh@10 -- # set +x 00:08:59.748 ************************************ 00:08:59.748 END TEST event_perf 00:08:59.748 ************************************ 00:08:59.748 12:28:42 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:59.748 12:28:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:59.748 12:28:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.748 12:28:42 -- common/autotest_common.sh@10 -- # set +x 00:08:59.748 ************************************ 00:08:59.748 START TEST event_reactor 00:08:59.748 ************************************ 00:08:59.748 12:28:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:00.006 [2024-10-01 12:28:42.273670] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:00.006 [2024-10-01 12:28:42.273827] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54201 ] 00:09:00.006 [2024-10-01 12:28:42.446109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.263 [2024-10-01 12:28:42.672641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.638 test_start 00:09:01.638 oneshot 00:09:01.638 tick 100 00:09:01.638 tick 100 00:09:01.638 tick 250 00:09:01.638 tick 100 00:09:01.638 tick 100 00:09:01.638 tick 100 00:09:01.638 tick 250 00:09:01.638 tick 500 00:09:01.638 tick 100 00:09:01.638 tick 100 00:09:01.638 tick 250 00:09:01.638 tick 100 00:09:01.638 tick 100 00:09:01.638 test_end 00:09:01.638 ************************************ 00:09:01.638 00:09:01.638 real 0m1.793s 00:09:01.638 user 0m1.585s 00:09:01.638 sys 0m0.097s 00:09:01.638 12:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.638 12:28:44 -- common/autotest_common.sh@10 -- # set +x 00:09:01.638 END TEST event_reactor 00:09:01.638 ************************************ 00:09:01.638 12:28:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:01.638 12:28:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:01.638 12:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.638 12:28:44 -- common/autotest_common.sh@10 -- # set +x 00:09:01.638 ************************************ 00:09:01.638 START TEST event_reactor_perf 00:09:01.638 ************************************ 00:09:01.638 12:28:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:01.638 [2024-10-01 12:28:44.112902] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:01.638 [2024-10-01 12:28:44.113220] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54238 ] 00:09:01.896 [2024-10-01 12:28:44.281584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.154 [2024-10-01 12:28:44.504304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.528 test_start 00:09:03.528 test_end 00:09:03.528 Performance: 269600 events per second 00:09:03.528 ************************************ 00:09:03.528 END TEST event_reactor_perf 00:09:03.528 ************************************ 00:09:03.528 00:09:03.528 real 0m1.797s 00:09:03.528 user 0m1.596s 00:09:03.528 sys 0m0.089s 00:09:03.528 12:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.528 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:09:03.528 12:28:45 -- event/event.sh@49 -- # uname -s 00:09:03.528 12:28:45 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:03.528 12:28:45 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:03.528 12:28:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.528 12:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.528 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:09:03.528 ************************************ 00:09:03.528 START TEST event_scheduler 00:09:03.528 ************************************ 00:09:03.528 12:28:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:03.528 * Looking for test storage... 00:09:03.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:03.528 12:28:45 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:03.528 12:28:45 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54305 00:09:03.528 12:28:45 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:03.528 12:28:45 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:03.528 12:28:45 -- scheduler/scheduler.sh@37 -- # waitforlisten 54305 00:09:03.528 12:28:45 -- common/autotest_common.sh@819 -- # '[' -z 54305 ']' 00:09:03.528 12:28:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.528 12:28:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.528 12:28:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.528 12:28:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.528 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:09:03.786 [2024-10-01 12:28:46.084961] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:03.786 [2024-10-01 12:28:46.085151] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54305 ] 00:09:03.786 [2024-10-01 12:28:46.257724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.043 [2024-10-01 12:28:46.456286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.043 [2024-10-01 12:28:46.456361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.044 [2024-10-01 12:28:46.456511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.044 [2024-10-01 12:28:46.456686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.631 12:28:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:04.631 12:28:47 -- common/autotest_common.sh@852 -- # return 0 00:09:04.631 12:28:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:04.631 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.631 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.631 POWER: Env isn't set yet! 00:09:04.631 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:04.631 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:04.631 POWER: Cannot set governor of lcore 0 to userspace 00:09:04.631 POWER: Attempting to initialise PSTAT power management... 00:09:04.631 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:04.631 POWER: Cannot set governor of lcore 0 to performance 00:09:04.631 POWER: Attempting to initialise AMD PSTATE power management... 00:09:04.631 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:04.631 POWER: Cannot set governor of lcore 0 to userspace 00:09:04.631 POWER: Attempting to initialise CPPC power management... 00:09:04.631 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:04.631 POWER: Cannot set governor of lcore 0 to userspace 00:09:04.631 POWER: Attempting to initialise VM power management... 
00:09:04.631 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:04.631 POWER: Unable to set Power Management Environment for lcore 0 00:09:04.631 [2024-10-01 12:28:47.050277] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:04.631 [2024-10-01 12:28:47.050300] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:04.631 [2024-10-01 12:28:47.050321] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:04.631 [2024-10-01 12:28:47.050349] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:04.631 [2024-10-01 12:28:47.050363] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:04.631 [2024-10-01 12:28:47.050374] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:04.631 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.631 12:28:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:04.631 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.631 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 [2024-10-01 12:28:47.337039] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:04.889 12:28:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.889 12:28:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 ************************************ 00:09:04.889 START TEST scheduler_create_thread 00:09:04.889 ************************************ 00:09:04.889 12:28:47 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 2 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 3 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 4 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 5 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 6 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 7 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.889 8 00:09:04.889 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:04.889 12:28:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:04.889 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:04.889 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:05.149 9 00:09:05.149 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.149 12:28:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:05.149 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.149 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:05.149 10 00:09:05.149 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.149 12:28:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:05.149 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.149 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:05.149 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.149 12:28:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:05.149 12:28:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:05.149 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.149 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:05.149 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.149 12:28:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:05.149 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.149 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:06.085 12:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:06.085 12:28:48 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:06.085 12:28:48 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:06.085 12:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:06.085 12:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 ************************************ 00:09:07.021 END TEST scheduler_create_thread 00:09:07.021 ************************************ 00:09:07.021 12:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:07.021 00:09:07.021 real 0m2.136s 00:09:07.021 user 0m0.016s 00:09:07.021 sys 0m0.007s 00:09:07.021 12:28:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.021 12:28:49 -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 12:28:49 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:07.021 12:28:49 -- scheduler/scheduler.sh@46 -- # killprocess 54305 00:09:07.021 12:28:49 -- common/autotest_common.sh@926 -- # '[' -z 54305 ']' 00:09:07.021 12:28:49 -- common/autotest_common.sh@930 -- # kill -0 54305 00:09:07.021 12:28:49 -- common/autotest_common.sh@931 -- # uname 00:09:07.021 12:28:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:07.021 12:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54305 00:09:07.279 killing process with pid 54305 00:09:07.280 12:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:07.280 12:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:07.280 12:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54305' 00:09:07.280 12:28:49 -- common/autotest_common.sh@945 -- # kill 54305 00:09:07.280 12:28:49 -- common/autotest_common.sh@950 -- # wait 54305 00:09:07.538 [2024-10-01 12:28:49.966224] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:08.919 ************************************ 00:09:08.919 END TEST event_scheduler 00:09:08.919 ************************************ 00:09:08.919 00:09:08.919 real 0m5.187s 00:09:08.919 user 0m8.811s 00:09:08.919 sys 0m0.415s 00:09:08.919 12:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.919 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:09:08.919 12:28:51 -- event/event.sh@51 -- # modprobe -n nbd 00:09:08.919 12:28:51 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:08.919 12:28:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.919 12:28:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.919 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:09:08.919 ************************************ 00:09:08.919 START TEST app_repeat 00:09:08.919 ************************************ 00:09:08.919 12:28:51 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:08.919 12:28:51 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.919 12:28:51 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.919 12:28:51 -- event/event.sh@13 -- # local nbd_list 00:09:08.919 12:28:51 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.919 12:28:51 -- event/event.sh@14 -- # local bdev_list 00:09:08.919 12:28:51 -- event/event.sh@15 -- # local repeat_times=4 00:09:08.919 12:28:51 -- event/event.sh@17 -- # modprobe nbd 00:09:08.919 Process app_repeat pid: 54411 00:09:08.919 spdk_app_start Round 0 00:09:08.919 12:28:51 -- event/event.sh@19 -- # repeat_pid=54411 00:09:08.919 12:28:51 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.919 12:28:51 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54411' 00:09:08.919 12:28:51 -- event/event.sh@23 -- # for i in {0..2} 00:09:08.919 12:28:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:08.919 12:28:51 -- event/event.sh@25 -- # waitforlisten 54411 /var/tmp/spdk-nbd.sock 00:09:08.919 12:28:51 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:08.919 12:28:51 -- common/autotest_common.sh@819 -- # '[' -z 54411 ']' 00:09:08.919 12:28:51 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:08.919 12:28:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.919 12:28:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.919 12:28:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.919 12:28:51 -- common/autotest_common.sh@10 -- # set +x 00:09:08.919 [2024-10-01 12:28:51.215389] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:08.919 [2024-10-01 12:28:51.215555] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54411 ] 00:09:08.919 [2024-10-01 12:28:51.375506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.178 [2024-10-01 12:28:51.566583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.178 [2024-10-01 12:28:51.566599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.745 12:28:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.745 12:28:52 -- common/autotest_common.sh@852 -- # return 0 00:09:09.745 12:28:52 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:10.003 Malloc0 00:09:10.262 12:28:52 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:10.521 Malloc1 00:09:10.521 12:28:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:10.521 12:28:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:10.522 12:28:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.522 12:28:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:10.522 12:28:52 -- bdev/nbd_common.sh@12 -- # local i 00:09:10.522 12:28:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:10.522 12:28:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.522 12:28:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:10.780 /dev/nbd0 00:09:10.781 12:28:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:10.781 12:28:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:10.781 12:28:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:10.781 12:28:53 -- common/autotest_common.sh@857 -- # local i 00:09:10.781 12:28:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:10.781 12:28:53 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:10.781 12:28:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:10.781 12:28:53 -- common/autotest_common.sh@861 -- # break 00:09:10.781 12:28:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:10.781 12:28:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:10.781 12:28:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:10.781 1+0 records in 00:09:10.781 1+0 records out 00:09:10.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454953 s, 9.0 MB/s 00:09:10.781 12:28:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:10.781 12:28:53 -- common/autotest_common.sh@874 -- # size=4096 00:09:10.781 12:28:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:10.781 12:28:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:10.781 12:28:53 -- common/autotest_common.sh@877 -- # return 0 00:09:10.781 12:28:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:10.781 12:28:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.781 12:28:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:11.040 /dev/nbd1 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:11.040 12:28:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:11.040 12:28:53 -- common/autotest_common.sh@857 -- # local i 00:09:11.040 12:28:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:11.040 12:28:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:11.040 12:28:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:11.040 12:28:53 -- common/autotest_common.sh@861 -- # break 00:09:11.040 12:28:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:11.040 12:28:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:11.040 12:28:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:11.040 1+0 records in 00:09:11.040 1+0 records out 00:09:11.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445577 s, 9.2 MB/s 00:09:11.040 12:28:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.040 12:28:53 -- common/autotest_common.sh@874 -- # size=4096 00:09:11.040 12:28:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.040 12:28:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:11.040 12:28:53 -- common/autotest_common.sh@877 -- # return 0 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.040 12:28:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:11.299 { 00:09:11.299 "nbd_device": "/dev/nbd0", 00:09:11.299 "bdev_name": "Malloc0" 00:09:11.299 }, 00:09:11.299 { 00:09:11.299 "nbd_device": "/dev/nbd1", 
00:09:11.299 "bdev_name": "Malloc1" 00:09:11.299 } 00:09:11.299 ]' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:11.299 { 00:09:11.299 "nbd_device": "/dev/nbd0", 00:09:11.299 "bdev_name": "Malloc0" 00:09:11.299 }, 00:09:11.299 { 00:09:11.299 "nbd_device": "/dev/nbd1", 00:09:11.299 "bdev_name": "Malloc1" 00:09:11.299 } 00:09:11.299 ]' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:11.299 /dev/nbd1' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:11.299 /dev/nbd1' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@65 -- # count=2 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@95 -- # count=2 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:11.299 256+0 records in 00:09:11.299 256+0 records out 00:09:11.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667757 s, 157 MB/s 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:11.299 256+0 records in 00:09:11.299 256+0 records out 00:09:11.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298497 s, 35.1 MB/s 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:11.299 12:28:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:11.558 256+0 records in 00:09:11.558 256+0 records out 00:09:11.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305198 s, 34.4 MB/s 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@51 -- # local i 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.558 12:28:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@41 -- # break 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.816 12:28:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@41 -- # break 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.075 12:28:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.333 12:28:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:12.333 12:28:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.333 12:28:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@65 -- # true 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@65 -- # count=0 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@104 -- # count=0 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:12.592 12:28:54 -- bdev/nbd_common.sh@109 -- # return 0 00:09:12.592 12:28:54 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:12.851 12:28:55 -- event/event.sh@35 -- # sleep 3 00:09:14.257 [2024-10-01 12:28:56.447092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.257 [2024-10-01 12:28:56.625613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.257 [2024-10-01 
12:28:56.625630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.516 [2024-10-01 12:28:56.793708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:14.516 [2024-10-01 12:28:56.793784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:15.893 spdk_app_start Round 1 00:09:15.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:15.893 12:28:58 -- event/event.sh@23 -- # for i in {0..2} 00:09:15.893 12:28:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:15.893 12:28:58 -- event/event.sh@25 -- # waitforlisten 54411 /var/tmp/spdk-nbd.sock 00:09:15.893 12:28:58 -- common/autotest_common.sh@819 -- # '[' -z 54411 ']' 00:09:15.893 12:28:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:15.893 12:28:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:15.893 12:28:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:15.893 12:28:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:15.893 12:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:16.151 12:28:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.151 12:28:58 -- common/autotest_common.sh@852 -- # return 0 00:09:16.151 12:28:58 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.409 Malloc0 00:09:16.667 12:28:58 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.924 Malloc1 00:09:16.924 12:28:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@12 -- # local i 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:16.924 12:28:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:17.183 /dev/nbd0 00:09:17.183 12:28:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:17.183 12:28:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:17.183 12:28:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:17.183 12:28:59 -- common/autotest_common.sh@857 -- # local i 00:09:17.183 12:28:59 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:09:17.183 12:28:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.183 12:28:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:17.183 12:28:59 -- common/autotest_common.sh@861 -- # break 00:09:17.183 12:28:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.183 12:28:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.183 12:28:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.183 1+0 records in 00:09:17.183 1+0 records out 00:09:17.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254288 s, 16.1 MB/s 00:09:17.183 12:28:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.183 12:28:59 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.183 12:28:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.183 12:28:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.183 12:28:59 -- common/autotest_common.sh@877 -- # return 0 00:09:17.183 12:28:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.183 12:28:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.183 12:28:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:17.442 /dev/nbd1 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:17.442 12:28:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:17.442 12:28:59 -- common/autotest_common.sh@857 -- # local i 00:09:17.442 12:28:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:17.442 12:28:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.442 12:28:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:17.442 12:28:59 -- common/autotest_common.sh@861 -- # break 00:09:17.442 12:28:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.442 12:28:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.442 12:28:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.442 1+0 records in 00:09:17.442 1+0 records out 00:09:17.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767015 s, 5.3 MB/s 00:09:17.442 12:28:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.442 12:28:59 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.442 12:28:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.442 12:28:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.442 12:28:59 -- common/autotest_common.sh@877 -- # return 0 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.442 12:28:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.009 12:29:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:18.009 { 00:09:18.009 "nbd_device": "/dev/nbd0", 00:09:18.009 "bdev_name": "Malloc0" 00:09:18.009 }, 00:09:18.009 { 00:09:18.009 
"nbd_device": "/dev/nbd1", 00:09:18.010 "bdev_name": "Malloc1" 00:09:18.010 } 00:09:18.010 ]' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:18.010 { 00:09:18.010 "nbd_device": "/dev/nbd0", 00:09:18.010 "bdev_name": "Malloc0" 00:09:18.010 }, 00:09:18.010 { 00:09:18.010 "nbd_device": "/dev/nbd1", 00:09:18.010 "bdev_name": "Malloc1" 00:09:18.010 } 00:09:18.010 ]' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:18.010 /dev/nbd1' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:18.010 /dev/nbd1' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@65 -- # count=2 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@95 -- # count=2 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:18.010 256+0 records in 00:09:18.010 256+0 records out 00:09:18.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106524 s, 98.4 MB/s 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:18.010 256+0 records in 00:09:18.010 256+0 records out 00:09:18.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318435 s, 32.9 MB/s 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:18.010 256+0 records in 00:09:18.010 256+0 records out 00:09:18.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.037877 s, 27.7 MB/s 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:18.010 12:29:00 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@51 -- # local i 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.010 12:29:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@41 -- # break 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.268 12:29:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@41 -- # break 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.526 12:29:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@65 -- # true 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@65 -- # count=0 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@104 -- # count=0 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:19.093 12:29:01 -- bdev/nbd_common.sh@109 -- # return 0 00:09:19.093 12:29:01 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:19.352 12:29:01 -- event/event.sh@35 -- # sleep 3 00:09:20.728 [2024-10-01 12:29:02.956117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.728 [2024-10-01 12:29:03.134447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
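The Round 1 pass above exercises SPDK's NBD export path end to end over /var/tmp/spdk-nbd.sock: two malloc bdevs are created, exposed as /dev/nbd0 and /dev/nbd1, written from a random data file, read back with cmp, and stopped before spdk_kill_instance shuts the app down. A condensed sketch of that flow, assuming a running spdk_tgt serving that socket and root access to /dev/nbd*; the temp file below stands in for the test's nbdrandtest file:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s "$sock" bdev_malloc_create 64 4096        # 64 MiB bdev, 4096-byte blocks -> Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096        # second bdev -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"                    # non-zero exit on any mismatch
    done
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1
    $rpc -s "$sock" spdk_kill_instance SIGTERM        # same shutdown RPC event.sh issues above
    rm -f "$tmp"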
00:09:20.728 [2024-10-01 12:29:03.134450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.988 [2024-10-01 12:29:03.302811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:20.988 [2024-10-01 12:29:03.302890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:22.364 spdk_app_start Round 2 00:09:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:22.364 12:29:04 -- event/event.sh@23 -- # for i in {0..2} 00:09:22.364 12:29:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:22.364 12:29:04 -- event/event.sh@25 -- # waitforlisten 54411 /var/tmp/spdk-nbd.sock 00:09:22.364 12:29:04 -- common/autotest_common.sh@819 -- # '[' -z 54411 ']' 00:09:22.364 12:29:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:22.364 12:29:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.364 12:29:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:22.364 12:29:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.364 12:29:04 -- common/autotest_common.sh@10 -- # set +x 00:09:22.622 12:29:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:22.622 12:29:05 -- common/autotest_common.sh@852 -- # return 0 00:09:22.622 12:29:05 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:23.189 Malloc0 00:09:23.189 12:29:05 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:23.447 Malloc1 00:09:23.447 12:29:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@12 -- # local i 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.447 12:29:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:23.704 /dev/nbd0 00:09:23.704 12:29:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:23.704 12:29:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:23.704 12:29:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:23.704 12:29:06 -- common/autotest_common.sh@857 -- # local i 00:09:23.704 12:29:06 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:23.704 12:29:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:23.704 12:29:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:23.704 12:29:06 -- common/autotest_common.sh@861 -- # break 00:09:23.704 12:29:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:23.704 12:29:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:23.704 12:29:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.704 1+0 records in 00:09:23.705 1+0 records out 00:09:23.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222716 s, 18.4 MB/s 00:09:23.705 12:29:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.705 12:29:06 -- common/autotest_common.sh@874 -- # size=4096 00:09:23.705 12:29:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.705 12:29:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:23.705 12:29:06 -- common/autotest_common.sh@877 -- # return 0 00:09:23.705 12:29:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.705 12:29:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.705 12:29:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:23.963 /dev/nbd1 00:09:23.963 12:29:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:23.963 12:29:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:23.963 12:29:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:23.963 12:29:06 -- common/autotest_common.sh@857 -- # local i 00:09:23.963 12:29:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:23.963 12:29:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:23.963 12:29:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:23.963 12:29:06 -- common/autotest_common.sh@861 -- # break 00:09:23.963 12:29:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:23.963 12:29:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:23.963 12:29:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:23.963 1+0 records in 00:09:23.963 1+0 records out 00:09:23.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354478 s, 11.6 MB/s 00:09:23.964 12:29:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.964 12:29:06 -- common/autotest_common.sh@874 -- # size=4096 00:09:23.964 12:29:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:23.964 12:29:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:23.964 12:29:06 -- common/autotest_common.sh@877 -- # return 0 00:09:23.964 12:29:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.964 12:29:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:23.964 12:29:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:24.221 12:29:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.221 12:29:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:24.479 12:29:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:24.479 { 00:09:24.479 "nbd_device": "/dev/nbd0", 00:09:24.479 "bdev_name": "Malloc0" 
00:09:24.479 }, 00:09:24.479 { 00:09:24.479 "nbd_device": "/dev/nbd1", 00:09:24.479 "bdev_name": "Malloc1" 00:09:24.479 } 00:09:24.479 ]' 00:09:24.479 12:29:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:24.479 { 00:09:24.479 "nbd_device": "/dev/nbd0", 00:09:24.479 "bdev_name": "Malloc0" 00:09:24.479 }, 00:09:24.479 { 00:09:24.479 "nbd_device": "/dev/nbd1", 00:09:24.480 "bdev_name": "Malloc1" 00:09:24.480 } 00:09:24.480 ]' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:24.480 /dev/nbd1' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:24.480 /dev/nbd1' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@65 -- # count=2 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@95 -- # count=2 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:24.480 256+0 records in 00:09:24.480 256+0 records out 00:09:24.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00702184 s, 149 MB/s 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:24.480 256+0 records in 00:09:24.480 256+0 records out 00:09:24.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255844 s, 41.0 MB/s 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:24.480 256+0 records in 00:09:24.480 256+0 records out 00:09:24.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293859 s, 35.7 MB/s 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@51 -- # local i 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.480 12:29:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@41 -- # break 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.739 12:29:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@41 -- # break 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.306 12:29:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@65 -- # true 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@65 -- # count=0 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@104 -- # count=0 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:25.565 12:29:07 -- bdev/nbd_common.sh@109 -- # return 0 00:09:25.565 12:29:07 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:25.823 12:29:08 -- event/event.sh@35 -- # sleep 3 00:09:27.200 [2024-10-01 12:29:09.350229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.200 [2024-10-01 12:29:09.526010] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:27.200 [2024-10-01 12:29:09.526019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.200 [2024-10-01 12:29:09.690666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:27.200 [2024-10-01 12:29:09.690749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:29.104 12:29:11 -- event/event.sh@38 -- # waitforlisten 54411 /var/tmp/spdk-nbd.sock 00:09:29.104 12:29:11 -- common/autotest_common.sh@819 -- # '[' -z 54411 ']' 00:09:29.104 12:29:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:29.104 12:29:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.104 12:29:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:29.104 12:29:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.104 12:29:11 -- common/autotest_common.sh@10 -- # set +x 00:09:29.104 12:29:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:29.104 12:29:11 -- common/autotest_common.sh@852 -- # return 0 00:09:29.104 12:29:11 -- event/event.sh@39 -- # killprocess 54411 00:09:29.104 12:29:11 -- common/autotest_common.sh@926 -- # '[' -z 54411 ']' 00:09:29.104 12:29:11 -- common/autotest_common.sh@930 -- # kill -0 54411 00:09:29.104 12:29:11 -- common/autotest_common.sh@931 -- # uname 00:09:29.104 12:29:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:29.104 12:29:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54411 00:09:29.104 killing process with pid 54411 00:09:29.104 12:29:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:29.104 12:29:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:29.104 12:29:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54411' 00:09:29.104 12:29:11 -- common/autotest_common.sh@945 -- # kill 54411 00:09:29.104 12:29:11 -- common/autotest_common.sh@950 -- # wait 54411 00:09:30.482 spdk_app_start is called in Round 0. 00:09:30.482 Shutdown signal received, stop current app iteration 00:09:30.482 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:09:30.482 spdk_app_start is called in Round 1. 00:09:30.482 Shutdown signal received, stop current app iteration 00:09:30.482 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:09:30.482 spdk_app_start is called in Round 2. 00:09:30.482 Shutdown signal received, stop current app iteration 00:09:30.482 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:09:30.482 spdk_app_start is called in Round 3. 
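Each killprocess call traced above runs the same guard sequence before stopping the app_repeat target: check the PID is still alive, check it really is an SPDK reactor process, then SIGTERM and wait for it. A rough reconstruction from that xtrace (the real helper in autotest_common.sh has extra branches, for example for targets launched under sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # is the process still running?
        [[ $(uname) == Linux ]] || return 1               # the ps check below is Linux-specific
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the runs above
        [[ $process_name == sudo ]] && return 1           # sketch only; the real helper handles sudo differently
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }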
00:09:30.482 Shutdown signal received, stop current app iteration 00:09:30.482 12:29:12 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:30.482 ************************************ 00:09:30.482 END TEST app_repeat 00:09:30.482 ************************************ 00:09:30.482 12:29:12 -- event/event.sh@42 -- # return 0 00:09:30.482 00:09:30.482 real 0m21.454s 00:09:30.482 user 0m47.062s 00:09:30.482 sys 0m2.803s 00:09:30.482 12:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.482 12:29:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.482 12:29:12 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:30.482 12:29:12 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:30.482 12:29:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.482 12:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.482 12:29:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.482 ************************************ 00:09:30.482 START TEST cpu_locks 00:09:30.482 ************************************ 00:09:30.482 12:29:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:30.482 * Looking for test storage... 00:09:30.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:30.482 12:29:12 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:30.482 12:29:12 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:30.482 12:29:12 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:30.482 12:29:12 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:30.482 12:29:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.482 12:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.482 12:29:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.482 ************************************ 00:09:30.482 START TEST default_locks 00:09:30.482 ************************************ 00:09:30.482 12:29:12 -- common/autotest_common.sh@1104 -- # default_locks 00:09:30.482 12:29:12 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=54871 00:09:30.482 12:29:12 -- event/cpu_locks.sh@47 -- # waitforlisten 54871 00:09:30.482 12:29:12 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.482 12:29:12 -- common/autotest_common.sh@819 -- # '[' -z 54871 ']' 00:09:30.482 12:29:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.482 12:29:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.482 12:29:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.482 12:29:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.482 12:29:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.482 [2024-10-01 12:29:12.877013] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
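Every cpu_locks case follows the pattern that starts here: launch spdk_tgt pinned to a core mask, then block in waitforlisten until its RPC socket answers before inspecting lock state. Only the prologue of waitforlisten is visible in the trace; one plausible way to implement the same wait is to poll a cheap RPC until it succeeds (rpc_get_methods is used below purely as an example of an innocuous call, not because the helper necessarily uses it):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    for _ in {1..100}; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break                          # socket is up and answering
        kill -0 "$tgt_pid" || { echo "spdk_tgt exited before listening"; break; }
        sleep 0.1
    done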
00:09:30.482 [2024-10-01 12:29:12.877192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54871 ] 00:09:30.742 [2024-10-01 12:29:13.047106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.001 [2024-10-01 12:29:13.273435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.001 [2024-10-01 12:29:13.273733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.378 12:29:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:32.378 12:29:14 -- common/autotest_common.sh@852 -- # return 0 00:09:32.378 12:29:14 -- event/cpu_locks.sh@49 -- # locks_exist 54871 00:09:32.378 12:29:14 -- event/cpu_locks.sh@22 -- # lslocks -p 54871 00:09:32.378 12:29:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.638 12:29:15 -- event/cpu_locks.sh@50 -- # killprocess 54871 00:09:32.638 12:29:15 -- common/autotest_common.sh@926 -- # '[' -z 54871 ']' 00:09:32.638 12:29:15 -- common/autotest_common.sh@930 -- # kill -0 54871 00:09:32.638 12:29:15 -- common/autotest_common.sh@931 -- # uname 00:09:32.638 12:29:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:32.638 12:29:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54871 00:09:32.638 12:29:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:32.638 killing process with pid 54871 00:09:32.638 12:29:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:32.638 12:29:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54871' 00:09:32.638 12:29:15 -- common/autotest_common.sh@945 -- # kill 54871 00:09:32.638 12:29:15 -- common/autotest_common.sh@950 -- # wait 54871 00:09:35.171 12:29:17 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 54871 00:09:35.171 12:29:17 -- common/autotest_common.sh@640 -- # local es=0 00:09:35.171 12:29:17 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 54871 00:09:35.171 12:29:17 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:35.171 12:29:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.171 12:29:17 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:35.171 12:29:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.171 12:29:17 -- common/autotest_common.sh@643 -- # waitforlisten 54871 00:09:35.171 12:29:17 -- common/autotest_common.sh@819 -- # '[' -z 54871 ']' 00:09:35.171 12:29:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.171 12:29:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.171 12:29:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
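With core locks enabled, spdk_tgt takes an advisory file lock per core in its mask (the /var/tmp/spdk_cpu_lock_* files the harness globs for later in this log). The locks_exist check traced above is essentially the two commands shown, wrapped in a small helper:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock      # true iff $pid holds a spdk_cpu_lock_* lock
    }
    locks_exist 54871 && echo "pid 54871 still holds its core 0 lock"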
00:09:35.171 12:29:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.171 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:35.171 ERROR: process (pid: 54871) is no longer running 00:09:35.171 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (54871) - No such process 00:09:35.171 12:29:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:35.171 12:29:17 -- common/autotest_common.sh@852 -- # return 1 00:09:35.171 12:29:17 -- common/autotest_common.sh@643 -- # es=1 00:09:35.171 12:29:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:35.172 12:29:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:35.172 12:29:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:35.172 12:29:17 -- event/cpu_locks.sh@54 -- # no_locks 00:09:35.172 12:29:17 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:35.172 12:29:17 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:35.172 12:29:17 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:35.172 00:09:35.172 real 0m4.389s 00:09:35.172 user 0m4.777s 00:09:35.172 sys 0m0.593s 00:09:35.172 12:29:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.172 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:35.172 ************************************ 00:09:35.172 END TEST default_locks 00:09:35.172 ************************************ 00:09:35.172 12:29:17 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:35.172 12:29:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.172 12:29:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.172 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:35.172 ************************************ 00:09:35.172 START TEST default_locks_via_rpc 00:09:35.172 ************************************ 00:09:35.172 12:29:17 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:35.172 12:29:17 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=54948 00:09:35.172 12:29:17 -- event/cpu_locks.sh@63 -- # waitforlisten 54948 00:09:35.172 12:29:17 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:35.172 12:29:17 -- common/autotest_common.sh@819 -- # '[' -z 54948 ']' 00:09:35.172 12:29:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.172 12:29:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.172 12:29:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.172 12:29:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.172 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:35.172 [2024-10-01 12:29:17.312491] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
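The default_locks_via_rpc case starting here covers the same bookkeeping but toggles it at runtime: the framework_disable_cpumask_locks call that appears a little further down releases the per-core lock files, and framework_enable_cpumask_locks reclaims them. A minimal sketch against an already-running target on the default /var/tmp/spdk.sock socket ($tgt_pid is assumed to hold the target's PID):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks               # drop the /var/tmp/spdk_cpu_lock_* locks
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock      # expect 0 while locks are released
    $rpc framework_enable_cpumask_locks                # take the locks again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core locks reclaimed"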
00:09:35.172 [2024-10-01 12:29:17.312672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54948 ] 00:09:35.172 [2024-10-01 12:29:17.484671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.430 [2024-10-01 12:29:17.711867] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.430 [2024-10-01 12:29:17.712150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.802 12:29:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.802 12:29:19 -- common/autotest_common.sh@852 -- # return 0 00:09:36.802 12:29:19 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:36.802 12:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:36.802 12:29:19 -- common/autotest_common.sh@10 -- # set +x 00:09:36.802 12:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:36.802 12:29:19 -- event/cpu_locks.sh@67 -- # no_locks 00:09:36.802 12:29:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:36.802 12:29:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:36.802 12:29:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:36.802 12:29:19 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:36.802 12:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:36.802 12:29:19 -- common/autotest_common.sh@10 -- # set +x 00:09:36.802 12:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:36.802 12:29:19 -- event/cpu_locks.sh@71 -- # locks_exist 54948 00:09:36.802 12:29:19 -- event/cpu_locks.sh@22 -- # lslocks -p 54948 00:09:36.802 12:29:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:37.060 12:29:19 -- event/cpu_locks.sh@73 -- # killprocess 54948 00:09:37.060 12:29:19 -- common/autotest_common.sh@926 -- # '[' -z 54948 ']' 00:09:37.060 12:29:19 -- common/autotest_common.sh@930 -- # kill -0 54948 00:09:37.060 12:29:19 -- common/autotest_common.sh@931 -- # uname 00:09:37.060 12:29:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:37.060 12:29:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54948 00:09:37.060 12:29:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:37.060 killing process with pid 54948 00:09:37.060 12:29:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:37.060 12:29:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54948' 00:09:37.060 12:29:19 -- common/autotest_common.sh@945 -- # kill 54948 00:09:37.060 12:29:19 -- common/autotest_common.sh@950 -- # wait 54948 00:09:39.637 00:09:39.637 real 0m4.395s 00:09:39.637 user 0m4.821s 00:09:39.637 sys 0m0.616s 00:09:39.637 12:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.637 12:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:39.637 ************************************ 00:09:39.637 END TEST default_locks_via_rpc 00:09:39.637 ************************************ 00:09:39.637 12:29:21 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:39.637 12:29:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:39.637 12:29:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:39.637 12:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:39.637 
************************************ 00:09:39.637 START TEST non_locking_app_on_locked_coremask 00:09:39.637 ************************************ 00:09:39.637 12:29:21 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:39.637 12:29:21 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55030 00:09:39.637 12:29:21 -- event/cpu_locks.sh@81 -- # waitforlisten 55030 /var/tmp/spdk.sock 00:09:39.637 12:29:21 -- common/autotest_common.sh@819 -- # '[' -z 55030 ']' 00:09:39.637 12:29:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.637 12:29:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:39.637 12:29:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.637 12:29:21 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:39.637 12:29:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:39.637 12:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:39.637 [2024-10-01 12:29:21.746171] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:39.637 [2024-10-01 12:29:21.746395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55030 ] 00:09:39.637 [2024-10-01 12:29:21.904206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.637 [2024-10-01 12:29:22.090942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.637 [2024-10-01 12:29:22.091289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.036 12:29:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:41.036 12:29:23 -- common/autotest_common.sh@852 -- # return 0 00:09:41.036 12:29:23 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:41.036 12:29:23 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55053 00:09:41.036 12:29:23 -- event/cpu_locks.sh@85 -- # waitforlisten 55053 /var/tmp/spdk2.sock 00:09:41.036 12:29:23 -- common/autotest_common.sh@819 -- # '[' -z 55053 ']' 00:09:41.036 12:29:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:41.036 12:29:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:41.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:41.036 12:29:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:41.036 12:29:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:41.036 12:29:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.295 [2024-10-01 12:29:23.598762] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:41.295 [2024-10-01 12:29:23.598906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55053 ] 00:09:41.295 [2024-10-01 12:29:23.768846] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:41.295 [2024-10-01 12:29:23.768920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.861 [2024-10-01 12:29:24.142419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.861 [2024-10-01 12:29:24.142693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.762 12:29:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:43.762 12:29:26 -- common/autotest_common.sh@852 -- # return 0 00:09:43.762 12:29:26 -- event/cpu_locks.sh@87 -- # locks_exist 55030 00:09:43.762 12:29:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:43.762 12:29:26 -- event/cpu_locks.sh@22 -- # lslocks -p 55030 00:09:44.696 12:29:26 -- event/cpu_locks.sh@89 -- # killprocess 55030 00:09:44.696 12:29:26 -- common/autotest_common.sh@926 -- # '[' -z 55030 ']' 00:09:44.696 12:29:26 -- common/autotest_common.sh@930 -- # kill -0 55030 00:09:44.696 12:29:26 -- common/autotest_common.sh@931 -- # uname 00:09:44.696 12:29:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:44.696 12:29:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55030 00:09:44.696 12:29:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:44.696 12:29:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:44.696 killing process with pid 55030 00:09:44.696 12:29:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55030' 00:09:44.696 12:29:26 -- common/autotest_common.sh@945 -- # kill 55030 00:09:44.696 12:29:26 -- common/autotest_common.sh@950 -- # wait 55030 00:09:48.978 12:29:31 -- event/cpu_locks.sh@90 -- # killprocess 55053 00:09:48.978 12:29:31 -- common/autotest_common.sh@926 -- # '[' -z 55053 ']' 00:09:48.978 12:29:31 -- common/autotest_common.sh@930 -- # kill -0 55053 00:09:48.978 12:29:31 -- common/autotest_common.sh@931 -- # uname 00:09:48.978 12:29:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:48.978 12:29:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55053 00:09:48.978 12:29:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:48.978 12:29:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:48.978 killing process with pid 55053 00:09:48.978 12:29:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55053' 00:09:48.978 12:29:31 -- common/autotest_common.sh@945 -- # kill 55053 00:09:48.978 12:29:31 -- common/autotest_common.sh@950 -- # wait 55053 00:09:50.882 00:09:50.882 real 0m11.557s 00:09:50.882 user 0m12.863s 00:09:50.882 sys 0m1.259s 00:09:50.882 12:29:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.882 ************************************ 00:09:50.882 END TEST non_locking_app_on_locked_coremask 00:09:50.882 12:29:33 -- common/autotest_common.sh@10 -- # set +x 00:09:50.882 ************************************ 00:09:50.882 12:29:33 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:50.882 12:29:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:50.882 12:29:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.882 12:29:33 -- common/autotest_common.sh@10 -- # set +x 00:09:50.882 ************************************ 00:09:50.882 START TEST locking_app_on_unlocked_coremask 00:09:50.882 ************************************ 00:09:50.882 12:29:33 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:50.882 12:29:33 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55198 00:09:50.882 12:29:33 -- event/cpu_locks.sh@99 -- # waitforlisten 55198 /var/tmp/spdk.sock 00:09:50.882 12:29:33 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:50.882 12:29:33 -- common/autotest_common.sh@819 -- # '[' -z 55198 ']' 00:09:50.882 12:29:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.882 12:29:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:50.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.882 12:29:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.882 12:29:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:50.882 12:29:33 -- common/autotest_common.sh@10 -- # set +x 00:09:50.882 [2024-10-01 12:29:33.379244] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:50.882 [2024-10-01 12:29:33.379427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55198 ] 00:09:51.142 [2024-10-01 12:29:33.550723] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:51.142 [2024-10-01 12:29:33.550797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.401 [2024-10-01 12:29:33.736509] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.401 [2024-10-01 12:29:33.736834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.779 12:29:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.779 12:29:35 -- common/autotest_common.sh@852 -- # return 0 00:09:52.779 12:29:35 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55227 00:09:52.779 12:29:35 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:52.779 12:29:35 -- event/cpu_locks.sh@103 -- # waitforlisten 55227 /var/tmp/spdk2.sock 00:09:52.779 12:29:35 -- common/autotest_common.sh@819 -- # '[' -z 55227 ']' 00:09:52.779 12:29:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:52.779 12:29:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:52.779 12:29:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:52.779 12:29:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:52.779 12:29:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.779 [2024-10-01 12:29:35.178221] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
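The two cases around this point both hinge on --disable-cpumask-locks: a target started with that flag logs 'CPU core locks deactivated.' and takes no /var/tmp/spdk_cpu_lock_* locks, so two targets can share core 0 as long as at least one of them opts out. Reduced to its essentials, with the second instance on its own RPC socket as in the trace:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $bin -m 0x1 &                                                   # claims the core 0 lock
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # same core, no lock taken
    # If neither instance passed --disable-cpumask-locks, the second would exit with
    # "Cannot create lock on core 0, probably process <pid> has claimed it.",
    # which is what the locked-coremask case further down demonstrates.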
00:09:52.779 [2024-10-01 12:29:35.178388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55227 ] 00:09:53.038 [2024-10-01 12:29:35.352300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.296 [2024-10-01 12:29:35.729237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.296 [2024-10-01 12:29:35.729489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.198 12:29:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:55.198 12:29:37 -- common/autotest_common.sh@852 -- # return 0 00:09:55.198 12:29:37 -- event/cpu_locks.sh@105 -- # locks_exist 55227 00:09:55.198 12:29:37 -- event/cpu_locks.sh@22 -- # lslocks -p 55227 00:09:55.198 12:29:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:56.131 12:29:38 -- event/cpu_locks.sh@107 -- # killprocess 55198 00:09:56.131 12:29:38 -- common/autotest_common.sh@926 -- # '[' -z 55198 ']' 00:09:56.131 12:29:38 -- common/autotest_common.sh@930 -- # kill -0 55198 00:09:56.131 12:29:38 -- common/autotest_common.sh@931 -- # uname 00:09:56.131 12:29:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.131 12:29:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55198 00:09:56.131 12:29:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:56.131 12:29:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:56.131 killing process with pid 55198 00:09:56.131 12:29:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55198' 00:09:56.131 12:29:38 -- common/autotest_common.sh@945 -- # kill 55198 00:09:56.131 12:29:38 -- common/autotest_common.sh@950 -- # wait 55198 00:10:00.316 12:29:42 -- event/cpu_locks.sh@108 -- # killprocess 55227 00:10:00.316 12:29:42 -- common/autotest_common.sh@926 -- # '[' -z 55227 ']' 00:10:00.316 12:29:42 -- common/autotest_common.sh@930 -- # kill -0 55227 00:10:00.316 12:29:42 -- common/autotest_common.sh@931 -- # uname 00:10:00.316 12:29:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:00.316 12:29:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55227 00:10:00.316 12:29:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:00.316 12:29:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:00.316 killing process with pid 55227 00:10:00.316 12:29:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55227' 00:10:00.316 12:29:42 -- common/autotest_common.sh@945 -- # kill 55227 00:10:00.316 12:29:42 -- common/autotest_common.sh@950 -- # wait 55227 00:10:02.908 00:10:02.908 real 0m11.578s 00:10:02.908 user 0m12.720s 00:10:02.908 sys 0m1.276s 00:10:02.908 12:29:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.908 12:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:02.908 ************************************ 00:10:02.908 END TEST locking_app_on_unlocked_coremask 00:10:02.908 ************************************ 00:10:02.908 12:29:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:02.908 12:29:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:02.908 12:29:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.908 12:29:44 -- common/autotest_common.sh@10 -- # set 
+x 00:10:02.908 ************************************ 00:10:02.908 START TEST locking_app_on_locked_coremask 00:10:02.908 ************************************ 00:10:02.908 12:29:44 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:02.908 12:29:44 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55371 00:10:02.908 12:29:44 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.908 12:29:44 -- event/cpu_locks.sh@116 -- # waitforlisten 55371 /var/tmp/spdk.sock 00:10:02.908 12:29:44 -- common/autotest_common.sh@819 -- # '[' -z 55371 ']' 00:10:02.908 12:29:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.908 12:29:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:02.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.908 12:29:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.908 12:29:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:02.908 12:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:02.908 [2024-10-01 12:29:45.003126] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:02.908 [2024-10-01 12:29:45.003317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55371 ] 00:10:02.908 [2024-10-01 12:29:45.171816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.908 [2024-10-01 12:29:45.384791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:02.908 [2024-10-01 12:29:45.385030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.284 12:29:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:04.284 12:29:46 -- common/autotest_common.sh@852 -- # return 0 00:10:04.284 12:29:46 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55395 00:10:04.284 12:29:46 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55395 /var/tmp/spdk2.sock 00:10:04.284 12:29:46 -- common/autotest_common.sh@640 -- # local es=0 00:10:04.284 12:29:46 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:04.284 12:29:46 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55395 /var/tmp/spdk2.sock 00:10:04.284 12:29:46 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:04.284 12:29:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:04.284 12:29:46 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:04.284 12:29:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:04.284 12:29:46 -- common/autotest_common.sh@643 -- # waitforlisten 55395 /var/tmp/spdk2.sock 00:10:04.284 12:29:46 -- common/autotest_common.sh@819 -- # '[' -z 55395 ']' 00:10:04.284 12:29:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.284 12:29:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:04.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:04.284 12:29:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:04.284 12:29:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:04.284 12:29:46 -- common/autotest_common.sh@10 -- # set +x 00:10:04.543 [2024-10-01 12:29:46.886060] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:04.543 [2024-10-01 12:29:46.886235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55395 ] 00:10:04.543 [2024-10-01 12:29:47.063004] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55371 has claimed it. 00:10:04.543 [2024-10-01 12:29:47.063094] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:05.110 ERROR: process (pid: 55395) is no longer running 00:10:05.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55395) - No such process 00:10:05.110 12:29:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:05.110 12:29:47 -- common/autotest_common.sh@852 -- # return 1 00:10:05.110 12:29:47 -- common/autotest_common.sh@643 -- # es=1 00:10:05.110 12:29:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:05.110 12:29:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:05.110 12:29:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:05.110 12:29:47 -- event/cpu_locks.sh@122 -- # locks_exist 55371 00:10:05.110 12:29:47 -- event/cpu_locks.sh@22 -- # lslocks -p 55371 00:10:05.110 12:29:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.678 12:29:47 -- event/cpu_locks.sh@124 -- # killprocess 55371 00:10:05.678 12:29:47 -- common/autotest_common.sh@926 -- # '[' -z 55371 ']' 00:10:05.678 12:29:47 -- common/autotest_common.sh@930 -- # kill -0 55371 00:10:05.678 12:29:47 -- common/autotest_common.sh@931 -- # uname 00:10:05.678 12:29:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:05.678 12:29:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55371 00:10:05.678 12:29:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:05.678 12:29:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:05.678 killing process with pid 55371 00:10:05.678 12:29:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55371' 00:10:05.678 12:29:48 -- common/autotest_common.sh@945 -- # kill 55371 00:10:05.678 12:29:48 -- common/autotest_common.sh@950 -- # wait 55371 00:10:08.212 00:10:08.212 real 0m5.221s 00:10:08.212 user 0m5.922s 00:10:08.212 sys 0m0.782s 00:10:08.212 12:29:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.212 12:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 ************************************ 00:10:08.212 END TEST locking_app_on_locked_coremask 00:10:08.212 ************************************ 00:10:08.212 12:29:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:08.212 12:29:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:08.212 12:29:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:08.212 12:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 ************************************ 00:10:08.212 START TEST locking_overlapped_coremask 00:10:08.212 ************************************ 00:10:08.212 12:29:50 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:08.212 12:29:50 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55470 00:10:08.212 12:29:50 -- event/cpu_locks.sh@133 -- # waitforlisten 55470 /var/tmp/spdk.sock 00:10:08.212 12:29:50 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:08.212 12:29:50 -- common/autotest_common.sh@819 -- # '[' -z 55470 ']' 00:10:08.212 12:29:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.212 12:29:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.212 12:29:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.212 12:29:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.212 12:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 [2024-10-01 12:29:50.272662] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:08.212 [2024-10-01 12:29:50.272835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55470 ] 00:10:08.212 [2024-10-01 12:29:50.443232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.212 [2024-10-01 12:29:50.633228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.212 [2024-10-01 12:29:50.633657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.212 [2024-10-01 12:29:50.633848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.212 [2024-10-01 12:29:50.633849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.587 12:29:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.587 12:29:51 -- common/autotest_common.sh@852 -- # return 0 00:10:09.587 12:29:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55490 00:10:09.587 12:29:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:09.587 12:29:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55490 /var/tmp/spdk2.sock 00:10:09.587 12:29:51 -- common/autotest_common.sh@640 -- # local es=0 00:10:09.587 12:29:51 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55490 /var/tmp/spdk2.sock 00:10:09.587 12:29:51 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:09.587 12:29:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:09.587 12:29:51 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:09.587 12:29:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:09.587 12:29:51 -- common/autotest_common.sh@643 -- # waitforlisten 55490 /var/tmp/spdk2.sock 00:10:09.587 12:29:51 -- common/autotest_common.sh@819 -- # '[' -z 55490 ']' 00:10:09.587 12:29:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:09.587 12:29:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:09.587 12:29:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:09.587 12:29:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:09.587 12:29:51 -- common/autotest_common.sh@10 -- # set +x 00:10:09.587 [2024-10-01 12:29:52.096815] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:09.587 [2024-10-01 12:29:52.096995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55490 ] 00:10:09.845 [2024-10-01 12:29:52.276908] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55470 has claimed it. 00:10:09.845 [2024-10-01 12:29:52.277000] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:10.411 ERROR: process (pid: 55490) is no longer running 00:10:10.411 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55490) - No such process 00:10:10.411 12:29:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:10.411 12:29:52 -- common/autotest_common.sh@852 -- # return 1 00:10:10.411 12:29:52 -- common/autotest_common.sh@643 -- # es=1 00:10:10.411 12:29:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:10.411 12:29:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:10.411 12:29:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:10.411 12:29:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:10.411 12:29:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:10.411 12:29:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:10.411 12:29:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:10.411 12:29:52 -- event/cpu_locks.sh@141 -- # killprocess 55470 00:10:10.411 12:29:52 -- common/autotest_common.sh@926 -- # '[' -z 55470 ']' 00:10:10.411 12:29:52 -- common/autotest_common.sh@930 -- # kill -0 55470 00:10:10.411 12:29:52 -- common/autotest_common.sh@931 -- # uname 00:10:10.411 12:29:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:10.411 12:29:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55470 00:10:10.411 12:29:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:10.411 killing process with pid 55470 00:10:10.411 12:29:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:10.411 12:29:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55470' 00:10:10.411 12:29:52 -- common/autotest_common.sh@945 -- # kill 55470 00:10:10.411 12:29:52 -- common/autotest_common.sh@950 -- # wait 55470 00:10:12.935 00:10:12.935 real 0m4.728s 00:10:12.935 user 0m12.994s 00:10:12.935 sys 0m0.551s 00:10:12.935 12:29:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.935 12:29:54 -- common/autotest_common.sh@10 -- # set +x 00:10:12.935 ************************************ 00:10:12.936 END TEST locking_overlapped_coremask 00:10:12.936 ************************************ 00:10:12.936 12:29:54 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:12.936 12:29:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:12.936 12:29:54 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:10:12.936 12:29:54 -- common/autotest_common.sh@10 -- # set +x 00:10:12.936 ************************************ 00:10:12.936 START TEST locking_overlapped_coremask_via_rpc 00:10:12.936 ************************************ 00:10:12.936 12:29:54 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:12.936 12:29:54 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55556 00:10:12.936 12:29:54 -- event/cpu_locks.sh@149 -- # waitforlisten 55556 /var/tmp/spdk.sock 00:10:12.936 12:29:54 -- common/autotest_common.sh@819 -- # '[' -z 55556 ']' 00:10:12.936 12:29:54 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:12.936 12:29:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.936 12:29:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:12.936 12:29:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.936 12:29:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:12.936 12:29:54 -- common/autotest_common.sh@10 -- # set +x 00:10:12.936 [2024-10-01 12:29:55.042368] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:12.936 [2024-10-01 12:29:55.042533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55556 ] 00:10:12.936 [2024-10-01 12:29:55.225901] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:12.936 [2024-10-01 12:29:55.225980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.936 [2024-10-01 12:29:55.443310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:12.936 [2024-10-01 12:29:55.443705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.936 [2024-10-01 12:29:55.444008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.936 [2024-10-01 12:29:55.444015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.308 12:29:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:14.309 12:29:56 -- common/autotest_common.sh@852 -- # return 0 00:10:14.309 12:29:56 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:14.309 12:29:56 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55587 00:10:14.309 12:29:56 -- event/cpu_locks.sh@153 -- # waitforlisten 55587 /var/tmp/spdk2.sock 00:10:14.309 12:29:56 -- common/autotest_common.sh@819 -- # '[' -z 55587 ']' 00:10:14.309 12:29:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:14.309 12:29:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:14.309 12:29:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:14.309 12:29:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.309 12:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:14.566 [2024-10-01 12:29:56.885522] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:14.566 [2024-10-01 12:29:56.885678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55587 ] 00:10:14.566 [2024-10-01 12:29:57.057669] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:14.566 [2024-10-01 12:29:57.057731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.131 [2024-10-01 12:29:57.427559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.131 [2024-10-01 12:29:57.431869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.131 [2024-10-01 12:29:57.432170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.131 [2024-10-01 12:29:57.432184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:17.039 12:29:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.040 12:29:59 -- common/autotest_common.sh@852 -- # return 0 00:10:17.040 12:29:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:17.040 12:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.040 12:29:59 -- common/autotest_common.sh@10 -- # set +x 00:10:17.040 12:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.040 12:29:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.040 12:29:59 -- common/autotest_common.sh@640 -- # local es=0 00:10:17.040 12:29:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.040 12:29:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:17.040 12:29:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:17.040 12:29:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:17.040 12:29:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:17.040 12:29:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.040 12:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.040 12:29:59 -- common/autotest_common.sh@10 -- # set +x 00:10:17.040 [2024-10-01 12:29:59.423830] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55556 has claimed it. 00:10:17.040 request: 00:10:17.040 { 00:10:17.040 "method": "framework_enable_cpumask_locks", 00:10:17.040 "req_id": 1 00:10:17.040 } 00:10:17.040 Got JSON-RPC error response 00:10:17.040 response: 00:10:17.040 { 00:10:17.040 "code": -32603, 00:10:17.040 "message": "Failed to claim CPU core: 2" 00:10:17.040 } 00:10:17.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:17.040 12:29:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:17.040 12:29:59 -- common/autotest_common.sh@643 -- # es=1 00:10:17.040 12:29:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:17.040 12:29:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:17.040 12:29:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:17.040 12:29:59 -- event/cpu_locks.sh@158 -- # waitforlisten 55556 /var/tmp/spdk.sock 00:10:17.040 12:29:59 -- common/autotest_common.sh@819 -- # '[' -z 55556 ']' 00:10:17.040 12:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.040 12:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.040 12:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.040 12:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.040 12:29:59 -- common/autotest_common.sh@10 -- # set +x 00:10:17.298 12:29:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.298 12:29:59 -- common/autotest_common.sh@852 -- # return 0 00:10:17.298 12:29:59 -- event/cpu_locks.sh@159 -- # waitforlisten 55587 /var/tmp/spdk2.sock 00:10:17.298 12:29:59 -- common/autotest_common.sh@819 -- # '[' -z 55587 ']' 00:10:17.298 12:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.298 12:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.298 12:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:17.298 12:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.298 12:29:59 -- common/autotest_common.sh@10 -- # set +x 00:10:17.556 12:30:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.556 12:30:00 -- common/autotest_common.sh@852 -- # return 0 00:10:17.556 12:30:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:17.556 12:30:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:17.556 12:30:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:17.556 12:30:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:17.556 00:10:17.556 real 0m5.085s 00:10:17.556 user 0m2.233s 00:10:17.556 sys 0m0.270s 00:10:17.556 ************************************ 00:10:17.556 END TEST locking_overlapped_coremask_via_rpc 00:10:17.556 ************************************ 00:10:17.556 12:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.556 12:30:00 -- common/autotest_common.sh@10 -- # set +x 00:10:17.556 12:30:00 -- event/cpu_locks.sh@174 -- # cleanup 00:10:17.556 12:30:00 -- event/cpu_locks.sh@15 -- # [[ -z 55556 ]] 00:10:17.556 12:30:00 -- event/cpu_locks.sh@15 -- # killprocess 55556 00:10:17.556 12:30:00 -- common/autotest_common.sh@926 -- # '[' -z 55556 ']' 00:10:17.556 12:30:00 -- common/autotest_common.sh@930 -- # kill -0 55556 00:10:17.556 12:30:00 -- common/autotest_common.sh@931 -- # uname 00:10:17.556 12:30:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:17.556 12:30:00 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 55556 00:10:17.813 killing process with pid 55556 00:10:17.813 12:30:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:17.813 12:30:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:17.813 12:30:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55556' 00:10:17.813 12:30:00 -- common/autotest_common.sh@945 -- # kill 55556 00:10:17.813 12:30:00 -- common/autotest_common.sh@950 -- # wait 55556 00:10:19.725 12:30:02 -- event/cpu_locks.sh@16 -- # [[ -z 55587 ]] 00:10:19.725 12:30:02 -- event/cpu_locks.sh@16 -- # killprocess 55587 00:10:19.725 12:30:02 -- common/autotest_common.sh@926 -- # '[' -z 55587 ']' 00:10:19.725 12:30:02 -- common/autotest_common.sh@930 -- # kill -0 55587 00:10:19.725 12:30:02 -- common/autotest_common.sh@931 -- # uname 00:10:19.725 12:30:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:19.725 12:30:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55587 00:10:19.725 killing process with pid 55587 00:10:19.725 12:30:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:19.725 12:30:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:19.725 12:30:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55587' 00:10:19.725 12:30:02 -- common/autotest_common.sh@945 -- # kill 55587 00:10:19.725 12:30:02 -- common/autotest_common.sh@950 -- # wait 55587 00:10:22.247 12:30:04 -- event/cpu_locks.sh@18 -- # rm -f 00:10:22.247 12:30:04 -- event/cpu_locks.sh@1 -- # cleanup 00:10:22.247 12:30:04 -- event/cpu_locks.sh@15 -- # [[ -z 55556 ]] 00:10:22.247 12:30:04 -- event/cpu_locks.sh@15 -- # killprocess 55556 00:10:22.247 12:30:04 -- common/autotest_common.sh@926 -- # '[' -z 55556 ']' 00:10:22.247 12:30:04 -- common/autotest_common.sh@930 -- # kill -0 55556 00:10:22.247 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55556) - No such process 00:10:22.247 Process with pid 55556 is not found 00:10:22.247 12:30:04 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55556 is not found' 00:10:22.247 12:30:04 -- event/cpu_locks.sh@16 -- # [[ -z 55587 ]] 00:10:22.247 12:30:04 -- event/cpu_locks.sh@16 -- # killprocess 55587 00:10:22.247 12:30:04 -- common/autotest_common.sh@926 -- # '[' -z 55587 ']' 00:10:22.247 12:30:04 -- common/autotest_common.sh@930 -- # kill -0 55587 00:10:22.247 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55587) - No such process 00:10:22.247 Process with pid 55587 is not found 00:10:22.247 12:30:04 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55587 is not found' 00:10:22.247 12:30:04 -- event/cpu_locks.sh@18 -- # rm -f 00:10:22.247 00:10:22.247 real 0m51.642s 00:10:22.247 user 1m31.086s 00:10:22.247 sys 0m6.274s 00:10:22.247 12:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.247 ************************************ 00:10:22.247 12:30:04 -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 END TEST cpu_locks 00:10:22.247 ************************************ 00:10:22.247 ************************************ 00:10:22.247 END TEST event 00:10:22.247 ************************************ 00:10:22.247 00:10:22.247 real 1m24.041s 00:10:22.247 user 2m34.827s 00:10:22.247 sys 0m9.997s 00:10:22.247 12:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.247 12:30:04 -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 12:30:04 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:22.247 12:30:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:22.247 12:30:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.247 12:30:04 -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 ************************************ 00:10:22.247 START TEST thread 00:10:22.247 ************************************ 00:10:22.247 12:30:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:22.247 * Looking for test storage... 00:10:22.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:22.247 12:30:04 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:22.247 12:30:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:22.247 12:30:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.247 12:30:04 -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 ************************************ 00:10:22.247 START TEST thread_poller_perf 00:10:22.247 ************************************ 00:10:22.247 12:30:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:22.247 [2024-10-01 12:30:04.532577] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:22.247 [2024-10-01 12:30:04.532763] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55772 ] 00:10:22.247 [2024-10-01 12:30:04.701036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.505 [2024-10-01 12:30:04.921022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.505 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:23.880 ====================================== 00:10:23.880 busy:2212528220 (cyc) 00:10:23.880 total_run_count: 268000 00:10:23.880 tsc_hz: 2200000000 (cyc) 00:10:23.880 ====================================== 00:10:23.880 poller_cost: 8255 (cyc), 3752 (nsec) 00:10:23.880 00:10:23.880 real 0m1.804s 00:10:23.880 user 0m1.582s 00:10:23.880 sys 0m0.111s 00:10:23.880 12:30:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.880 12:30:06 -- common/autotest_common.sh@10 -- # set +x 00:10:23.880 ************************************ 00:10:23.880 END TEST thread_poller_perf 00:10:23.880 ************************************ 00:10:23.880 12:30:06 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:23.880 12:30:06 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:23.880 12:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.880 12:30:06 -- common/autotest_common.sh@10 -- # set +x 00:10:23.880 ************************************ 00:10:23.880 START TEST thread_poller_perf 00:10:23.880 ************************************ 00:10:23.880 12:30:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:24.140 [2024-10-01 12:30:06.412730] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:24.140 [2024-10-01 12:30:06.412899] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55815 ] 00:10:24.140 [2024-10-01 12:30:06.582331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.398 [2024-10-01 12:30:06.773876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.398 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:25.770 ====================================== 00:10:25.770 busy:2205126468 (cyc) 00:10:25.770 total_run_count: 3782000 00:10:25.770 tsc_hz: 2200000000 (cyc) 00:10:25.770 ====================================== 00:10:25.770 poller_cost: 583 (cyc), 265 (nsec) 00:10:25.770 00:10:25.770 real 0m1.784s 00:10:25.770 user 0m1.561s 00:10:25.770 sys 0m0.111s 00:10:25.770 12:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.770 ************************************ 00:10:25.770 END TEST thread_poller_perf 00:10:25.770 ************************************ 00:10:25.770 12:30:08 -- common/autotest_common.sh@10 -- # set +x 00:10:25.770 12:30:08 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:25.770 ************************************ 00:10:25.770 END TEST thread 00:10:25.770 ************************************ 00:10:25.770 00:10:25.770 real 0m3.762s 00:10:25.770 user 0m3.206s 00:10:25.770 sys 0m0.325s 00:10:25.771 12:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.771 12:30:08 -- common/autotest_common.sh@10 -- # set +x 00:10:25.771 12:30:08 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:25.771 12:30:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:25.771 12:30:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.771 12:30:08 -- common/autotest_common.sh@10 -- # set +x 00:10:25.771 ************************************ 00:10:25.771 START TEST accel 00:10:25.771 ************************************ 00:10:25.771 12:30:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:25.771 * Looking for test storage... 00:10:25.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:25.771 12:30:08 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:25.771 12:30:08 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:25.771 12:30:08 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:25.771 12:30:08 -- accel/accel.sh@59 -- # spdk_tgt_pid=55889 00:10:25.771 12:30:08 -- accel/accel.sh@60 -- # waitforlisten 55889 00:10:25.771 12:30:08 -- common/autotest_common.sh@819 -- # '[' -z 55889 ']' 00:10:25.771 12:30:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.771 12:30:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:25.771 12:30:08 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:25.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.771 12:30:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:25.771 12:30:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:25.771 12:30:08 -- accel/accel.sh@58 -- # build_accel_config 00:10:25.771 12:30:08 -- common/autotest_common.sh@10 -- # set +x 00:10:25.771 12:30:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.771 12:30:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.771 12:30:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.771 12:30:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.771 12:30:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.771 12:30:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.771 12:30:08 -- accel/accel.sh@42 -- # jq -r . 00:10:26.030 [2024-10-01 12:30:08.407900] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:26.030 [2024-10-01 12:30:08.408078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55889 ] 00:10:26.289 [2024-10-01 12:30:08.580425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.289 [2024-10-01 12:30:08.801637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.289 [2024-10-01 12:30:08.801949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.665 12:30:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:27.665 12:30:10 -- common/autotest_common.sh@852 -- # return 0 00:10:27.665 12:30:10 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:27.665 12:30:10 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:27.665 12:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:27.665 12:30:10 -- common/autotest_common.sh@10 -- # set +x 00:10:27.665 12:30:10 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:27.665 12:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # IFS== 00:10:27.924 12:30:10 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.924 12:30:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.924 12:30:10 -- accel/accel.sh@67 -- # killprocess 55889 00:10:27.924 12:30:10 -- common/autotest_common.sh@926 -- # '[' -z 55889 ']' 00:10:27.924 12:30:10 -- common/autotest_common.sh@930 -- # kill -0 55889 00:10:27.924 12:30:10 -- common/autotest_common.sh@931 -- # uname 00:10:27.924 12:30:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:27.924 12:30:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55889 00:10:27.924 12:30:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:27.924 12:30:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:27.924 killing process with pid 55889 00:10:27.924 12:30:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55889' 00:10:27.924 12:30:10 -- common/autotest_common.sh@945 -- # kill 55889 00:10:27.924 12:30:10 -- common/autotest_common.sh@950 -- # wait 55889 00:10:29.827 12:30:12 -- accel/accel.sh@68 -- # trap - ERR 00:10:29.827 12:30:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:29.827 12:30:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:29.827 12:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.827 12:30:12 -- common/autotest_common.sh@10 -- # set +x 00:10:29.827 12:30:12 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:29.827 12:30:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:29.827 12:30:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.827 12:30:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.827 12:30:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.827 12:30:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.827 12:30:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.827 12:30:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.827 12:30:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.827 12:30:12 -- accel/accel.sh@42 -- # jq -r . 
00:10:30.087 12:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.087 12:30:12 -- common/autotest_common.sh@10 -- # set +x 00:10:30.087 12:30:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:30.087 12:30:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:30.087 12:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.087 12:30:12 -- common/autotest_common.sh@10 -- # set +x 00:10:30.087 ************************************ 00:10:30.087 START TEST accel_missing_filename 00:10:30.087 ************************************ 00:10:30.087 12:30:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:30.087 12:30:12 -- common/autotest_common.sh@640 -- # local es=0 00:10:30.087 12:30:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:30.087 12:30:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:30.087 12:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:30.087 12:30:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:30.087 12:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:30.087 12:30:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:30.087 12:30:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:30.087 12:30:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.087 12:30:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.087 12:30:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.087 12:30:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.087 12:30:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.087 12:30:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.087 12:30:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.087 12:30:12 -- accel/accel.sh@42 -- # jq -r . 00:10:30.087 [2024-10-01 12:30:12.511479] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:30.087 [2024-10-01 12:30:12.511655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55978 ] 00:10:30.346 [2024-10-01 12:30:12.681400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.605 [2024-10-01 12:30:12.927995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.864 [2024-10-01 12:30:13.138807] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.123 [2024-10-01 12:30:13.603547] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:31.692 A filename is required. 
00:10:31.692 12:30:13 -- common/autotest_common.sh@643 -- # es=234 00:10:31.692 12:30:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:31.692 12:30:13 -- common/autotest_common.sh@652 -- # es=106 00:10:31.692 12:30:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:31.692 12:30:13 -- common/autotest_common.sh@660 -- # es=1 00:10:31.692 12:30:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:31.692 00:10:31.692 real 0m1.513s 00:10:31.692 user 0m1.299s 00:10:31.692 sys 0m0.156s 00:10:31.692 ************************************ 00:10:31.692 END TEST accel_missing_filename 00:10:31.692 ************************************ 00:10:31.692 12:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.692 12:30:13 -- common/autotest_common.sh@10 -- # set +x 00:10:31.692 12:30:14 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:31.692 12:30:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:31.692 12:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:31.692 12:30:14 -- common/autotest_common.sh@10 -- # set +x 00:10:31.692 ************************************ 00:10:31.692 START TEST accel_compress_verify 00:10:31.692 ************************************ 00:10:31.692 12:30:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:31.692 12:30:14 -- common/autotest_common.sh@640 -- # local es=0 00:10:31.692 12:30:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:31.692 12:30:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:31.692 12:30:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:31.692 12:30:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:31.692 12:30:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:31.692 12:30:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:31.692 12:30:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:31.692 12:30:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.692 12:30:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.692 12:30:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.692 12:30:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.692 12:30:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.692 12:30:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.692 12:30:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.692 12:30:14 -- accel/accel.sh@42 -- # jq -r . 00:10:31.692 [2024-10-01 12:30:14.064094] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:31.692 [2024-10-01 12:30:14.064233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56011 ] 00:10:31.950 [2024-10-01 12:30:14.227795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.950 [2024-10-01 12:30:14.455777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.210 [2024-10-01 12:30:14.663315] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:32.777 [2024-10-01 12:30:15.119573] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:33.065 00:10:33.065 Compression does not support the verify option, aborting. 00:10:33.065 12:30:15 -- common/autotest_common.sh@643 -- # es=161 00:10:33.065 12:30:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:33.065 12:30:15 -- common/autotest_common.sh@652 -- # es=33 00:10:33.065 ************************************ 00:10:33.065 END TEST accel_compress_verify 00:10:33.065 ************************************ 00:10:33.065 12:30:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:33.066 12:30:15 -- common/autotest_common.sh@660 -- # es=1 00:10:33.066 12:30:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:33.066 00:10:33.066 real 0m1.473s 00:10:33.066 user 0m1.266s 00:10:33.066 sys 0m0.149s 00:10:33.066 12:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.066 12:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:33.066 12:30:15 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:33.066 12:30:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:33.066 12:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.066 12:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:33.066 ************************************ 00:10:33.066 START TEST accel_wrong_workload 00:10:33.066 ************************************ 00:10:33.066 12:30:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:33.066 12:30:15 -- common/autotest_common.sh@640 -- # local es=0 00:10:33.066 12:30:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:33.066 12:30:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:33.066 12:30:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:33.066 12:30:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:33.066 12:30:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:33.066 12:30:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:33.066 12:30:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:33.066 12:30:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.066 12:30:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.066 12:30:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.066 12:30:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.066 12:30:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.066 12:30:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.066 12:30:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.066 12:30:15 -- accel/accel.sh@42 -- # jq -r . 
00:10:33.325 Unsupported workload type: foobar 00:10:33.325 [2024-10-01 12:30:15.595808] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:33.325 accel_perf options: 00:10:33.325 [-h help message] 00:10:33.325 [-q queue depth per core] 00:10:33.325 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:33.325 [-T number of threads per core 00:10:33.325 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:33.325 [-t time in seconds] 00:10:33.325 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:33.325 [ dif_verify, , dif_generate, dif_generate_copy 00:10:33.325 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:33.325 [-l for compress/decompress workloads, name of uncompressed input file 00:10:33.325 [-S for crc32c workload, use this seed value (default 0) 00:10:33.325 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:33.325 [-f for fill workload, use this BYTE value (default 255) 00:10:33.325 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:33.325 [-y verify result if this switch is on] 00:10:33.325 [-a tasks to allocate per core (default: same value as -q)] 00:10:33.325 Can be used to spread operations across a wider range of memory. 00:10:33.325 ************************************ 00:10:33.325 END TEST accel_wrong_workload 00:10:33.325 ************************************ 00:10:33.325 12:30:15 -- common/autotest_common.sh@643 -- # es=1 00:10:33.325 12:30:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:33.325 12:30:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:33.325 12:30:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:33.325 00:10:33.325 real 0m0.078s 00:10:33.325 user 0m0.087s 00:10:33.325 sys 0m0.039s 00:10:33.325 12:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.325 12:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:33.325 12:30:15 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:33.325 12:30:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:33.325 12:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.325 12:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:33.325 ************************************ 00:10:33.325 START TEST accel_negative_buffers 00:10:33.325 ************************************ 00:10:33.325 12:30:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:33.325 12:30:15 -- common/autotest_common.sh@640 -- # local es=0 00:10:33.325 12:30:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:33.325 12:30:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:33.325 12:30:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:33.325 12:30:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:33.325 12:30:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:33.325 12:30:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:33.325 12:30:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:33.325 12:30:15 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:33.325 12:30:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.325 12:30:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.325 12:30:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.325 12:30:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.325 12:30:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.325 12:30:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.325 12:30:15 -- accel/accel.sh@42 -- # jq -r . 00:10:33.325 -x option must be non-negative. 00:10:33.325 [2024-10-01 12:30:15.724025] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:33.325 accel_perf options: 00:10:33.325 [-h help message] 00:10:33.325 [-q queue depth per core] 00:10:33.325 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:33.325 [-T number of threads per core 00:10:33.325 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:33.325 [-t time in seconds] 00:10:33.325 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:33.325 [ dif_verify, , dif_generate, dif_generate_copy 00:10:33.325 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:33.325 [-l for compress/decompress workloads, name of uncompressed input file 00:10:33.325 [-S for crc32c workload, use this seed value (default 0) 00:10:33.325 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:33.325 [-f for fill workload, use this BYTE value (default 255) 00:10:33.325 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:33.325 [-y verify result if this switch is on] 00:10:33.325 [-a tasks to allocate per core (default: same value as -q)] 00:10:33.325 Can be used to spread operations across a wider range of memory. 
00:10:33.325 12:30:15 -- common/autotest_common.sh@643 -- # es=1 00:10:33.325 12:30:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:33.325 12:30:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:33.325 12:30:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:33.325 ************************************ 00:10:33.325 END TEST accel_negative_buffers 00:10:33.325 ************************************ 00:10:33.325 00:10:33.325 real 0m0.079s 00:10:33.325 user 0m0.093s 00:10:33.325 sys 0m0.036s 00:10:33.325 12:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.325 12:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:33.325 12:30:15 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:33.325 12:30:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:33.325 12:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.325 12:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:33.325 ************************************ 00:10:33.325 START TEST accel_crc32c 00:10:33.325 ************************************ 00:10:33.325 12:30:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:33.326 12:30:15 -- accel/accel.sh@16 -- # local accel_opc 00:10:33.326 12:30:15 -- accel/accel.sh@17 -- # local accel_module 00:10:33.326 12:30:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:33.326 12:30:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:33.326 12:30:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.326 12:30:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.326 12:30:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.326 12:30:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.326 12:30:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.326 12:30:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.326 12:30:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.326 12:30:15 -- accel/accel.sh@42 -- # jq -r . 00:10:33.326 [2024-10-01 12:30:15.845254] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:33.326 [2024-10-01 12:30:15.845387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56089 ] 00:10:33.584 [2024-10-01 12:30:16.007738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.843 [2024-10-01 12:30:16.240968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.379 12:30:18 -- accel/accel.sh@18 -- # out=' 00:10:36.379 SPDK Configuration: 00:10:36.379 Core mask: 0x1 00:10:36.379 00:10:36.379 Accel Perf Configuration: 00:10:36.379 Workload Type: crc32c 00:10:36.379 CRC-32C seed: 32 00:10:36.379 Transfer size: 4096 bytes 00:10:36.379 Vector count 1 00:10:36.379 Module: software 00:10:36.379 Queue depth: 32 00:10:36.379 Allocate depth: 32 00:10:36.379 # threads/core: 1 00:10:36.379 Run time: 1 seconds 00:10:36.379 Verify: Yes 00:10:36.379 00:10:36.379 Running for 1 seconds... 
00:10:36.379 00:10:36.379 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:36.379 ------------------------------------------------------------------------------------ 00:10:36.379 0,0 385856/s 1507 MiB/s 0 0 00:10:36.379 ==================================================================================== 00:10:36.379 Total 385856/s 1507 MiB/s 0 0' 00:10:36.379 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.379 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.379 12:30:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:36.379 12:30:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:36.379 12:30:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.379 12:30:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.379 12:30:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.379 12:30:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.379 12:30:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.379 12:30:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.379 12:30:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.379 12:30:18 -- accel/accel.sh@42 -- # jq -r . 00:10:36.379 [2024-10-01 12:30:18.366599] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:36.379 [2024-10-01 12:30:18.366768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56115 ] 00:10:36.379 [2024-10-01 12:30:18.539950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.379 [2024-10-01 12:30:18.731203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val=0x1 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val=crc32c 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val=32 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.639 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.639 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.639 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val=software 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@23 -- # accel_module=software 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val=32 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val=32 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val=1 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val=Yes 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:36.640 12:30:18 -- accel/accel.sh@21 -- # val= 00:10:36.640 12:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # IFS=: 00:10:36.640 12:30:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@21 -- # val= 00:10:38.548 12:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # IFS=: 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@21 -- # val= 00:10:38.548 12:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # IFS=: 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@21 -- # val= 00:10:38.548 12:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # IFS=: 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@21 -- # val= 00:10:38.548 12:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # IFS=: 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@21 -- # val= 00:10:38.548 12:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # IFS=: 00:10:38.548 12:30:20 -- 
accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@21 -- # val= 00:10:38.548 12:30:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # IFS=: 00:10:38.548 12:30:20 -- accel/accel.sh@20 -- # read -r var val 00:10:38.548 12:30:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:38.548 12:30:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:38.548 12:30:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.548 ************************************ 00:10:38.548 END TEST accel_crc32c 00:10:38.548 ************************************ 00:10:38.548 00:10:38.548 real 0m4.960s 00:10:38.548 user 0m4.435s 00:10:38.548 sys 0m0.307s 00:10:38.548 12:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.548 12:30:20 -- common/autotest_common.sh@10 -- # set +x 00:10:38.548 12:30:20 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:38.548 12:30:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:38.548 12:30:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:38.548 12:30:20 -- common/autotest_common.sh@10 -- # set +x 00:10:38.548 ************************************ 00:10:38.548 START TEST accel_crc32c_C2 00:10:38.548 ************************************ 00:10:38.548 12:30:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:38.548 12:30:20 -- accel/accel.sh@16 -- # local accel_opc 00:10:38.548 12:30:20 -- accel/accel.sh@17 -- # local accel_module 00:10:38.548 12:30:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:38.548 12:30:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.548 12:30:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:38.548 12:30:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.548 12:30:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.548 12:30:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.548 12:30:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.548 12:30:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.548 12:30:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.548 12:30:20 -- accel/accel.sh@42 -- # jq -r . 00:10:38.548 [2024-10-01 12:30:20.860388] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:38.548 [2024-10-01 12:30:20.860553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56167 ] 00:10:38.548 [2024-10-01 12:30:21.031705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.817 [2024-10-01 12:30:21.261813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.351 12:30:23 -- accel/accel.sh@18 -- # out=' 00:10:41.351 SPDK Configuration: 00:10:41.351 Core mask: 0x1 00:10:41.351 00:10:41.351 Accel Perf Configuration: 00:10:41.351 Workload Type: crc32c 00:10:41.351 CRC-32C seed: 0 00:10:41.351 Transfer size: 4096 bytes 00:10:41.351 Vector count 2 00:10:41.351 Module: software 00:10:41.351 Queue depth: 32 00:10:41.351 Allocate depth: 32 00:10:41.351 # threads/core: 1 00:10:41.351 Run time: 1 seconds 00:10:41.351 Verify: Yes 00:10:41.351 00:10:41.351 Running for 1 seconds... 
00:10:41.351 00:10:41.351 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:41.351 ------------------------------------------------------------------------------------ 00:10:41.351 0,0 297792/s 2326 MiB/s 0 0 00:10:41.351 ==================================================================================== 00:10:41.351 Total 297792/s 1163 MiB/s 0 0' 00:10:41.351 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.351 12:30:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:41.351 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.351 12:30:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:41.351 12:30:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.351 12:30:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.351 12:30:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.351 12:30:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.351 12:30:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.351 12:30:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.351 12:30:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.351 12:30:23 -- accel/accel.sh@42 -- # jq -r . 00:10:41.351 [2024-10-01 12:30:23.325743] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:41.351 [2024-10-01 12:30:23.325909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56193 ] 00:10:41.351 [2024-10-01 12:30:23.499578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.351 [2024-10-01 12:30:23.686881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=0x1 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=crc32c 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=0 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=software 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@23 -- # accel_module=software 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=32 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=32 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=1 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val=Yes 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:41.611 12:30:23 -- accel/accel.sh@21 -- # val= 00:10:41.611 12:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # IFS=: 00:10:41.611 12:30:23 -- accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@21 -- # val= 00:10:43.516 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@21 -- # val= 00:10:43.516 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@21 -- # val= 00:10:43.516 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@21 -- # val= 00:10:43.516 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@21 -- # val= 00:10:43.516 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:10:43.516 12:30:25 -- 
accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@21 -- # val= 00:10:43.516 12:30:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # IFS=: 00:10:43.516 12:30:25 -- accel/accel.sh@20 -- # read -r var val 00:10:43.516 12:30:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:43.516 12:30:25 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:43.516 12:30:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:43.516 00:10:43.516 real 0m4.921s 00:10:43.516 user 0m4.398s 00:10:43.516 sys 0m0.307s 00:10:43.516 ************************************ 00:10:43.516 END TEST accel_crc32c_C2 00:10:43.516 ************************************ 00:10:43.516 12:30:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.516 12:30:25 -- common/autotest_common.sh@10 -- # set +x 00:10:43.516 12:30:25 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:43.516 12:30:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:43.516 12:30:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.516 12:30:25 -- common/autotest_common.sh@10 -- # set +x 00:10:43.516 ************************************ 00:10:43.516 START TEST accel_copy 00:10:43.516 ************************************ 00:10:43.516 12:30:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:43.516 12:30:25 -- accel/accel.sh@16 -- # local accel_opc 00:10:43.516 12:30:25 -- accel/accel.sh@17 -- # local accel_module 00:10:43.516 12:30:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:43.516 12:30:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:43.517 12:30:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.517 12:30:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.517 12:30:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.517 12:30:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.517 12:30:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.517 12:30:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.517 12:30:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.517 12:30:25 -- accel/accel.sh@42 -- # jq -r . 00:10:43.517 [2024-10-01 12:30:25.828262] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:43.517 [2024-10-01 12:30:25.828610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56240 ] 00:10:43.517 [2024-10-01 12:30:26.002544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.778 [2024-10-01 12:30:26.238227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.330 12:30:28 -- accel/accel.sh@18 -- # out=' 00:10:46.330 SPDK Configuration: 00:10:46.330 Core mask: 0x1 00:10:46.330 00:10:46.330 Accel Perf Configuration: 00:10:46.330 Workload Type: copy 00:10:46.330 Transfer size: 4096 bytes 00:10:46.330 Vector count 1 00:10:46.330 Module: software 00:10:46.330 Queue depth: 32 00:10:46.330 Allocate depth: 32 00:10:46.330 # threads/core: 1 00:10:46.330 Run time: 1 seconds 00:10:46.330 Verify: Yes 00:10:46.330 00:10:46.331 Running for 1 seconds... 
00:10:46.331 00:10:46.331 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:46.331 ------------------------------------------------------------------------------------ 00:10:46.331 0,0 222944/s 870 MiB/s 0 0 00:10:46.331 ==================================================================================== 00:10:46.331 Total 222944/s 870 MiB/s 0 0' 00:10:46.331 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.331 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.331 12:30:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:46.331 12:30:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:46.331 12:30:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.331 12:30:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.331 12:30:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.331 12:30:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.331 12:30:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.331 12:30:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.331 12:30:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.331 12:30:28 -- accel/accel.sh@42 -- # jq -r . 00:10:46.331 [2024-10-01 12:30:28.399112] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:46.331 [2024-10-01 12:30:28.399970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56271 ] 00:10:46.331 [2024-10-01 12:30:28.573004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.331 [2024-10-01 12:30:28.804036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val= 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val= 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val=0x1 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val= 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val= 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val=copy 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.590 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.590 12:30:28 -- 
accel/accel.sh@21 -- # val= 00:10:46.590 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val=software 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val=32 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val=32 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val=1 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val=Yes 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val= 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:46.591 12:30:28 -- accel/accel.sh@21 -- # val= 00:10:46.591 12:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # IFS=: 00:10:46.591 12:30:28 -- accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@21 -- # val= 00:10:48.492 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@21 -- # val= 00:10:48.492 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@21 -- # val= 00:10:48.492 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@21 -- # val= 00:10:48.492 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@21 -- # val= 00:10:48.492 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@21 -- # val= 00:10:48.492 12:30:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.492 12:30:30 -- accel/accel.sh@20 -- # IFS=: 00:10:48.492 12:30:30 -- 
accel/accel.sh@20 -- # read -r var val 00:10:48.492 12:30:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:48.492 12:30:30 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:48.492 12:30:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.492 00:10:48.492 real 0m5.032s 00:10:48.492 user 0m4.490s 00:10:48.492 sys 0m0.320s 00:10:48.492 12:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.492 12:30:30 -- common/autotest_common.sh@10 -- # set +x 00:10:48.492 ************************************ 00:10:48.492 END TEST accel_copy 00:10:48.492 ************************************ 00:10:48.492 12:30:30 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.492 12:30:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:48.492 12:30:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.492 12:30:30 -- common/autotest_common.sh@10 -- # set +x 00:10:48.492 ************************************ 00:10:48.492 START TEST accel_fill 00:10:48.492 ************************************ 00:10:48.492 12:30:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.492 12:30:30 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.492 12:30:30 -- accel/accel.sh@17 -- # local accel_module 00:10:48.492 12:30:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.492 12:30:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.492 12:30:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.492 12:30:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.492 12:30:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.492 12:30:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.492 12:30:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.492 12:30:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.492 12:30:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.492 12:30:30 -- accel/accel.sh@42 -- # jq -r . 00:10:48.492 [2024-10-01 12:30:30.912536] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:48.492 [2024-10-01 12:30:30.912723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56318 ] 00:10:48.750 [2024-10-01 12:30:31.075158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.750 [2024-10-01 12:30:31.261937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.281 12:30:33 -- accel/accel.sh@18 -- # out=' 00:10:51.281 SPDK Configuration: 00:10:51.281 Core mask: 0x1 00:10:51.281 00:10:51.281 Accel Perf Configuration: 00:10:51.281 Workload Type: fill 00:10:51.281 Fill pattern: 0x80 00:10:51.281 Transfer size: 4096 bytes 00:10:51.281 Vector count 1 00:10:51.281 Module: software 00:10:51.281 Queue depth: 64 00:10:51.281 Allocate depth: 64 00:10:51.281 # threads/core: 1 00:10:51.281 Run time: 1 seconds 00:10:51.281 Verify: Yes 00:10:51.281 00:10:51.281 Running for 1 seconds... 
00:10:51.281 00:10:51.281 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:51.281 ------------------------------------------------------------------------------------ 00:10:51.281 0,0 368320/s 1438 MiB/s 0 0 00:10:51.281 ==================================================================================== 00:10:51.281 Total 368320/s 1438 MiB/s 0 0' 00:10:51.281 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.281 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.281 12:30:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:51.281 12:30:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.281 12:30:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:51.281 12:30:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.281 12:30:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.282 12:30:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.282 12:30:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.282 12:30:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.282 12:30:33 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.282 12:30:33 -- accel/accel.sh@42 -- # jq -r . 00:10:51.282 [2024-10-01 12:30:33.292995] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:51.282 [2024-10-01 12:30:33.293156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56344 ] 00:10:51.282 [2024-10-01 12:30:33.462949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.282 [2024-10-01 12:30:33.644747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=0x1 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=fill 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=0x80 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 
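Note on the summaries: the Bandwidth column in these tables follows directly from the Transfers column and the configured transfer size, so the figures can be sanity-checked by hand. A minimal shell check using the fill run above (the variables below are illustrative only, not part of accel.sh):

# reproduce the ~1438 MiB/s figure from the fill summary above
transfers_per_sec=368320   # "0,0" row of the fill run
transfer_bytes=4096        # "Transfer size: 4096 bytes" in the configuration dump
echo "$(( transfers_per_sec * transfer_bytes / 1024 / 1024 )) MiB/s"   # prints 1438 MiB/s

The same arithmetic matches the copy run (222944/s x 4096 B = 870 MiB/s) and the dualcast run later in this log (270528/s x 4096 B = 1056 MiB/s).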
00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=software 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@23 -- # accel_module=software 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=64 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=64 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=1 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val=Yes 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:51.541 12:30:33 -- accel/accel.sh@21 -- # val= 00:10:51.541 12:30:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # IFS=: 00:10:51.541 12:30:33 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@21 -- # val= 00:10:53.446 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@21 -- # val= 00:10:53.446 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@21 -- # val= 00:10:53.446 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@21 -- # val= 00:10:53.446 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@21 -- # val= 00:10:53.446 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # IFS=: 
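The interleaved "accel/accel.sh@NN -- # ..." lines are bash xtrace output: the test scripts run with tracing enabled and a PS4 prompt that names the script and line number, while the xtrace_disable calls from autotest_common.sh seen in this log switch the tracing off around noisy sections. A simplified sketch of the underlying bash mechanism (the actual helpers in autotest_common.sh are more involved):

# in a script: trace each command with a source@line prefix, roughly as above
PS4='-- ${BASH_SOURCE##*/}@${LINENO} -- # '
set -x      # start printing each command, as the accel.sh lines above do
: "some traced command"
set +x      # stop tracing (roughly what xtrace_disable achieves)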
00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@21 -- # val= 00:10:53.446 12:30:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # IFS=: 00:10:53.446 12:30:35 -- accel/accel.sh@20 -- # read -r var val 00:10:53.446 12:30:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:53.446 12:30:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:53.446 12:30:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.446 00:10:53.446 real 0m4.794s 00:10:53.446 user 0m4.291s 00:10:53.446 sys 0m0.289s 00:10:53.446 12:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.446 ************************************ 00:10:53.446 END TEST accel_fill 00:10:53.446 ************************************ 00:10:53.446 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 12:30:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:53.446 12:30:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:53.446 12:30:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.446 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 ************************************ 00:10:53.446 START TEST accel_copy_crc32c 00:10:53.446 ************************************ 00:10:53.446 12:30:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:53.446 12:30:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:53.446 12:30:35 -- accel/accel.sh@17 -- # local accel_module 00:10:53.446 12:30:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:53.446 12:30:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:53.446 12:30:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.446 12:30:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.446 12:30:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.446 12:30:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.446 12:30:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.446 12:30:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.446 12:30:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.446 12:30:35 -- accel/accel.sh@42 -- # jq -r . 00:10:53.446 [2024-10-01 12:30:35.761923] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:53.446 [2024-10-01 12:30:35.762118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56390 ] 00:10:53.446 [2024-10-01 12:30:35.927299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.705 [2024-10-01 12:30:36.182103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.237 12:30:38 -- accel/accel.sh@18 -- # out=' 00:10:56.237 SPDK Configuration: 00:10:56.237 Core mask: 0x1 00:10:56.237 00:10:56.237 Accel Perf Configuration: 00:10:56.237 Workload Type: copy_crc32c 00:10:56.237 CRC-32C seed: 0 00:10:56.237 Vector size: 4096 bytes 00:10:56.237 Transfer size: 4096 bytes 00:10:56.237 Vector count 1 00:10:56.237 Module: software 00:10:56.237 Queue depth: 32 00:10:56.237 Allocate depth: 32 00:10:56.237 # threads/core: 1 00:10:56.237 Run time: 1 seconds 00:10:56.237 Verify: Yes 00:10:56.237 00:10:56.237 Running for 1 seconds... 
00:10:56.237 00:10:56.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:56.237 ------------------------------------------------------------------------------------ 00:10:56.237 0,0 189088/s 738 MiB/s 0 0 00:10:56.237 ==================================================================================== 00:10:56.237 Total 189088/s 738 MiB/s 0 0' 00:10:56.237 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.237 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.237 12:30:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:56.237 12:30:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:56.237 12:30:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.237 12:30:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.237 12:30:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.237 12:30:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.237 12:30:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.237 12:30:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.237 12:30:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.237 12:30:38 -- accel/accel.sh@42 -- # jq -r . 00:10:56.238 [2024-10-01 12:30:38.336694] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:56.238 [2024-10-01 12:30:38.337456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56422 ] 00:10:56.238 [2024-10-01 12:30:38.506905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.238 [2024-10-01 12:30:38.692886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=0x1 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=0 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 
12:30:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=software 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@23 -- # accel_module=software 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=32 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=32 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=1 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val=Yes 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:56.497 12:30:38 -- accel/accel.sh@21 -- # val= 00:10:56.497 12:30:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # IFS=: 00:10:56.497 12:30:38 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@21 -- # val= 00:10:58.400 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@21 -- # val= 00:10:58.400 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@21 -- # val= 00:10:58.400 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@21 -- # val= 00:10:58.400 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # IFS=: 
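Each test in this block invokes the accel_perf example binary twice with the same workload flags (accel.sh@18 captures the summary shown above; accel.sh@15 runs it again). Reproducing one run by hand on a node with the same build would look roughly like the lines below, with the path and flags taken verbatim from the trace; the -c /dev/fd/62 argument, which feeds the generated accel config, is omitted here, and -y appears to correspond to the "Verify: Yes" line in the summaries:

# copy_crc32c workload, 1-second run, as traced above
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y

# same workload with a vector count of 2, matching the accel_copy_crc32c_C2 test
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2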
00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@21 -- # val= 00:10:58.400 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@21 -- # val= 00:10:58.400 12:30:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # IFS=: 00:10:58.400 12:30:40 -- accel/accel.sh@20 -- # read -r var val 00:10:58.400 12:30:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:58.400 12:30:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:58.400 12:30:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.400 00:10:58.400 real 0m5.103s 00:10:58.400 user 0m4.599s 00:10:58.400 sys 0m0.286s 00:10:58.400 12:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.400 ************************************ 00:10:58.400 END TEST accel_copy_crc32c 00:10:58.400 ************************************ 00:10:58.400 12:30:40 -- common/autotest_common.sh@10 -- # set +x 00:10:58.400 12:30:40 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:58.400 12:30:40 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:58.400 12:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.400 12:30:40 -- common/autotest_common.sh@10 -- # set +x 00:10:58.400 ************************************ 00:10:58.400 START TEST accel_copy_crc32c_C2 00:10:58.400 ************************************ 00:10:58.400 12:30:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:58.400 12:30:40 -- accel/accel.sh@16 -- # local accel_opc 00:10:58.400 12:30:40 -- accel/accel.sh@17 -- # local accel_module 00:10:58.400 12:30:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:58.400 12:30:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:58.400 12:30:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.400 12:30:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.400 12:30:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.400 12:30:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.400 12:30:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.400 12:30:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.400 12:30:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.400 12:30:40 -- accel/accel.sh@42 -- # jq -r . 00:10:58.400 [2024-10-01 12:30:40.909199] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:58.400 [2024-10-01 12:30:40.909403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56468 ] 00:10:58.659 [2024-10-01 12:30:41.082524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.917 [2024-10-01 12:30:41.329737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.869 12:30:43 -- accel/accel.sh@18 -- # out=' 00:11:00.869 SPDK Configuration: 00:11:00.869 Core mask: 0x1 00:11:00.869 00:11:00.869 Accel Perf Configuration: 00:11:00.869 Workload Type: copy_crc32c 00:11:00.869 CRC-32C seed: 0 00:11:00.869 Vector size: 4096 bytes 00:11:00.869 Transfer size: 8192 bytes 00:11:00.869 Vector count 2 00:11:00.869 Module: software 00:11:00.869 Queue depth: 32 00:11:00.869 Allocate depth: 32 00:11:00.869 # threads/core: 1 00:11:00.869 Run time: 1 seconds 00:11:00.869 Verify: Yes 00:11:00.869 00:11:00.869 Running for 1 seconds... 00:11:00.869 00:11:00.869 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:00.869 ------------------------------------------------------------------------------------ 00:11:00.869 0,0 126336/s 987 MiB/s 0 0 00:11:00.869 ==================================================================================== 00:11:00.869 Total 126336/s 493 MiB/s 0 0' 00:11:00.869 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:00.869 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:00.869 12:30:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:00.869 12:30:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:00.869 12:30:43 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.869 12:30:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.869 12:30:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.869 12:30:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.869 12:30:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.869 12:30:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.869 12:30:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.869 12:30:43 -- accel/accel.sh@42 -- # jq -r . 00:11:01.128 [2024-10-01 12:30:43.431046] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
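For the two -C 2 tests the per-core and Total rows report different bandwidths for the same transfer count, which is consistent with the per-core row counting both vectors of each operation and the Total row counting only the 4096-byte payload: 126336/s x 8192 B is about 987 MiB/s while 126336/s x 4096 B is about 493 MiB/s, and the earlier crc32c -C 2 run shows the same 2:1 ratio (2326 MiB/s vs 1163 MiB/s at 297792/s). A quick shell check of the two figures above (illustrative only):

# per-core vs Total bandwidth for the copy_crc32c -C 2 summary above
echo "$(( 126336 * 8192 / 1024 / 1024 )) MiB/s"   # 987 MiB/s, matches the 0,0 row
echo "$(( 126336 * 4096 / 1024 / 1024 )) MiB/s"   # 493 MiB/s, matches the Total row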
00:11:01.128 [2024-10-01 12:30:43.431207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56500 ] 00:11:01.128 [2024-10-01 12:30:43.598118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.387 [2024-10-01 12:30:43.773353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=0x1 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=0 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val='8192 bytes' 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=software 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@23 -- # accel_module=software 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=32 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=32 
00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=1 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val=Yes 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:01.647 12:30:43 -- accel/accel.sh@21 -- # val= 00:11:01.647 12:30:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # IFS=: 00:11:01.647 12:30:43 -- accel/accel.sh@20 -- # read -r var val 00:11:03.551 12:30:45 -- accel/accel.sh@21 -- # val= 00:11:03.551 12:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # IFS=: 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # read -r var val 00:11:03.552 12:30:45 -- accel/accel.sh@21 -- # val= 00:11:03.552 12:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # IFS=: 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # read -r var val 00:11:03.552 12:30:45 -- accel/accel.sh@21 -- # val= 00:11:03.552 12:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # IFS=: 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # read -r var val 00:11:03.552 12:30:45 -- accel/accel.sh@21 -- # val= 00:11:03.552 12:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # IFS=: 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # read -r var val 00:11:03.552 12:30:45 -- accel/accel.sh@21 -- # val= 00:11:03.552 12:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # IFS=: 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # read -r var val 00:11:03.552 12:30:45 -- accel/accel.sh@21 -- # val= 00:11:03.552 12:30:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # IFS=: 00:11:03.552 12:30:45 -- accel/accel.sh@20 -- # read -r var val 00:11:03.552 12:30:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:03.552 12:30:45 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:03.552 12:30:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:03.552 00:11:03.552 real 0m4.907s 00:11:03.552 user 0m4.394s 00:11:03.552 sys 0m0.300s 00:11:03.552 12:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.552 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.552 ************************************ 00:11:03.552 END TEST accel_copy_crc32c_C2 00:11:03.552 ************************************ 00:11:03.552 12:30:45 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:03.552 12:30:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
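The START TEST / END TEST banners and the real/user/sys triplet between them come from the run_test wrapper in autotest_common.sh, which the trace shows being handed a test name plus the accel_test command (for example run_test accel_dualcast accel_test -t 1 -w dualcast -y). A minimal sketch of what such a wrapper could look like, assuming it only prints banners and times the command; this is an illustration, not SPDK's actual implementation:

# hypothetical stand-in for run_test: banner, time the command, banner again
run_test_sketch() {
    local name=$1; shift
    printf '%s\n' "************ START TEST $name ************"
    time "$@"
    printf '%s\n' "************ END TEST $name ************"
}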
00:11:03.552 12:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.552 12:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.552 ************************************ 00:11:03.552 START TEST accel_dualcast 00:11:03.552 ************************************ 00:11:03.552 12:30:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:11:03.552 12:30:45 -- accel/accel.sh@16 -- # local accel_opc 00:11:03.552 12:30:45 -- accel/accel.sh@17 -- # local accel_module 00:11:03.552 12:30:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:03.552 12:30:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:03.552 12:30:45 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.552 12:30:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.552 12:30:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.552 12:30:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.552 12:30:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.552 12:30:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.552 12:30:45 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.552 12:30:45 -- accel/accel.sh@42 -- # jq -r . 00:11:03.552 [2024-10-01 12:30:45.864663] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:03.552 [2024-10-01 12:30:45.864859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56541 ] 00:11:03.552 [2024-10-01 12:30:46.051790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.810 [2024-10-01 12:30:46.268315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.341 12:30:48 -- accel/accel.sh@18 -- # out=' 00:11:06.341 SPDK Configuration: 00:11:06.341 Core mask: 0x1 00:11:06.341 00:11:06.341 Accel Perf Configuration: 00:11:06.341 Workload Type: dualcast 00:11:06.341 Transfer size: 4096 bytes 00:11:06.341 Vector count 1 00:11:06.341 Module: software 00:11:06.341 Queue depth: 32 00:11:06.341 Allocate depth: 32 00:11:06.341 # threads/core: 1 00:11:06.341 Run time: 1 seconds 00:11:06.341 Verify: Yes 00:11:06.341 00:11:06.341 Running for 1 seconds... 00:11:06.341 00:11:06.341 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:06.341 ------------------------------------------------------------------------------------ 00:11:06.341 0,0 270528/s 1056 MiB/s 0 0 00:11:06.341 ==================================================================================== 00:11:06.341 Total 270528/s 1056 MiB/s 0 0' 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:06.341 12:30:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:06.341 12:30:48 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.341 12:30:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.341 12:30:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.341 12:30:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.341 12:30:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.341 12:30:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.341 12:30:48 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.341 12:30:48 -- accel/accel.sh@42 -- # jq -r . 
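Every accel_perf invocation here is preceded by build_accel_config, whose trace is the run of [[ 0 -gt 0 ]] / [[ -n '' ]] checks ending in jq -r . above: each guard appears to add a JSON fragment to the accel_json_cfg array only when the corresponding driver option is enabled, and since none are in this configuration the array stays empty, which lines up with "Module: software" in every summary. A stripped-down illustration of that guard-and-collect pattern, not the real function from accel.sh (USE_FAKE_HW is a hypothetical variable):

# illustrative only: collect optional JSON fragments, fall back to software
accel_json_cfg=()
[[ ${USE_FAKE_HW:-0} -gt 0 ]] && accel_json_cfg+=('{"note": "hypothetical hw module"}')
if [[ ${#accel_json_cfg[@]} -eq 0 ]]; then
    echo "no accel driver configured; software module will be used"
else
    printf '%s\n' "${accel_json_cfg[@]}" | jq -r .
fi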
00:11:06.341 [2024-10-01 12:30:48.318551] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:06.341 [2024-10-01 12:30:48.319086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56578 ] 00:11:06.341 [2024-10-01 12:30:48.481881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.341 [2024-10-01 12:30:48.666969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val=0x1 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val=dualcast 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val=software 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@23 -- # accel_module=software 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val=32 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val=32 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.341 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.341 12:30:48 -- accel/accel.sh@21 -- # val=1 00:11:06.341 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.342 
12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.342 12:30:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:06.342 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.342 12:30:48 -- accel/accel.sh@21 -- # val=Yes 00:11:06.342 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.342 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.342 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:06.342 12:30:48 -- accel/accel.sh@21 -- # val= 00:11:06.342 12:30:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # IFS=: 00:11:06.342 12:30:48 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@21 -- # val= 00:11:08.245 12:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # IFS=: 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@21 -- # val= 00:11:08.245 12:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # IFS=: 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@21 -- # val= 00:11:08.245 12:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # IFS=: 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@21 -- # val= 00:11:08.245 12:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # IFS=: 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@21 -- # val= 00:11:08.245 12:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # IFS=: 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@21 -- # val= 00:11:08.245 12:30:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # IFS=: 00:11:08.245 12:30:50 -- accel/accel.sh@20 -- # read -r var val 00:11:08.245 12:30:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:08.245 12:30:50 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:08.245 12:30:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:08.245 00:11:08.245 real 0m4.854s 00:11:08.245 user 0m4.359s 00:11:08.245 sys 0m0.285s 00:11:08.245 12:30:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.245 ************************************ 00:11:08.245 END TEST accel_dualcast 00:11:08.245 ************************************ 00:11:08.245 12:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:08.245 12:30:50 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:08.245 12:30:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:08.245 12:30:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:08.245 12:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:08.245 ************************************ 00:11:08.245 START TEST accel_compare 00:11:08.245 ************************************ 00:11:08.245 12:30:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:08.245 
12:30:50 -- accel/accel.sh@16 -- # local accel_opc 00:11:08.245 12:30:50 -- accel/accel.sh@17 -- # local accel_module 00:11:08.245 12:30:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:08.245 12:30:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:08.245 12:30:50 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.245 12:30:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.245 12:30:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.245 12:30:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.245 12:30:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.245 12:30:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.245 12:30:50 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.245 12:30:50 -- accel/accel.sh@42 -- # jq -r . 00:11:08.245 [2024-10-01 12:30:50.761156] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:08.245 [2024-10-01 12:30:50.761311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56619 ] 00:11:08.503 [2024-10-01 12:30:50.931108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.763 [2024-10-01 12:30:51.113548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.667 12:30:53 -- accel/accel.sh@18 -- # out=' 00:11:10.667 SPDK Configuration: 00:11:10.667 Core mask: 0x1 00:11:10.667 00:11:10.667 Accel Perf Configuration: 00:11:10.667 Workload Type: compare 00:11:10.667 Transfer size: 4096 bytes 00:11:10.667 Vector count 1 00:11:10.667 Module: software 00:11:10.667 Queue depth: 32 00:11:10.667 Allocate depth: 32 00:11:10.667 # threads/core: 1 00:11:10.667 Run time: 1 seconds 00:11:10.667 Verify: Yes 00:11:10.667 00:11:10.667 Running for 1 seconds... 00:11:10.667 00:11:10.667 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:10.667 ------------------------------------------------------------------------------------ 00:11:10.667 0,0 360960/s 1410 MiB/s 0 0 00:11:10.667 ==================================================================================== 00:11:10.667 Total 360960/s 1410 MiB/s 0 0' 00:11:10.667 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:10.667 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:10.667 12:30:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:10.667 12:30:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.667 12:30:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:10.667 12:30:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.667 12:30:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.667 12:30:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.667 12:30:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.667 12:30:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.667 12:30:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.667 12:30:53 -- accel/accel.sh@42 -- # jq -r . 00:11:10.667 [2024-10-01 12:30:53.160287] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
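The MiB/s column in the result tables above appears to follow directly from the Transfers column and the 4096-byte transfer size listed in each configuration block. A minimal recalculation for the compare run above (illustrative only, not part of the test scripts):

    $ echo $(( 360960 * 4096 / 1024 / 1024 ))   # transfers per second x bytes per transfer, in MiB/s
    1410

which matches the 1410 MiB/s reported by the software compare module; the Total rows of the other tables in this section agree the same way, truncated to whole MiB/s.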
00:11:10.668 [2024-10-01 12:30:53.160645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56645 ] 00:11:10.926 [2024-10-01 12:30:53.330660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.185 [2024-10-01 12:30:53.512859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=0x1 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=compare 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=software 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@23 -- # accel_module=software 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=32 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=32 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=1 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val='1 seconds' 
00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.185 12:30:53 -- accel/accel.sh@21 -- # val=Yes 00:11:11.185 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.185 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.445 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.445 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.445 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.445 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.445 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:11.445 12:30:53 -- accel/accel.sh@21 -- # val= 00:11:11.445 12:30:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.445 12:30:53 -- accel/accel.sh@20 -- # IFS=: 00:11:11.445 12:30:53 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@21 -- # val= 00:11:13.349 12:30:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # IFS=: 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@21 -- # val= 00:11:13.349 12:30:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # IFS=: 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@21 -- # val= 00:11:13.349 12:30:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # IFS=: 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@21 -- # val= 00:11:13.349 12:30:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # IFS=: 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@21 -- # val= 00:11:13.349 12:30:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # IFS=: 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@21 -- # val= 00:11:13.349 12:30:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # IFS=: 00:11:13.349 12:30:55 -- accel/accel.sh@20 -- # read -r var val 00:11:13.349 12:30:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:13.349 ************************************ 00:11:13.349 END TEST accel_compare 00:11:13.349 ************************************ 00:11:13.349 12:30:55 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:13.349 12:30:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:13.349 00:11:13.349 real 0m4.790s 00:11:13.349 user 0m4.293s 00:11:13.349 sys 0m0.288s 00:11:13.349 12:30:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.349 12:30:55 -- common/autotest_common.sh@10 -- # set +x 00:11:13.349 12:30:55 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:13.349 12:30:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:13.349 12:30:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.349 12:30:55 -- common/autotest_common.sh@10 -- # set +x 00:11:13.349 ************************************ 00:11:13.349 START TEST accel_xor 00:11:13.349 ************************************ 00:11:13.349 12:30:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:13.349 12:30:55 -- accel/accel.sh@16 -- # local accel_opc 00:11:13.349 12:30:55 -- accel/accel.sh@17 -- # local accel_module 00:11:13.349 
12:30:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:13.349 12:30:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:13.349 12:30:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.349 12:30:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.349 12:30:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.349 12:30:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.349 12:30:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.349 12:30:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.349 12:30:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.349 12:30:55 -- accel/accel.sh@42 -- # jq -r . 00:11:13.349 [2024-10-01 12:30:55.590009] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:13.349 [2024-10-01 12:30:55.590153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56697 ] 00:11:13.349 [2024-10-01 12:30:55.759165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.608 [2024-10-01 12:30:55.969905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.512 12:30:57 -- accel/accel.sh@18 -- # out=' 00:11:15.512 SPDK Configuration: 00:11:15.512 Core mask: 0x1 00:11:15.512 00:11:15.512 Accel Perf Configuration: 00:11:15.512 Workload Type: xor 00:11:15.512 Source buffers: 2 00:11:15.512 Transfer size: 4096 bytes 00:11:15.512 Vector count 1 00:11:15.512 Module: software 00:11:15.512 Queue depth: 32 00:11:15.512 Allocate depth: 32 00:11:15.512 # threads/core: 1 00:11:15.512 Run time: 1 seconds 00:11:15.512 Verify: Yes 00:11:15.512 00:11:15.512 Running for 1 seconds... 00:11:15.512 00:11:15.512 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:15.512 ------------------------------------------------------------------------------------ 00:11:15.512 0,0 198464/s 775 MiB/s 0 0 00:11:15.512 ==================================================================================== 00:11:15.512 Total 198464/s 775 MiB/s 0 0' 00:11:15.512 12:30:57 -- accel/accel.sh@20 -- # IFS=: 00:11:15.512 12:30:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:15.512 12:30:57 -- accel/accel.sh@20 -- # read -r var val 00:11:15.512 12:30:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:15.512 12:30:57 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.512 12:30:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.512 12:30:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.512 12:30:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.512 12:30:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.512 12:30:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.512 12:30:57 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.512 12:30:57 -- accel/accel.sh@42 -- # jq -r . 00:11:15.512 [2024-10-01 12:30:58.005672] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:15.512 [2024-10-01 12:30:58.005828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56723 ] 00:11:15.810 [2024-10-01 12:30:58.173349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.070 [2024-10-01 12:30:58.355265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=0x1 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=xor 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=2 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=software 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@23 -- # accel_module=software 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=32 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=32 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=1 00:11:16.070 12:30:58 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val=Yes 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:16.070 12:30:58 -- accel/accel.sh@21 -- # val= 00:11:16.070 12:30:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # IFS=: 00:11:16.070 12:30:58 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@21 -- # val= 00:11:17.975 12:31:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # IFS=: 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@21 -- # val= 00:11:17.975 12:31:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # IFS=: 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@21 -- # val= 00:11:17.975 12:31:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # IFS=: 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@21 -- # val= 00:11:17.975 12:31:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # IFS=: 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@21 -- # val= 00:11:17.975 12:31:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # IFS=: 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@21 -- # val= 00:11:17.975 12:31:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # IFS=: 00:11:17.975 12:31:00 -- accel/accel.sh@20 -- # read -r var val 00:11:17.975 12:31:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:17.975 12:31:00 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:17.975 ************************************ 00:11:17.975 END TEST accel_xor 00:11:17.975 ************************************ 00:11:17.975 12:31:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:17.975 00:11:17.975 real 0m4.795s 00:11:17.975 user 0m4.283s 00:11:17.975 sys 0m0.301s 00:11:17.975 12:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.975 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:17.975 12:31:00 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:17.975 12:31:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:17.975 12:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.975 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:17.975 ************************************ 00:11:17.975 START TEST accel_xor 00:11:17.975 ************************************ 00:11:17.975 
12:31:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:17.975 12:31:00 -- accel/accel.sh@16 -- # local accel_opc 00:11:17.975 12:31:00 -- accel/accel.sh@17 -- # local accel_module 00:11:17.975 12:31:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:17.975 12:31:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:17.975 12:31:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.975 12:31:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.975 12:31:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.975 12:31:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.975 12:31:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.975 12:31:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.975 12:31:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.975 12:31:00 -- accel/accel.sh@42 -- # jq -r . 00:11:17.975 [2024-10-01 12:31:00.439233] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:17.975 [2024-10-01 12:31:00.439400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56770 ] 00:11:18.234 [2024-10-01 12:31:00.610716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.493 [2024-10-01 12:31:00.790610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.396 12:31:02 -- accel/accel.sh@18 -- # out=' 00:11:20.396 SPDK Configuration: 00:11:20.396 Core mask: 0x1 00:11:20.396 00:11:20.396 Accel Perf Configuration: 00:11:20.396 Workload Type: xor 00:11:20.396 Source buffers: 3 00:11:20.396 Transfer size: 4096 bytes 00:11:20.396 Vector count 1 00:11:20.396 Module: software 00:11:20.396 Queue depth: 32 00:11:20.396 Allocate depth: 32 00:11:20.396 # threads/core: 1 00:11:20.396 Run time: 1 seconds 00:11:20.396 Verify: Yes 00:11:20.396 00:11:20.396 Running for 1 seconds... 00:11:20.396 00:11:20.396 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:20.396 ------------------------------------------------------------------------------------ 00:11:20.396 0,0 194592/s 760 MiB/s 0 0 00:11:20.396 ==================================================================================== 00:11:20.396 Total 194592/s 760 MiB/s 0 0' 00:11:20.396 12:31:02 -- accel/accel.sh@20 -- # IFS=: 00:11:20.396 12:31:02 -- accel/accel.sh@20 -- # read -r var val 00:11:20.396 12:31:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:20.396 12:31:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:20.396 12:31:02 -- accel/accel.sh@12 -- # build_accel_config 00:11:20.396 12:31:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.396 12:31:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.396 12:31:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.396 12:31:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.396 12:31:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.396 12:31:02 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.396 12:31:02 -- accel/accel.sh@42 -- # jq -r . 00:11:20.396 [2024-10-01 12:31:02.821475] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:20.396 [2024-10-01 12:31:02.821697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56796 ] 00:11:20.655 [2024-10-01 12:31:03.002057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.913 [2024-10-01 12:31:03.182027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=0x1 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=xor 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=3 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=software 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@23 -- # accel_module=software 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=32 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=32 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=1 00:11:20.913 12:31:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val=Yes 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:20.913 12:31:03 -- accel/accel.sh@21 -- # val= 00:11:20.913 12:31:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # IFS=: 00:11:20.913 12:31:03 -- accel/accel.sh@20 -- # read -r var val 00:11:22.812 12:31:05 -- accel/accel.sh@21 -- # val= 00:11:22.812 12:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.812 12:31:05 -- accel/accel.sh@20 -- # IFS=: 00:11:22.812 12:31:05 -- accel/accel.sh@20 -- # read -r var val 00:11:22.812 12:31:05 -- accel/accel.sh@21 -- # val= 00:11:22.812 12:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.812 12:31:05 -- accel/accel.sh@20 -- # IFS=: 00:11:22.812 12:31:05 -- accel/accel.sh@20 -- # read -r var val 00:11:22.812 12:31:05 -- accel/accel.sh@21 -- # val= 00:11:22.812 12:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.812 12:31:05 -- accel/accel.sh@20 -- # IFS=: 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # read -r var val 00:11:22.813 12:31:05 -- accel/accel.sh@21 -- # val= 00:11:22.813 12:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # IFS=: 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # read -r var val 00:11:22.813 12:31:05 -- accel/accel.sh@21 -- # val= 00:11:22.813 12:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # IFS=: 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # read -r var val 00:11:22.813 12:31:05 -- accel/accel.sh@21 -- # val= 00:11:22.813 12:31:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # IFS=: 00:11:22.813 12:31:05 -- accel/accel.sh@20 -- # read -r var val 00:11:22.813 12:31:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.813 12:31:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:22.813 12:31:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.813 00:11:22.813 real 0m4.787s 00:11:22.813 user 0m4.263s 00:11:22.813 sys 0m0.316s 00:11:22.813 12:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.813 12:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:22.813 ************************************ 00:11:22.813 END TEST accel_xor 00:11:22.813 ************************************ 00:11:22.813 12:31:05 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:22.813 12:31:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:22.813 12:31:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.813 12:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:22.813 ************************************ 00:11:22.813 START TEST accel_dif_verify 00:11:22.813 ************************************ 
00:11:22.813 12:31:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:22.813 12:31:05 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.813 12:31:05 -- accel/accel.sh@17 -- # local accel_module 00:11:22.813 12:31:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:22.813 12:31:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:22.813 12:31:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.813 12:31:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.813 12:31:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.813 12:31:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.813 12:31:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.813 12:31:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.813 12:31:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.813 12:31:05 -- accel/accel.sh@42 -- # jq -r . 00:11:22.813 [2024-10-01 12:31:05.276655] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:22.813 [2024-10-01 12:31:05.276801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56842 ] 00:11:23.071 [2024-10-01 12:31:05.445543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.329 [2024-10-01 12:31:05.668362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.227 12:31:07 -- accel/accel.sh@18 -- # out=' 00:11:25.227 SPDK Configuration: 00:11:25.227 Core mask: 0x1 00:11:25.227 00:11:25.227 Accel Perf Configuration: 00:11:25.227 Workload Type: dif_verify 00:11:25.227 Vector size: 4096 bytes 00:11:25.227 Transfer size: 4096 bytes 00:11:25.227 Block size: 512 bytes 00:11:25.227 Metadata size: 8 bytes 00:11:25.227 Vector count 1 00:11:25.227 Module: software 00:11:25.227 Queue depth: 32 00:11:25.227 Allocate depth: 32 00:11:25.227 # threads/core: 1 00:11:25.227 Run time: 1 seconds 00:11:25.227 Verify: No 00:11:25.227 00:11:25.227 Running for 1 seconds... 00:11:25.227 00:11:25.227 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:25.227 ------------------------------------------------------------------------------------ 00:11:25.227 0,0 87168/s 345 MiB/s 0 0 00:11:25.227 ==================================================================================== 00:11:25.227 Total 87168/s 340 MiB/s 0 0' 00:11:25.227 12:31:07 -- accel/accel.sh@20 -- # IFS=: 00:11:25.227 12:31:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:25.227 12:31:07 -- accel/accel.sh@20 -- # read -r var val 00:11:25.227 12:31:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:25.227 12:31:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.227 12:31:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.227 12:31:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.227 12:31:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.227 12:31:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.227 12:31:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.227 12:31:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.228 12:31:07 -- accel/accel.sh@42 -- # jq -r . 00:11:25.228 [2024-10-01 12:31:07.741897] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
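In the dif_verify configuration above, the combination of 4096-byte vectors, a 512-byte block size and 8 bytes of metadata corresponds to the usual T10 DIF layout of one 8-byte protection field per 512-byte block. On that assumption, each 4 KiB buffer carries 64 bytes of protection information alongside the data:

    $ echo $(( (4096 / 512) * 8 ))   # blocks per buffer x protection bytes per block
    64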
00:11:25.228 [2024-10-01 12:31:07.742045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56874 ] 00:11:25.485 [2024-10-01 12:31:07.913884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.743 [2024-10-01 12:31:08.141508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val=0x1 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val=dif_verify 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val=software 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@23 -- # accel_module=software 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 
-- # val=32 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val=32 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val=1 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val=No 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:26.001 12:31:08 -- accel/accel.sh@21 -- # val= 00:11:26.001 12:31:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # IFS=: 00:11:26.001 12:31:08 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@21 -- # val= 00:11:27.900 12:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # IFS=: 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@21 -- # val= 00:11:27.900 12:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # IFS=: 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@21 -- # val= 00:11:27.900 12:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # IFS=: 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@21 -- # val= 00:11:27.900 12:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # IFS=: 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@21 -- # val= 00:11:27.900 12:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # IFS=: 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@21 -- # val= 00:11:27.900 12:31:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # IFS=: 00:11:27.900 12:31:10 -- accel/accel.sh@20 -- # read -r var val 00:11:27.900 12:31:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:27.900 12:31:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:27.900 12:31:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:27.900 00:11:27.900 real 0m4.943s 00:11:27.900 user 0m4.414s 00:11:27.900 sys 0m0.319s 00:11:27.900 12:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.900 12:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:27.900 ************************************ 00:11:27.900 END TEST 
accel_dif_verify 00:11:27.900 ************************************ 00:11:27.900 12:31:10 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:27.900 12:31:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:27.900 12:31:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.900 12:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:27.900 ************************************ 00:11:27.900 START TEST accel_dif_generate 00:11:27.900 ************************************ 00:11:27.900 12:31:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:27.900 12:31:10 -- accel/accel.sh@16 -- # local accel_opc 00:11:27.900 12:31:10 -- accel/accel.sh@17 -- # local accel_module 00:11:27.900 12:31:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:27.900 12:31:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.900 12:31:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:27.900 12:31:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.900 12:31:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.900 12:31:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.900 12:31:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.900 12:31:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.900 12:31:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.900 12:31:10 -- accel/accel.sh@42 -- # jq -r . 00:11:27.900 [2024-10-01 12:31:10.265554] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:27.900 [2024-10-01 12:31:10.265738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56915 ] 00:11:28.159 [2024-10-01 12:31:10.433924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.159 [2024-10-01 12:31:10.659721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.691 12:31:12 -- accel/accel.sh@18 -- # out=' 00:11:30.691 SPDK Configuration: 00:11:30.691 Core mask: 0x1 00:11:30.691 00:11:30.691 Accel Perf Configuration: 00:11:30.691 Workload Type: dif_generate 00:11:30.691 Vector size: 4096 bytes 00:11:30.691 Transfer size: 4096 bytes 00:11:30.691 Block size: 512 bytes 00:11:30.691 Metadata size: 8 bytes 00:11:30.691 Vector count 1 00:11:30.691 Module: software 00:11:30.691 Queue depth: 32 00:11:30.691 Allocate depth: 32 00:11:30.691 # threads/core: 1 00:11:30.691 Run time: 1 seconds 00:11:30.691 Verify: No 00:11:30.691 00:11:30.691 Running for 1 seconds... 
00:11:30.691 00:11:30.691 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:30.691 ------------------------------------------------------------------------------------ 00:11:30.691 0,0 105728/s 419 MiB/s 0 0 00:11:30.691 ==================================================================================== 00:11:30.691 Total 105728/s 413 MiB/s 0 0' 00:11:30.691 12:31:12 -- accel/accel.sh@20 -- # IFS=: 00:11:30.691 12:31:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:30.691 12:31:12 -- accel/accel.sh@20 -- # read -r var val 00:11:30.691 12:31:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:30.691 12:31:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:30.691 12:31:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:30.691 12:31:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.691 12:31:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.691 12:31:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:30.691 12:31:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:30.691 12:31:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:30.691 12:31:12 -- accel/accel.sh@42 -- # jq -r . 00:11:30.691 [2024-10-01 12:31:12.734930] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:30.691 [2024-10-01 12:31:12.735073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56952 ] 00:11:30.691 [2024-10-01 12:31:12.901163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.691 [2024-10-01 12:31:13.083085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=0x1 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=dif_generate 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 
00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=software 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@23 -- # accel_module=software 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=32 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=32 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=1 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val=No 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:30.951 12:31:13 -- accel/accel.sh@21 -- # val= 00:11:30.951 12:31:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # IFS=: 00:11:30.951 12:31:13 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:14 -- accel/accel.sh@21 -- # val= 00:11:32.854 12:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.854 12:31:14 -- accel/accel.sh@20 -- # IFS=: 00:11:32.854 12:31:14 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:14 -- accel/accel.sh@21 -- # val= 00:11:32.854 12:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # IFS=: 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:15 -- accel/accel.sh@21 -- # val= 00:11:32.854 12:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.854 12:31:15 -- 
accel/accel.sh@20 -- # IFS=: 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:15 -- accel/accel.sh@21 -- # val= 00:11:32.854 12:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # IFS=: 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:15 -- accel/accel.sh@21 -- # val= 00:11:32.854 12:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # IFS=: 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:15 -- accel/accel.sh@21 -- # val= 00:11:32.854 12:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # IFS=: 00:11:32.854 12:31:15 -- accel/accel.sh@20 -- # read -r var val 00:11:32.854 12:31:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:32.854 12:31:15 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:32.854 12:31:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:32.854 00:11:32.854 real 0m4.801s 00:11:32.854 user 0m4.295s 00:11:32.854 sys 0m0.299s 00:11:32.854 12:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.854 12:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:32.854 ************************************ 00:11:32.854 END TEST accel_dif_generate 00:11:32.854 ************************************ 00:11:32.854 12:31:15 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:32.854 12:31:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:32.855 12:31:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.855 12:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 ************************************ 00:11:32.855 START TEST accel_dif_generate_copy 00:11:32.855 ************************************ 00:11:32.855 12:31:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:32.855 12:31:15 -- accel/accel.sh@16 -- # local accel_opc 00:11:32.855 12:31:15 -- accel/accel.sh@17 -- # local accel_module 00:11:32.855 12:31:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:32.855 12:31:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:32.855 12:31:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:32.855 12:31:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:32.855 12:31:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.855 12:31:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.855 12:31:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:32.855 12:31:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:32.855 12:31:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:32.855 12:31:15 -- accel/accel.sh@42 -- # jq -r . 00:11:32.855 [2024-10-01 12:31:15.119500] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
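For orientation, the command captured a few entries above, build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy, is the same binary every test in this stretch drives; -t is the run time in seconds (it matches the 'Run time: 1 seconds' line in each config dump) and -w selects the workload. A minimal hand-run sketch, assuming the job's vagrant layout and dropping the -c option (the harness feeds an accel JSON config over /dev/fd/62, but these runs configure no modules, so omitting it should exercise the same software path):

  # Minimal reproduction sketch; the path assumes the vagrant layout used by this job.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy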
00:11:32.855 [2024-10-01 12:31:15.119706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56993 ] 00:11:32.855 [2024-10-01 12:31:15.288989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.113 [2024-10-01 12:31:15.455565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.011 12:31:17 -- accel/accel.sh@18 -- # out=' 00:11:35.011 SPDK Configuration: 00:11:35.011 Core mask: 0x1 00:11:35.011 00:11:35.011 Accel Perf Configuration: 00:11:35.011 Workload Type: dif_generate_copy 00:11:35.011 Vector size: 4096 bytes 00:11:35.011 Transfer size: 4096 bytes 00:11:35.011 Vector count 1 00:11:35.011 Module: software 00:11:35.011 Queue depth: 32 00:11:35.011 Allocate depth: 32 00:11:35.011 # threads/core: 1 00:11:35.011 Run time: 1 seconds 00:11:35.011 Verify: No 00:11:35.011 00:11:35.011 Running for 1 seconds... 00:11:35.011 00:11:35.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:35.011 ------------------------------------------------------------------------------------ 00:11:35.011 0,0 78560/s 311 MiB/s 0 0 00:11:35.011 ==================================================================================== 00:11:35.011 Total 78560/s 306 MiB/s 0 0' 00:11:35.011 12:31:17 -- accel/accel.sh@20 -- # IFS=: 00:11:35.012 12:31:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:35.012 12:31:17 -- accel/accel.sh@20 -- # read -r var val 00:11:35.012 12:31:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:35.012 12:31:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:35.012 12:31:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:35.012 12:31:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:35.012 12:31:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:35.012 12:31:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:35.012 12:31:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:35.012 12:31:17 -- accel/accel.sh@41 -- # local IFS=, 00:11:35.012 12:31:17 -- accel/accel.sh@42 -- # jq -r . 00:11:35.012 [2024-10-01 12:31:17.493539] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
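The dif_generate_copy summary above reports 78560 transfers/s at a 4096-byte transfer size, and the Total bandwidth column is just the product of the two (the per-core row differs by a few MiB/s, presumably because it is computed over a slightly different elapsed-time sample). A quick check in shell arithmetic:

  # 78560 transfers/s at 4096 bytes per transfer, in MiB/s
  echo $(( 78560 * 4096 / 1024 / 1024 ))   # 306, matching the Total row above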
00:11:35.012 [2024-10-01 12:31:17.493704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57019 ] 00:11:35.269 [2024-10-01 12:31:17.660292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.526 [2024-10-01 12:31:17.840760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.526 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.526 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.526 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.526 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.526 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.526 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.526 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.526 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.526 12:31:18 -- accel/accel.sh@21 -- # val=0x1 00:11:35.526 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val=software 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@23 -- # accel_module=software 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val=32 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val=32 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 
-- # val=1 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val=No 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:35.527 12:31:18 -- accel/accel.sh@21 -- # val= 00:11:35.527 12:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # IFS=: 00:11:35.527 12:31:18 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@21 -- # val= 00:11:37.425 12:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # IFS=: 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@21 -- # val= 00:11:37.425 12:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # IFS=: 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@21 -- # val= 00:11:37.425 12:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # IFS=: 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@21 -- # val= 00:11:37.425 12:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # IFS=: 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@21 -- # val= 00:11:37.425 12:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # IFS=: 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@21 -- # val= 00:11:37.425 12:31:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # IFS=: 00:11:37.425 12:31:19 -- accel/accel.sh@20 -- # read -r var val 00:11:37.425 12:31:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:37.425 12:31:19 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:37.425 12:31:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:37.425 00:11:37.425 real 0m4.755s 00:11:37.425 user 0m4.257s 00:11:37.425 sys 0m0.291s 00:11:37.425 12:31:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.425 12:31:19 -- common/autotest_common.sh@10 -- # set +x 00:11:37.425 ************************************ 00:11:37.425 END TEST accel_dif_generate_copy 00:11:37.425 ************************************ 00:11:37.425 12:31:19 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:37.425 12:31:19 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.425 12:31:19 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:37.425 12:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.425 12:31:19 -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.425 ************************************ 00:11:37.425 START TEST accel_comp 00:11:37.425 ************************************ 00:11:37.425 12:31:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.425 12:31:19 -- accel/accel.sh@16 -- # local accel_opc 00:11:37.425 12:31:19 -- accel/accel.sh@17 -- # local accel_module 00:11:37.425 12:31:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.425 12:31:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.425 12:31:19 -- accel/accel.sh@12 -- # build_accel_config 00:11:37.425 12:31:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:37.425 12:31:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.425 12:31:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.425 12:31:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:37.425 12:31:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:37.425 12:31:19 -- accel/accel.sh@41 -- # local IFS=, 00:11:37.425 12:31:19 -- accel/accel.sh@42 -- # jq -r . 00:11:37.425 [2024-10-01 12:31:19.912275] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:37.425 [2024-10-01 12:31:19.912410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57071 ] 00:11:37.684 [2024-10-01 12:31:20.077715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.943 [2024-10-01 12:31:20.301097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.843 12:31:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:39.843 00:11:39.843 SPDK Configuration: 00:11:39.843 Core mask: 0x1 00:11:39.843 00:11:39.843 Accel Perf Configuration: 00:11:39.843 Workload Type: compress 00:11:39.843 Transfer size: 4096 bytes 00:11:39.843 Vector count 1 00:11:39.843 Module: software 00:11:39.843 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.843 Queue depth: 32 00:11:39.843 Allocate depth: 32 00:11:39.843 # threads/core: 1 00:11:39.843 Run time: 1 seconds 00:11:39.843 Verify: No 00:11:39.843 00:11:39.843 Running for 1 seconds... 
00:11:39.843 00:11:39.843 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:39.843 ------------------------------------------------------------------------------------ 00:11:39.843 0,0 44416/s 185 MiB/s 0 0 00:11:39.843 ==================================================================================== 00:11:39.843 Total 44416/s 173 MiB/s 0 0' 00:11:39.843 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:39.843 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:39.843 12:31:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.843 12:31:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:39.843 12:31:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.843 12:31:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.843 12:31:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.843 12:31:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.843 12:31:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.843 12:31:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.843 12:31:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.843 12:31:22 -- accel/accel.sh@42 -- # jq -r . 00:11:39.843 [2024-10-01 12:31:22.366468] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:39.843 [2024-10-01 12:31:22.366642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57097 ] 00:11:40.101 [2024-10-01 12:31:22.538508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.360 [2024-10-01 12:31:22.739703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val=0x1 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val=compress 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 
00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.619 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.619 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.619 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val=software 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@23 -- # accel_module=software 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val=32 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val=32 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val=1 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val=No 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:40.620 12:31:22 -- accel/accel.sh@21 -- # val= 00:11:40.620 12:31:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # IFS=: 00:11:40.620 12:31:22 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@21 -- # val= 00:11:42.522 12:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # IFS=: 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@21 -- # val= 00:11:42.522 12:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # IFS=: 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@21 -- # val= 00:11:42.522 12:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # IFS=: 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@21 -- # val= 
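Every configuration dump in this log reports 'Module: software'. That lines up with the accel_json_cfg=() and [[ 0 -gt 0 ]] / [[ -n '' ]] trace entries above: nothing populates the accel JSON config in these runs, so accel_perf falls back to the software implementation of each opcode. A generic sketch of that kind of guard (names illustrative, not SPDK's exact source):

  # With an empty config array no JSON is written, so the software module is used.
  accel_json_cfg=()
  if [[ ${#accel_json_cfg[@]} -gt 0 ]]; then
      printf '%s\n' "${accel_json_cfg[@]}"   # would be fed to accel_perf via -c
  else
      echo "no accel modules configured; using the software module"
  fi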
00:11:42.522 12:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # IFS=: 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@21 -- # val= 00:11:42.522 12:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # IFS=: 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@21 -- # val= 00:11:42.522 12:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # IFS=: 00:11:42.522 12:31:24 -- accel/accel.sh@20 -- # read -r var val 00:11:42.522 12:31:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:42.522 12:31:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:42.522 12:31:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:42.522 00:11:42.522 real 0m4.902s 00:11:42.522 user 0m4.382s 00:11:42.522 sys 0m0.311s 00:11:42.522 12:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.522 12:31:24 -- common/autotest_common.sh@10 -- # set +x 00:11:42.522 ************************************ 00:11:42.522 END TEST accel_comp 00:11:42.522 ************************************ 00:11:42.522 12:31:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.522 12:31:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:42.522 12:31:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.522 12:31:24 -- common/autotest_common.sh@10 -- # set +x 00:11:42.522 ************************************ 00:11:42.522 START TEST accel_decomp 00:11:42.522 ************************************ 00:11:42.522 12:31:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.522 12:31:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:42.522 12:31:24 -- accel/accel.sh@17 -- # local accel_module 00:11:42.522 12:31:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.522 12:31:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.522 12:31:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:42.522 12:31:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:42.522 12:31:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.522 12:31:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.522 12:31:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:42.522 12:31:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:42.522 12:31:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:42.522 12:31:24 -- accel/accel.sh@42 -- # jq -r . 00:11:42.522 [2024-10-01 12:31:24.860022] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:42.522 [2024-10-01 12:31:24.860175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57144 ] 00:11:42.522 [2024-10-01 12:31:25.017081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.791 [2024-10-01 12:31:25.201367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.707 12:31:27 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:44.707 00:11:44.707 SPDK Configuration: 00:11:44.707 Core mask: 0x1 00:11:44.707 00:11:44.707 Accel Perf Configuration: 00:11:44.707 Workload Type: decompress 00:11:44.707 Transfer size: 4096 bytes 00:11:44.707 Vector count 1 00:11:44.707 Module: software 00:11:44.707 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:44.707 Queue depth: 32 00:11:44.707 Allocate depth: 32 00:11:44.707 # threads/core: 1 00:11:44.707 Run time: 1 seconds 00:11:44.707 Verify: Yes 00:11:44.707 00:11:44.707 Running for 1 seconds... 00:11:44.707 00:11:44.707 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:44.707 ------------------------------------------------------------------------------------ 00:11:44.707 0,0 57120/s 105 MiB/s 0 0 00:11:44.707 ==================================================================================== 00:11:44.707 Total 57120/s 223 MiB/s 0 0' 00:11:44.707 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:44.707 12:31:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.707 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:44.707 12:31:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:44.707 12:31:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:44.707 12:31:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:44.707 12:31:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:44.707 12:31:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:44.707 12:31:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:44.707 12:31:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:44.707 12:31:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:44.707 12:31:27 -- accel/accel.sh@42 -- # jq -r . 00:11:44.707 [2024-10-01 12:31:27.215234] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:44.707 [2024-10-01 12:31:27.215394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57170 ] 00:11:44.966 [2024-10-01 12:31:27.387803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.225 [2024-10-01 12:31:27.615383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=0x1 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=decompress 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=software 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@23 -- # accel_module=software 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=32 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- 
accel/accel.sh@21 -- # val=32 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=1 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val=Yes 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:45.484 12:31:27 -- accel/accel.sh@21 -- # val= 00:11:45.484 12:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # IFS=: 00:11:45.484 12:31:27 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@21 -- # val= 00:11:47.387 12:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # IFS=: 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@21 -- # val= 00:11:47.387 12:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # IFS=: 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@21 -- # val= 00:11:47.387 12:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # IFS=: 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@21 -- # val= 00:11:47.387 12:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # IFS=: 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@21 -- # val= 00:11:47.387 12:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # IFS=: 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@21 -- # val= 00:11:47.387 12:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # IFS=: 00:11:47.387 12:31:29 -- accel/accel.sh@20 -- # read -r var val 00:11:47.387 12:31:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:47.387 12:31:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:47.387 12:31:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:47.387 00:11:47.387 real 0m4.801s 00:11:47.387 user 0m4.295s 00:11:47.387 sys 0m0.293s 00:11:47.387 12:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.387 ************************************ 00:11:47.387 12:31:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.387 END TEST accel_decomp 00:11:47.387 ************************************ 00:11:47.387 12:31:29 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
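The run_test line just above adds three flags that the dif tests did not use: -l names the input file (it surfaces as 'File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib' in the config dumps), -y enables verification ('Verify: Yes' for the decompress runs versus 'Verify: No' for the compress and dif_generate_copy runs above), and -o overrides the transfer size, where -o 0 evidently means the whole input chunk, since the dump below reports 111250 bytes instead of 4096. The Total row of the summary that follows (4256/s, 451 MiB/s) is again consistent with that transfer size:

  # Hedged sketch of the two decompress invocation shapes seen here; SPDK_DIR is
  # an illustrative shorthand for /home/vagrant/spdk_repo/spdk, not a job variable.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y        # 4096-byte transfers
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0   # full 111250-byte transfers

  # 4256 transfers/s at 111250 bytes per transfer, in MiB/s
  echo $(( 4256 * 111250 / 1024 / 1024 ))   # 451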
00:11:47.387 12:31:29 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:47.387 12:31:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.387 12:31:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.387 ************************************ 00:11:47.387 START TEST accel_decmop_full 00:11:47.387 ************************************ 00:11:47.387 12:31:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:47.387 12:31:29 -- accel/accel.sh@16 -- # local accel_opc 00:11:47.387 12:31:29 -- accel/accel.sh@17 -- # local accel_module 00:11:47.387 12:31:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:47.387 12:31:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:47.387 12:31:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:47.387 12:31:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:47.387 12:31:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:47.387 12:31:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:47.387 12:31:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:47.387 12:31:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:47.387 12:31:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:47.387 12:31:29 -- accel/accel.sh@42 -- # jq -r . 00:11:47.387 [2024-10-01 12:31:29.738280] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:47.388 [2024-10-01 12:31:29.738459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57216 ] 00:11:47.388 [2024-10-01 12:31:29.910053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.646 [2024-10-01 12:31:30.136561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.179 12:31:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:50.179 00:11:50.179 SPDK Configuration: 00:11:50.179 Core mask: 0x1 00:11:50.179 00:11:50.179 Accel Perf Configuration: 00:11:50.179 Workload Type: decompress 00:11:50.179 Transfer size: 111250 bytes 00:11:50.179 Vector count 1 00:11:50.179 Module: software 00:11:50.179 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.179 Queue depth: 32 00:11:50.179 Allocate depth: 32 00:11:50.179 # threads/core: 1 00:11:50.179 Run time: 1 seconds 00:11:50.179 Verify: Yes 00:11:50.179 00:11:50.179 Running for 1 seconds... 
00:11:50.179 00:11:50.179 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:50.179 ------------------------------------------------------------------------------------ 00:11:50.179 0,0 4256/s 175 MiB/s 0 0 00:11:50.179 ==================================================================================== 00:11:50.179 Total 4256/s 451 MiB/s 0 0' 00:11:50.179 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.179 12:31:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:50.179 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.179 12:31:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:50.179 12:31:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:50.179 12:31:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:50.179 12:31:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.179 12:31:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.179 12:31:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:50.179 12:31:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:50.179 12:31:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:50.179 12:31:32 -- accel/accel.sh@42 -- # jq -r . 00:11:50.179 [2024-10-01 12:31:32.204536] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:50.179 [2024-10-01 12:31:32.205139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57248 ] 00:11:50.179 [2024-10-01 12:31:32.369442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.179 [2024-10-01 12:31:32.552503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val=0x1 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val=decompress 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.438 12:31:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:50.438 12:31:32 -- accel/accel.sh@20 
-- # IFS=: 00:11:50.438 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.438 12:31:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:50.438 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val=software 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@23 -- # accel_module=software 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val=32 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val=32 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val=1 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val=Yes 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:50.439 12:31:32 -- accel/accel.sh@21 -- # val= 00:11:50.439 12:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # IFS=: 00:11:50.439 12:31:32 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@21 -- # val= 00:11:52.368 12:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # IFS=: 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@21 -- # val= 00:11:52.368 12:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # IFS=: 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@21 -- # val= 00:11:52.368 12:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # IFS=: 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@21 -- # 
val= 00:11:52.368 12:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # IFS=: 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@21 -- # val= 00:11:52.368 12:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # IFS=: 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@21 -- # val= 00:11:52.368 12:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # IFS=: 00:11:52.368 12:31:34 -- accel/accel.sh@20 -- # read -r var val 00:11:52.368 12:31:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:52.368 12:31:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:52.368 12:31:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:52.368 00:11:52.368 real 0m4.850s 00:11:52.368 user 0m4.349s 00:11:52.368 sys 0m0.292s 00:11:52.368 12:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.368 12:31:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.368 ************************************ 00:11:52.368 END TEST accel_decmop_full 00:11:52.368 ************************************ 00:11:52.368 12:31:34 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.368 12:31:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:52.368 12:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.368 12:31:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.368 ************************************ 00:11:52.368 START TEST accel_decomp_mcore 00:11:52.368 ************************************ 00:11:52.368 12:31:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.368 12:31:34 -- accel/accel.sh@16 -- # local accel_opc 00:11:52.368 12:31:34 -- accel/accel.sh@17 -- # local accel_module 00:11:52.368 12:31:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.368 12:31:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:52.368 12:31:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:52.368 12:31:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:52.368 12:31:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.368 12:31:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.368 12:31:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:52.368 12:31:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:52.368 12:31:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:52.368 12:31:34 -- accel/accel.sh@42 -- # jq -r . 00:11:52.368 [2024-10-01 12:31:34.622481] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:52.368 [2024-10-01 12:31:34.622649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57289 ] 00:11:52.368 [2024-10-01 12:31:34.814362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.627 [2024-10-01 12:31:35.000038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.627 [2024-10-01 12:31:35.000176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.627 [2024-10-01 12:31:35.000517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.627 [2024-10-01 12:31:35.000539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.530 12:31:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:54.530 00:11:54.530 SPDK Configuration: 00:11:54.530 Core mask: 0xf 00:11:54.530 00:11:54.530 Accel Perf Configuration: 00:11:54.530 Workload Type: decompress 00:11:54.530 Transfer size: 4096 bytes 00:11:54.530 Vector count 1 00:11:54.530 Module: software 00:11:54.530 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.530 Queue depth: 32 00:11:54.530 Allocate depth: 32 00:11:54.530 # threads/core: 1 00:11:54.530 Run time: 1 seconds 00:11:54.530 Verify: Yes 00:11:54.530 00:11:54.530 Running for 1 seconds... 00:11:54.530 00:11:54.530 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:54.530 ------------------------------------------------------------------------------------ 00:11:54.530 0,0 52384/s 96 MiB/s 0 0 00:11:54.530 3,0 51200/s 94 MiB/s 0 0 00:11:54.530 2,0 52864/s 97 MiB/s 0 0 00:11:54.530 1,0 52384/s 96 MiB/s 0 0 00:11:54.530 ==================================================================================== 00:11:54.530 Total 208832/s 815 MiB/s 0 0' 00:11:54.530 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:54.530 12:31:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:54.530 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:54.530 12:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:54.530 12:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:54.530 12:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:54.530 12:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.530 12:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.530 12:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:54.530 12:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:54.530 12:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:54.530 12:31:37 -- accel/accel.sh@42 -- # jq -r . 00:11:54.789 [2024-10-01 12:31:37.097484] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
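In the accel_decomp_mcore summary above, the 0xf core mask puts one worker thread on each of cores 0 through 3 (hence the four 'Reactor started' notices and the four Core,Thread rows), and the Total transfer rate is simply the sum of the per-core rows:

  echo $(( 52384 + 51200 + 52864 + 52384 ))   # 208832, the Total transfers/s above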
00:11:54.789 [2024-10-01 12:31:37.097721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57329 ] 00:11:54.789 [2024-10-01 12:31:37.279426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.047 [2024-10-01 12:31:37.458319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.047 [2024-10-01 12:31:37.458465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.047 [2024-10-01 12:31:37.458988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.047 [2024-10-01 12:31:37.459010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=0xf 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=decompress 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=software 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@23 -- # accel_module=software 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 
00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=32 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=32 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=1 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val=Yes 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:55.306 12:31:37 -- accel/accel.sh@21 -- # val= 00:11:55.306 12:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # IFS=: 00:11:55.306 12:31:37 -- accel/accel.sh@20 -- # read -r var val 00:11:57.205 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.205 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.205 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.205 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.205 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.205 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.205 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.205 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.205 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.205 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.205 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.206 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.206 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.206 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.206 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.206 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.206 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.206 12:31:39 -- 
accel/accel.sh@20 -- # read -r var val 00:11:57.206 12:31:39 -- accel/accel.sh@21 -- # val= 00:11:57.206 12:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # IFS=: 00:11:57.206 12:31:39 -- accel/accel.sh@20 -- # read -r var val 00:11:57.206 12:31:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:57.206 12:31:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:57.206 12:31:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.206 00:11:57.206 real 0m4.909s 00:11:57.206 user 0m14.204s 00:11:57.206 sys 0m0.380s 00:11:57.206 12:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.206 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 ************************************ 00:11:57.206 END TEST accel_decomp_mcore 00:11:57.206 ************************************ 00:11:57.206 12:31:39 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.206 12:31:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:57.206 12:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.206 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 ************************************ 00:11:57.206 START TEST accel_decomp_full_mcore 00:11:57.206 ************************************ 00:11:57.206 12:31:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.206 12:31:39 -- accel/accel.sh@16 -- # local accel_opc 00:11:57.206 12:31:39 -- accel/accel.sh@17 -- # local accel_module 00:11:57.206 12:31:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.206 12:31:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.206 12:31:39 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.206 12:31:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.206 12:31:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.206 12:31:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.206 12:31:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.206 12:31:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.206 12:31:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.206 12:31:39 -- accel/accel.sh@42 -- # jq -r . 00:11:57.206 [2024-10-01 12:31:39.573745] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:57.206 [2024-10-01 12:31:39.573903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57373 ] 00:11:57.464 [2024-10-01 12:31:39.743800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.464 [2024-10-01 12:31:39.946980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.464 [2024-10-01 12:31:39.947075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.465 [2024-10-01 12:31:39.947898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.465 [2024-10-01 12:31:39.947905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.992 12:31:42 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:59.992 00:11:59.992 SPDK Configuration: 00:11:59.992 Core mask: 0xf 00:11:59.992 00:11:59.992 Accel Perf Configuration: 00:11:59.992 Workload Type: decompress 00:11:59.992 Transfer size: 111250 bytes 00:11:59.992 Vector count 1 00:11:59.992 Module: software 00:11:59.992 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:59.992 Queue depth: 32 00:11:59.992 Allocate depth: 32 00:11:59.992 # threads/core: 1 00:11:59.992 Run time: 1 seconds 00:11:59.992 Verify: Yes 00:11:59.992 00:11:59.992 Running for 1 seconds... 00:11:59.992 00:11:59.992 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:59.992 ------------------------------------------------------------------------------------ 00:11:59.992 0,0 4000/s 165 MiB/s 0 0 00:11:59.992 3,0 4160/s 171 MiB/s 0 0 00:11:59.992 2,0 3968/s 163 MiB/s 0 0 00:11:59.992 1,0 4192/s 173 MiB/s 0 0 00:11:59.992 ==================================================================================== 00:11:59.992 Total 16320/s 1731 MiB/s 0 0' 00:11:59.992 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:11:59.992 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:11:59.992 12:31:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:59.992 12:31:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:59.992 12:31:42 -- accel/accel.sh@12 -- # build_accel_config 00:11:59.993 12:31:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:59.993 12:31:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:59.993 12:31:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:59.993 12:31:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:59.993 12:31:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:59.993 12:31:42 -- accel/accel.sh@41 -- # local IFS=, 00:11:59.993 12:31:42 -- accel/accel.sh@42 -- # jq -r . 00:11:59.993 [2024-10-01 12:31:42.064153] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:59.993 [2024-10-01 12:31:42.064307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57402 ] 00:11:59.993 [2024-10-01 12:31:42.239600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.993 [2024-10-01 12:31:42.432189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.993 [2024-10-01 12:31:42.432256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.993 [2024-10-01 12:31:42.432786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.993 [2024-10-01 12:31:42.432794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=0xf 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=decompress 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=software 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@23 -- # accel_module=software 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 
00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=32 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=32 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=1 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val=Yes 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:00.251 12:31:42 -- accel/accel.sh@21 -- # val= 00:12:00.251 12:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # IFS=: 00:12:00.251 12:31:42 -- accel/accel.sh@20 -- # read -r var val 00:12:02.152 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.152 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.152 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.152 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- 
accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@21 -- # val= 00:12:02.153 12:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # IFS=: 00:12:02.153 12:31:44 -- accel/accel.sh@20 -- # read -r var val 00:12:02.153 12:31:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:02.153 12:31:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:02.153 12:31:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:02.153 00:12:02.153 real 0m4.978s 00:12:02.153 user 0m14.581s 00:12:02.153 sys 0m0.327s 00:12:02.153 12:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.153 ************************************ 00:12:02.153 END TEST accel_decomp_full_mcore 00:12:02.153 ************************************ 00:12:02.153 12:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:02.153 12:31:44 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.153 12:31:44 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:02.153 12:31:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.153 12:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:02.153 ************************************ 00:12:02.153 START TEST accel_decomp_mthread 00:12:02.153 ************************************ 00:12:02.153 12:31:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.153 12:31:44 -- accel/accel.sh@16 -- # local accel_opc 00:12:02.153 12:31:44 -- accel/accel.sh@17 -- # local accel_module 00:12:02.153 12:31:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.153 12:31:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:02.153 12:31:44 -- accel/accel.sh@12 -- # build_accel_config 00:12:02.153 12:31:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:02.153 12:31:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.153 12:31:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.153 12:31:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:02.153 12:31:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:02.153 12:31:44 -- accel/accel.sh@41 -- # local IFS=, 00:12:02.153 12:31:44 -- accel/accel.sh@42 -- # jq -r . 00:12:02.153 [2024-10-01 12:31:44.586741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:02.153 [2024-10-01 12:31:44.587305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57457 ] 00:12:02.411 [2024-10-01 12:31:44.752342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.669 [2024-10-01 12:31:44.980249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.590 12:31:47 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:04.590 00:12:04.590 SPDK Configuration: 00:12:04.590 Core mask: 0x1 00:12:04.590 00:12:04.590 Accel Perf Configuration: 00:12:04.590 Workload Type: decompress 00:12:04.590 Transfer size: 4096 bytes 00:12:04.590 Vector count 1 00:12:04.590 Module: software 00:12:04.590 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:04.590 Queue depth: 32 00:12:04.590 Allocate depth: 32 00:12:04.590 # threads/core: 2 00:12:04.590 Run time: 1 seconds 00:12:04.590 Verify: Yes 00:12:04.590 00:12:04.590 Running for 1 seconds... 00:12:04.590 00:12:04.590 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:04.590 ------------------------------------------------------------------------------------ 00:12:04.590 0,1 28832/s 53 MiB/s 0 0 00:12:04.590 0,0 28672/s 52 MiB/s 0 0 00:12:04.590 ==================================================================================== 00:12:04.590 Total 57504/s 224 MiB/s 0 0' 00:12:04.590 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:04.590 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:04.590 12:31:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:04.590 12:31:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:04.590 12:31:47 -- accel/accel.sh@12 -- # build_accel_config 00:12:04.590 12:31:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:04.590 12:31:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:04.590 12:31:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:04.590 12:31:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:04.590 12:31:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:04.590 12:31:47 -- accel/accel.sh@41 -- # local IFS=, 00:12:04.590 12:31:47 -- accel/accel.sh@42 -- # jq -r . 00:12:04.590 [2024-10-01 12:31:47.054508] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:12:04.590 [2024-10-01 12:31:47.054725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57483 ] 00:12:04.856 [2024-10-01 12:31:47.224286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.116 [2024-10-01 12:31:47.409559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=0x1 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=decompress 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=software 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@23 -- # accel_module=software 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=32 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- 
accel/accel.sh@21 -- # val=32 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=2 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val=Yes 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:05.116 12:31:47 -- accel/accel.sh@21 -- # val= 00:12:05.116 12:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # IFS=: 00:12:05.116 12:31:47 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@21 -- # val= 00:12:07.022 12:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # IFS=: 00:12:07.022 12:31:49 -- accel/accel.sh@20 -- # read -r var val 00:12:07.022 12:31:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:07.022 12:31:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:07.022 12:31:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.022 00:12:07.022 real 0m4.890s 00:12:07.022 user 0m4.374s 00:12:07.022 sys 0m0.303s 00:12:07.022 12:31:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.022 ************************************ 00:12:07.022 END TEST accel_decomp_mthread 00:12:07.022 
************************************ 00:12:07.022 12:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 12:31:49 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.022 12:31:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:07.022 12:31:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.022 12:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:07.022 ************************************ 00:12:07.022 START TEST accel_deomp_full_mthread 00:12:07.022 ************************************ 00:12:07.022 12:31:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.022 12:31:49 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.022 12:31:49 -- accel/accel.sh@17 -- # local accel_module 00:12:07.022 12:31:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.022 12:31:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.022 12:31:49 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.022 12:31:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.022 12:31:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.022 12:31:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.022 12:31:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.022 12:31:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.022 12:31:49 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.022 12:31:49 -- accel/accel.sh@42 -- # jq -r . 00:12:07.022 [2024-10-01 12:31:49.540470] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:07.022 [2024-10-01 12:31:49.540697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57530 ] 00:12:07.281 [2024-10-01 12:31:49.713344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.541 [2024-10-01 12:31:49.928222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.473 12:31:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:09.473 00:12:09.473 SPDK Configuration: 00:12:09.473 Core mask: 0x1 00:12:09.473 00:12:09.473 Accel Perf Configuration: 00:12:09.473 Workload Type: decompress 00:12:09.473 Transfer size: 111250 bytes 00:12:09.473 Vector count 1 00:12:09.473 Module: software 00:12:09.473 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:09.473 Queue depth: 32 00:12:09.473 Allocate depth: 32 00:12:09.473 # threads/core: 2 00:12:09.473 Run time: 1 seconds 00:12:09.473 Verify: Yes 00:12:09.473 00:12:09.473 Running for 1 seconds... 
00:12:09.473 00:12:09.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:09.473 ------------------------------------------------------------------------------------ 00:12:09.473 0,1 2112/s 87 MiB/s 0 0 00:12:09.473 0,0 2112/s 87 MiB/s 0 0 00:12:09.473 ==================================================================================== 00:12:09.473 Total 4224/s 448 MiB/s 0 0' 00:12:09.473 12:31:51 -- accel/accel.sh@20 -- # IFS=: 00:12:09.473 12:31:51 -- accel/accel.sh@20 -- # read -r var val 00:12:09.473 12:31:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:09.473 12:31:51 -- accel/accel.sh@12 -- # build_accel_config 00:12:09.473 12:31:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:09.474 12:31:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:09.474 12:31:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:09.474 12:31:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:09.474 12:31:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:09.474 12:31:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:09.474 12:31:51 -- accel/accel.sh@41 -- # local IFS=, 00:12:09.474 12:31:51 -- accel/accel.sh@42 -- # jq -r . 00:12:09.732 [2024-10-01 12:31:52.039614] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:09.732 [2024-10-01 12:31:52.039787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57561 ] 00:12:09.732 [2024-10-01 12:31:52.209859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.990 [2024-10-01 12:31:52.398983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val=0x1 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val=decompress 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val=software 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@23 -- # accel_module=software 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:10.249 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.249 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.249 12:31:52 -- accel/accel.sh@21 -- # val=32 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.250 12:31:52 -- accel/accel.sh@21 -- # val=32 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.250 12:31:52 -- accel/accel.sh@21 -- # val=2 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.250 12:31:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.250 12:31:52 -- accel/accel.sh@21 -- # val=Yes 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.250 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:10.250 12:31:52 -- accel/accel.sh@21 -- # val= 00:12:10.250 12:31:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # IFS=: 00:12:10.250 12:31:52 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # 
read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 12:31:54 -- accel/accel.sh@21 -- # val= 00:12:12.155 12:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # IFS=: 00:12:12.155 12:31:54 -- accel/accel.sh@20 -- # read -r var val 00:12:12.155 ************************************ 00:12:12.155 END TEST accel_deomp_full_mthread 00:12:12.155 ************************************ 00:12:12.155 12:31:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:12.155 12:31:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:12.155 12:31:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.155 00:12:12.155 real 0m4.964s 00:12:12.155 user 0m4.448s 00:12:12.155 sys 0m0.310s 00:12:12.155 12:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.155 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:12.155 12:31:54 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:12.155 12:31:54 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:12.155 12:31:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:12.155 12:31:54 -- accel/accel.sh@129 -- # build_accel_config 00:12:12.155 12:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.155 12:31:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:12.155 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:12.155 12:31:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.155 12:31:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.155 12:31:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:12.155 12:31:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:12.155 12:31:54 -- accel/accel.sh@41 -- # local IFS=, 00:12:12.155 12:31:54 -- accel/accel.sh@42 -- # jq -r . 00:12:12.155 ************************************ 00:12:12.155 START TEST accel_dif_functional_tests 00:12:12.155 ************************************ 00:12:12.155 12:31:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:12.155 [2024-10-01 12:31:54.591572] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:12:12.155 [2024-10-01 12:31:54.591749] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57609 ] 00:12:12.414 [2024-10-01 12:31:54.764068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.673 [2024-10-01 12:31:54.955526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.673 [2024-10-01 12:31:54.955668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.673 [2024-10-01 12:31:54.955688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.932 00:12:12.932 00:12:12.932 CUnit - A unit testing framework for C - Version 2.1-3 00:12:12.932 http://cunit.sourceforge.net/ 00:12:12.932 00:12:12.932 00:12:12.932 Suite: accel_dif 00:12:12.932 Test: verify: DIF generated, GUARD check ...passed 00:12:12.932 Test: verify: DIF generated, APPTAG check ...passed 00:12:12.932 Test: verify: DIF generated, REFTAG check ...passed 00:12:12.932 Test: verify: DIF not generated, GUARD check ...[2024-10-01 12:31:55.237806] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:12.932 passed 00:12:12.932 Test: verify: DIF not generated, APPTAG check ...[2024-10-01 12:31:55.237921] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:12.932 [2024-10-01 12:31:55.237990] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:12.932 passed 00:12:12.932 Test: verify: DIF not generated, REFTAG check ...[2024-10-01 12:31:55.238172] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:12.932 passed 00:12:12.932 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:12.932 Test: verify: APPTAG incorrect, APPTAG check ...[2024-10-01 12:31:55.238250] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:12.932 [2024-10-01 12:31:55.238360] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:12.932 passed 00:12:12.932 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-10-01 12:31:55.238701] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:12.932 passed 00:12:12.932 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:12.932 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:12.932 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-10-01 12:31:55.239180] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:12.932 passed 00:12:12.932 Test: generate copy: DIF generated, GUARD check ...passed 00:12:12.932 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:12.932 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:12.932 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:12.933 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:12.933 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:12.933 Test: generate copy: iovecs-len validate ...[2024-10-01 12:31:55.240284] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:12.933 passed 00:12:12.933 Test: generate copy: buffer alignment validate ...passed 00:12:12.933 00:12:12.933 Run Summary: Type Total Ran Passed Failed Inactive 00:12:12.933 suites 1 1 n/a 0 0 00:12:12.933 tests 20 20 20 0 0 00:12:12.933 asserts 204 204 204 0 n/a 00:12:12.933 00:12:12.933 Elapsed time = 0.007 seconds 00:12:13.871 00:12:13.871 real 0m1.830s 00:12:13.871 user 0m3.490s 00:12:13.871 sys 0m0.222s 00:12:13.871 12:31:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.871 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.871 ************************************ 00:12:13.871 END TEST accel_dif_functional_tests 00:12:13.871 ************************************ 00:12:13.871 00:12:13.871 real 1m48.157s 00:12:13.871 user 1m58.565s 00:12:13.871 sys 0m8.001s 00:12:13.871 12:31:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.871 ************************************ 00:12:13.871 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.871 END TEST accel 00:12:13.871 ************************************ 00:12:14.130 12:31:56 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:14.130 12:31:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:14.130 12:31:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.130 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:14.130 ************************************ 00:12:14.130 START TEST accel_rpc 00:12:14.130 ************************************ 00:12:14.130 12:31:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:14.130 * Looking for test storage... 00:12:14.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:14.130 12:31:56 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:14.130 12:31:56 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57690 00:12:14.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.130 12:31:56 -- accel/accel_rpc.sh@15 -- # waitforlisten 57690 00:12:14.130 12:31:56 -- common/autotest_common.sh@819 -- # '[' -z 57690 ']' 00:12:14.130 12:31:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.130 12:31:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:14.130 12:31:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.130 12:31:56 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:14.130 12:31:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:14.130 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:14.130 [2024-10-01 12:31:56.593944] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
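The accel_rpc trace below drives this spdk_tgt instance (held at --wait-for-rpc) through the opcode-assignment RPCs. A minimal sketch of the equivalent manual sequence over the default /var/tmp/spdk.sock socket, reusing the helper and arguments that appear in the trace (illustrative only):

    # Assign the 'copy' opcode to the software module while initialization is held
    # back by --wait-for-rpc, then start the framework and read the assignment back.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # prints: software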
00:12:14.130 [2024-10-01 12:31:56.594119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57690 ] 00:12:14.389 [2024-10-01 12:31:56.755165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.647 [2024-10-01 12:31:56.950277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.648 [2024-10-01 12:31:56.950527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.215 12:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:15.215 12:31:57 -- common/autotest_common.sh@852 -- # return 0 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:15.215 12:31:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:15.215 12:31:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:15.215 12:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.215 ************************************ 00:12:15.215 START TEST accel_assign_opcode 00:12:15.215 ************************************ 00:12:15.215 12:31:57 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:15.215 12:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.215 12:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.215 [2024-10-01 12:31:57.583433] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:15.215 12:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:15.215 12:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.215 12:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.215 [2024-10-01 12:31:57.591379] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:15.215 12:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.215 12:31:57 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:15.216 12:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.216 12:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 12:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.783 12:31:58 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:15.783 12:31:58 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:15.783 12:31:58 -- accel/accel_rpc.sh@42 -- # grep software 00:12:15.783 12:31:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.783 12:31:58 -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 12:31:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.042 software 00:12:16.042 00:12:16.042 real 0m0.752s 00:12:16.042 ************************************ 00:12:16.042 END TEST accel_assign_opcode 00:12:16.042 ************************************ 00:12:16.042 user 0m0.053s 00:12:16.042 sys 0m0.009s 00:12:16.042 12:31:58 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.042 12:31:58 -- common/autotest_common.sh@10 -- # set +x 00:12:16.042 12:31:58 -- accel/accel_rpc.sh@55 -- # killprocess 57690 00:12:16.042 12:31:58 -- common/autotest_common.sh@926 -- # '[' -z 57690 ']' 00:12:16.042 12:31:58 -- common/autotest_common.sh@930 -- # kill -0 57690 00:12:16.042 12:31:58 -- common/autotest_common.sh@931 -- # uname 00:12:16.042 12:31:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:16.042 12:31:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57690 00:12:16.042 killing process with pid 57690 00:12:16.042 12:31:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:16.042 12:31:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:16.042 12:31:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57690' 00:12:16.042 12:31:58 -- common/autotest_common.sh@945 -- # kill 57690 00:12:16.042 12:31:58 -- common/autotest_common.sh@950 -- # wait 57690 00:12:18.587 00:12:18.587 real 0m4.098s 00:12:18.587 user 0m4.226s 00:12:18.587 sys 0m0.440s 00:12:18.587 ************************************ 00:12:18.587 END TEST accel_rpc 00:12:18.587 ************************************ 00:12:18.587 12:32:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.587 12:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.587 12:32:00 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.587 12:32:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:18.587 12:32:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:18.587 12:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.587 ************************************ 00:12:18.587 START TEST app_cmdline 00:12:18.587 ************************************ 00:12:18.587 12:32:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.587 * Looking for test storage... 00:12:18.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:18.587 12:32:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:18.587 12:32:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57805 00:12:18.587 12:32:00 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:18.587 12:32:00 -- app/cmdline.sh@18 -- # waitforlisten 57805 00:12:18.587 12:32:00 -- common/autotest_common.sh@819 -- # '[' -z 57805 ']' 00:12:18.587 12:32:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.587 12:32:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:18.587 12:32:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.587 12:32:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:18.587 12:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.587 [2024-10-01 12:32:00.767043] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
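The app_cmdline trace below runs against this spdk_tgt instance, which was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable and any other call fails with JSON-RPC error -32601. A rough sketch of the same checks, using the helper seen in the trace (illustrative only):

    # Allowed methods succeed; a method outside the allowlist, such as
    # env_dpdk_get_mem_stats, is rejected with "Method not found" (-32601).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # fails: Method not found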
00:12:18.588 [2024-10-01 12:32:00.767471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57805 ] 00:12:18.588 [2024-10-01 12:32:00.946887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.847 [2024-10-01 12:32:01.172010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:18.847 [2024-10-01 12:32:01.172269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.222 12:32:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:20.222 12:32:02 -- common/autotest_common.sh@852 -- # return 0 00:12:20.222 12:32:02 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:20.222 { 00:12:20.222 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:12:20.222 "fields": { 00:12:20.222 "major": 24, 00:12:20.222 "minor": 1, 00:12:20.222 "patch": 1, 00:12:20.222 "suffix": "-pre", 00:12:20.222 "commit": "726a04d70" 00:12:20.222 } 00:12:20.222 } 00:12:20.222 12:32:02 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:20.222 12:32:02 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:20.222 12:32:02 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:20.222 12:32:02 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:20.222 12:32:02 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:20.222 12:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.222 12:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:20.222 12:32:02 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:20.222 12:32:02 -- app/cmdline.sh@26 -- # sort 00:12:20.222 12:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.481 12:32:02 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:20.481 12:32:02 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:20.481 12:32:02 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.481 12:32:02 -- common/autotest_common.sh@640 -- # local es=0 00:12:20.481 12:32:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.481 12:32:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.481 12:32:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.481 12:32:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.481 12:32:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.481 12:32:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.481 12:32:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.481 12:32:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.481 12:32:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:20.481 12:32:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.740 request: 00:12:20.740 { 00:12:20.740 "method": "env_dpdk_get_mem_stats", 00:12:20.740 "req_id": 1 00:12:20.740 } 00:12:20.740 Got 
JSON-RPC error response 00:12:20.740 response: 00:12:20.740 { 00:12:20.740 "code": -32601, 00:12:20.740 "message": "Method not found" 00:12:20.740 } 00:12:20.740 12:32:03 -- common/autotest_common.sh@643 -- # es=1 00:12:20.740 12:32:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:20.740 12:32:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:20.740 12:32:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:20.740 12:32:03 -- app/cmdline.sh@1 -- # killprocess 57805 00:12:20.740 12:32:03 -- common/autotest_common.sh@926 -- # '[' -z 57805 ']' 00:12:20.740 12:32:03 -- common/autotest_common.sh@930 -- # kill -0 57805 00:12:20.740 12:32:03 -- common/autotest_common.sh@931 -- # uname 00:12:20.740 12:32:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.740 12:32:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57805 00:12:20.740 killing process with pid 57805 00:12:20.740 12:32:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:20.740 12:32:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:20.740 12:32:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57805' 00:12:20.740 12:32:03 -- common/autotest_common.sh@945 -- # kill 57805 00:12:20.740 12:32:03 -- common/autotest_common.sh@950 -- # wait 57805 00:12:23.275 ************************************ 00:12:23.275 END TEST app_cmdline 00:12:23.275 ************************************ 00:12:23.275 00:12:23.275 real 0m4.769s 00:12:23.275 user 0m5.549s 00:12:23.275 sys 0m0.515s 00:12:23.275 12:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 12:32:05 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:23.275 12:32:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:23.275 12:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 ************************************ 00:12:23.275 START TEST version 00:12:23.275 ************************************ 00:12:23.275 12:32:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:23.275 * Looking for test storage... 
00:12:23.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:23.275 12:32:05 -- app/version.sh@17 -- # get_header_version major 00:12:23.275 12:32:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.275 12:32:05 -- app/version.sh@14 -- # cut -f2 00:12:23.275 12:32:05 -- app/version.sh@14 -- # tr -d '"' 00:12:23.275 12:32:05 -- app/version.sh@17 -- # major=24 00:12:23.275 12:32:05 -- app/version.sh@18 -- # get_header_version minor 00:12:23.275 12:32:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.275 12:32:05 -- app/version.sh@14 -- # cut -f2 00:12:23.275 12:32:05 -- app/version.sh@14 -- # tr -d '"' 00:12:23.275 12:32:05 -- app/version.sh@18 -- # minor=1 00:12:23.275 12:32:05 -- app/version.sh@19 -- # get_header_version patch 00:12:23.275 12:32:05 -- app/version.sh@14 -- # cut -f2 00:12:23.275 12:32:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.275 12:32:05 -- app/version.sh@14 -- # tr -d '"' 00:12:23.275 12:32:05 -- app/version.sh@19 -- # patch=1 00:12:23.275 12:32:05 -- app/version.sh@20 -- # get_header_version suffix 00:12:23.275 12:32:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:23.275 12:32:05 -- app/version.sh@14 -- # cut -f2 00:12:23.275 12:32:05 -- app/version.sh@14 -- # tr -d '"' 00:12:23.275 12:32:05 -- app/version.sh@20 -- # suffix=-pre 00:12:23.275 12:32:05 -- app/version.sh@22 -- # version=24.1 00:12:23.275 12:32:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:23.275 12:32:05 -- app/version.sh@25 -- # version=24.1.1 00:12:23.275 12:32:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:23.275 12:32:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:23.275 12:32:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:23.275 12:32:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:23.275 12:32:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:23.275 00:12:23.275 real 0m0.148s 00:12:23.275 user 0m0.084s 00:12:23.275 sys 0m0.090s 00:12:23.275 12:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 ************************************ 00:12:23.275 END TEST version 00:12:23.275 ************************************ 00:12:23.275 12:32:05 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@204 -- # uname -s 00:12:23.275 12:32:05 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:12:23.275 12:32:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:12:23.275 12:32:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:12:23.275 12:32:05 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@268 -- # timing_exit lib 00:12:23.275 12:32:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 12:32:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@319 -- # '[' 1 -eq 1 ']' 00:12:23.275 12:32:05 -- spdk/autotest.sh@320 -- # run_test lvol /home/vagrant/spdk_repo/spdk/test/lvol/lvol.sh 00:12:23.275 12:32:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:23.275 12:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 ************************************ 00:12:23.275 START TEST lvol 00:12:23.275 ************************************ 00:12:23.275 12:32:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/lvol.sh 00:12:23.275 * Looking for test storage... 00:12:23.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:12:23.275 12:32:05 -- lvol/lvol.sh@11 -- # timing_enter lvol 00:12:23.275 12:32:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 12:32:05 -- lvol/lvol.sh@13 -- # timing_enter basic 00:12:23.275 12:32:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 12:32:05 -- lvol/lvol.sh@14 -- # run_test lvol_basic /home/vagrant/spdk_repo/spdk/test/lvol/basic.sh 00:12:23.275 12:32:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:23.275 12:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.275 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.275 ************************************ 00:12:23.275 START TEST lvol_basic 00:12:23.275 ************************************ 00:12:23.275 12:32:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/basic.sh 00:12:23.275 * Looking for test storage... 
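Before the lvol output continues below, the version check that just finished is worth spelling out: app/version.sh scrapes SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with the grep | cut | tr pipeline shown above, assembles 24.1.1rc0 from them, and compares that against the bundled Python package. The same two steps, run from the repository root (the relative paths are assumptions; the log uses absolute ones):

    # extract one component the way get_header_version does (major shown here)
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'

    # cross-check against the in-tree Python package, as version.sh does
    PYTHONPATH=./python python3 -c 'import spdk; print(spdk.__version__)'
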
00:12:23.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:12:23.534 12:32:05 -- lvol/basic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:23.534 12:32:05 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:23.534 12:32:05 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:23.534 12:32:05 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:23.534 12:32:05 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:23.534 12:32:05 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:23.534 12:32:05 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:23.534 12:32:05 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:23.534 12:32:05 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:23.534 12:32:05 -- lvol/basic.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:23.534 12:32:05 -- bdev/nbd_common.sh@6 -- # set -e 00:12:23.534 12:32:05 -- lvol/basic.sh@572 -- # spdk_pid=58025 00:12:23.534 12:32:05 -- lvol/basic.sh@571 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:23.534 12:32:05 -- lvol/basic.sh@573 -- # trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:23.534 12:32:05 -- lvol/basic.sh@574 -- # waitforlisten 58025 00:12:23.534 12:32:05 -- common/autotest_common.sh@819 -- # '[' -z 58025 ']' 00:12:23.534 12:32:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.534 12:32:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:23.534 12:32:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.534 12:32:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:23.534 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.534 [2024-10-01 12:32:05.902814] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
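basic.sh follows the usual autotest pattern visible above: set the size constants from lvol/common.sh, start spdk_tgt in the background, trap killprocess so the target is torn down on any exit, and block in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A rough stand-alone equivalent (a sketch only; the polling loop here is an assumption, not the waitforlisten helper itself):

    # start the target and remember its pid so it can be killed on exit
    ./build/bin/spdk_tgt &
    spdk_pid=$!
    trap 'kill $spdk_pid' SIGINT SIGTERM EXIT

    # wait until the default RPC socket answers before issuing lvol RPCs
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
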
00:12:23.534 [2024-10-01 12:32:05.903159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58025 ] 00:12:23.793 [2024-10-01 12:32:06.065648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.793 [2024-10-01 12:32:06.291864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:23.793 [2024-10-01 12:32:06.292391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.168 12:32:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:25.168 12:32:07 -- common/autotest_common.sh@852 -- # return 0 00:12:25.168 12:32:07 -- lvol/basic.sh@576 -- # run_test test_construct_lvs test_construct_lvs 00:12:25.168 12:32:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:25.168 12:32:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:25.168 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:25.168 ************************************ 00:12:25.168 START TEST test_construct_lvs 00:12:25.168 ************************************ 00:12:25.168 12:32:07 -- common/autotest_common.sh@1104 -- # test_construct_lvs 00:12:25.168 12:32:07 -- lvol/basic.sh@15 -- # rpc_cmd bdev_malloc_create 128 512 00:12:25.168 12:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.168 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 12:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.428 12:32:07 -- lvol/basic.sh@15 -- # malloc_name=Malloc0 00:12:25.428 12:32:07 -- lvol/basic.sh@18 -- # rpc_cmd bdev_lvol_create_lvstore Malloc0 lvs_test 00:12:25.428 12:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.428 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 12:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.428 12:32:07 -- lvol/basic.sh@18 -- # lvs_uuid=7181d8ec-d70e-4040-b171-17609f492135 00:12:25.428 12:32:07 -- lvol/basic.sh@19 -- # rpc_cmd bdev_lvol_get_lvstores -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.428 12:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.428 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 12:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.428 12:32:07 -- lvol/basic.sh@19 -- # lvs='[ 00:12:25.428 { 00:12:25.428 "uuid": "7181d8ec-d70e-4040-b171-17609f492135", 00:12:25.428 "name": "lvs_test", 00:12:25.428 "base_bdev": "Malloc0", 00:12:25.428 "total_data_clusters": 31, 00:12:25.428 "free_clusters": 31, 00:12:25.428 "block_size": 512, 00:12:25.428 "cluster_size": 4194304 00:12:25.428 } 00:12:25.428 ]' 00:12:25.428 12:32:07 -- lvol/basic.sh@22 -- # dummy_uuid=00000000-0000-0000-0000-000000000000 00:12:25.428 12:32:07 -- lvol/basic.sh@23 -- # NOT rpc_cmd bdev_lvol_delete_lvstore -u 00000000-0000-0000-0000-000000000000 00:12:25.428 12:32:07 -- common/autotest_common.sh@640 -- # local es=0 00:12:25.428 12:32:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_delete_lvstore -u 00000000-0000-0000-0000-000000000000 00:12:25.428 12:32:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:25.428 12:32:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.428 12:32:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:25.428 12:32:07 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:12:25.428 12:32:07 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_delete_lvstore -u 00000000-0000-0000-0000-000000000000 00:12:25.428 12:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.428 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 request: 00:12:25.428 { 00:12:25.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.428 "method": "bdev_lvol_delete_lvstore", 00:12:25.428 "req_id": 1 00:12:25.428 } 00:12:25.428 Got JSON-RPC error response 00:12:25.428 response: 00:12:25.428 { 00:12:25.428 "code": -19, 00:12:25.428 "message": "No such device" 00:12:25.428 } 00:12:25.428 12:32:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:25.428 12:32:07 -- common/autotest_common.sh@643 -- # es=1 00:12:25.428 12:32:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:25.428 12:32:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:25.428 12:32:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:25.428 12:32:07 -- lvol/basic.sh@25 -- # rpc_cmd bdev_lvol_get_lvstores -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.428 12:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.428 12:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:25.428 [ 00:12:25.428 { 00:12:25.428 "uuid": "7181d8ec-d70e-4040-b171-17609f492135", 00:12:25.428 "name": "lvs_test", 00:12:25.428 "base_bdev": "Malloc0", 00:12:25.428 "total_data_clusters": 31, 00:12:25.428 "free_clusters": 31, 00:12:25.428 "block_size": 512, 00:12:25.428 "cluster_size": 4194304 00:12:25.428 } 00:12:25.428 ] 00:12:25.428 12:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.428 12:32:07 -- lvol/basic.sh@28 -- # jq -r '.[0].uuid' 00:12:25.428 12:32:07 -- lvol/basic.sh@28 -- # '[' 7181d8ec-d70e-4040-b171-17609f492135 = 7181d8ec-d70e-4040-b171-17609f492135 ']' 00:12:25.428 12:32:07 -- lvol/basic.sh@29 -- # jq -r '.[0].name' 00:12:25.686 12:32:07 -- lvol/basic.sh@29 -- # '[' lvs_test = lvs_test ']' 00:12:25.686 12:32:07 -- lvol/basic.sh@30 -- # jq -r '.[0].base_bdev' 00:12:25.686 12:32:08 -- lvol/basic.sh@30 -- # '[' Malloc0 = Malloc0 ']' 00:12:25.686 12:32:08 -- lvol/basic.sh@33 -- # jq -r '.[0].cluster_size' 00:12:25.686 12:32:08 -- lvol/basic.sh@33 -- # cluster_size=4194304 00:12:25.686 12:32:08 -- lvol/basic.sh@34 -- # '[' 4194304 = 4194304 ']' 00:12:25.687 12:32:08 -- lvol/basic.sh@35 -- # jq -r '.[0].total_data_clusters' 00:12:25.687 12:32:08 -- lvol/basic.sh@35 -- # total_clusters=31 00:12:25.687 12:32:08 -- lvol/basic.sh@36 -- # jq -r '.[0].free_clusters' 00:12:25.687 12:32:08 -- lvol/basic.sh@36 -- # '[' 31 = 31 ']' 00:12:25.687 12:32:08 -- lvol/basic.sh@37 -- # '[' 130023424 = 130023424 ']' 00:12:25.687 12:32:08 -- lvol/basic.sh@40 -- # rpc_cmd bdev_lvol_delete_lvstore -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.687 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:25.687 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:25.687 12:32:08 -- lvol/basic.sh@41 -- # NOT rpc_cmd bdev_lvol_get_lvstores -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@640 -- # local es=0 00:12:25.687 12:32:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:25.687 12:32:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 
00:12:25.687 12:32:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:25.687 12:32:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.687 12:32:08 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.687 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:25.687 request: 00:12:25.687 { 00:12:25.687 "uuid": "7181d8ec-d70e-4040-b171-17609f492135", 00:12:25.687 "method": "bdev_lvol_get_lvstores", 00:12:25.687 "req_id": 1 00:12:25.687 } 00:12:25.687 Got JSON-RPC error response 00:12:25.687 response: 00:12:25.687 { 00:12:25.687 "code": -19, 00:12:25.687 "message": "No such device" 00:12:25.687 } 00:12:25.687 12:32:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:25.687 12:32:08 -- common/autotest_common.sh@643 -- # es=1 00:12:25.687 12:32:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:25.687 12:32:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:25.687 12:32:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:25.687 12:32:08 -- lvol/basic.sh@43 -- # NOT rpc_cmd bdev_lvol_delete_lvstore -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@640 -- # local es=0 00:12:25.687 12:32:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_delete_lvstore -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:25.687 12:32:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.687 12:32:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:25.687 12:32:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:25.687 12:32:08 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_delete_lvstore -u 7181d8ec-d70e-4040-b171-17609f492135 00:12:25.687 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.687 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:25.687 request: 00:12:25.687 { 00:12:25.687 "uuid": "7181d8ec-d70e-4040-b171-17609f492135", 00:12:25.687 "method": "bdev_lvol_delete_lvstore", 00:12:25.687 "req_id": 1 00:12:25.687 } 00:12:25.687 Got JSON-RPC error response 00:12:25.687 response: 00:12:25.687 { 00:12:25.687 "code": -19, 00:12:25.687 "message": "No such device" 00:12:25.687 } 00:12:25.687 12:32:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:25.687 12:32:08 -- common/autotest_common.sh@643 -- # es=1 00:12:25.687 12:32:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:25.687 12:32:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:25.687 12:32:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:25.687 12:32:08 -- lvol/basic.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:25.687 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:25.687 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.254 12:32:08 -- lvol/basic.sh@46 -- # check_leftover_devices 00:12:26.254 12:32:08 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:26.254 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.254 12:32:08 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:26.254 12:32:08 -- 
lvol/common.sh@26 -- # jq length 00:12:26.254 12:32:08 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:26.254 12:32:08 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:26.254 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.254 12:32:08 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:26.254 12:32:08 -- lvol/common.sh@28 -- # jq length 00:12:26.254 ************************************ 00:12:26.254 END TEST test_construct_lvs 00:12:26.254 ************************************ 00:12:26.254 12:32:08 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:26.254 00:12:26.254 real 0m0.958s 00:12:26.254 user 0m0.381s 00:12:26.254 sys 0m0.050s 00:12:26.254 12:32:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 12:32:08 -- lvol/basic.sh@577 -- # run_test test_construct_lvs_nonexistent_bdev test_construct_lvs_nonexistent_bdev 00:12:26.254 12:32:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:26.254 12:32:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 ************************************ 00:12:26.254 START TEST test_construct_lvs_nonexistent_bdev 00:12:26.254 ************************************ 00:12:26.254 12:32:08 -- common/autotest_common.sh@1104 -- # test_construct_lvs_nonexistent_bdev 00:12:26.254 12:32:08 -- lvol/basic.sh@53 -- # rpc_cmd bdev_lvol_create_lvstore NotMalloc lvs_test 00:12:26.254 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 [2024-10-01 12:32:08.702974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: NotMalloc 00:12:26.254 [2024-10-01 12:32:08.703222] vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:12:26.254 request: 00:12:26.254 { 00:12:26.254 "bdev_name": "NotMalloc", 00:12:26.254 "lvs_name": "lvs_test", 00:12:26.254 "method": "bdev_lvol_create_lvstore", 00:12:26.254 "req_id": 1 00:12:26.254 } 00:12:26.254 Got JSON-RPC error response 00:12:26.254 response: 00:12:26.254 { 00:12:26.254 "code": -19, 00:12:26.254 "message": "No such device" 00:12:26.254 } 00:12:26.254 ************************************ 00:12:26.254 END TEST test_construct_lvs_nonexistent_bdev 00:12:26.254 ************************************ 00:12:26.254 12:32:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:26.254 12:32:08 -- lvol/basic.sh@54 -- # return 0 00:12:26.254 00:12:26.254 real 0m0.011s 00:12:26.254 user 0m0.003s 00:12:26.254 sys 0m0.000s 00:12:26.254 12:32:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 12:32:08 -- lvol/basic.sh@578 -- # run_test test_construct_two_lvs_on_the_same_bdev test_construct_two_lvs_on_the_same_bdev 00:12:26.254 12:32:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:26.254 12:32:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.254 ************************************ 00:12:26.254 START TEST test_construct_two_lvs_on_the_same_bdev 00:12:26.254 ************************************ 00:12:26.254 12:32:08 -- common/autotest_common.sh@1104 -- # test_construct_two_lvs_on_the_same_bdev 
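Two details from the construct_lvs tests that finish above are easy to verify by hand. First, the capacity figures are plain arithmetic on the common.sh defaults: a 124 MiB usable capacity with 4 MiB clusters gives the 31 total_data_clusters reported by bdev_lvol_get_lvstores, and 31 * 4194304 is the 130023424 bytes the test compares (the rest of the 128 MiB Malloc0 is presumably consumed by lvstore metadata). Second, creating an lvstore on a bdev name that does not exist must fail with -19, No such device. A sketch against a running target:

    # expected cluster count and capacity for the default 124 MiB / 4 MiB layout
    echo $(( 124 / 4 ))          # 31 data clusters
    echo $(( 31 * 4194304 ))     # 130023424 bytes

    # NotMalloc is not a bdev, so lvstore creation fails with -19 (No such device)
    ./scripts/rpc.py bdev_lvol_create_lvstore NotMalloc lvs_test || echo 'failed as expected'
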
00:12:26.254 12:32:08 -- lvol/basic.sh@60 -- # rpc_cmd bdev_malloc_create 128 512 00:12:26.254 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.254 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.513 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.513 12:32:08 -- lvol/basic.sh@60 -- # malloc_name=Malloc1 00:12:26.513 12:32:08 -- lvol/basic.sh@61 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test 00:12:26.513 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.513 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.513 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.513 12:32:08 -- lvol/basic.sh@61 -- # lvs_uuid=12f2f053-32a4-4102-8cae-26f0ad85f6e8 00:12:26.513 12:32:08 -- lvol/basic.sh@64 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test2 00:12:26.513 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.513 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.513 [2024-10-01 12:32:08.916814] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc1 already claimed: type read_many_write_one by module lvol 00:12:26.513 [2024-10-01 12:32:08.917082] vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:12:26.513 request: 00:12:26.513 { 00:12:26.513 "bdev_name": "Malloc1", 00:12:26.513 "lvs_name": "lvs_test2", 00:12:26.513 "method": "bdev_lvol_create_lvstore", 00:12:26.513 "req_id": 1 00:12:26.513 } 00:12:26.513 Got JSON-RPC error response 00:12:26.513 response: 00:12:26.513 { 00:12:26.513 "code": -1, 00:12:26.513 "message": "Operation not permitted" 00:12:26.513 } 00:12:26.513 12:32:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:26.513 12:32:08 -- lvol/basic.sh@67 -- # rpc_cmd bdev_lvol_delete_lvstore -u 12f2f053-32a4-4102-8cae-26f0ad85f6e8 00:12:26.513 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.513 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.513 12:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.513 12:32:08 -- lvol/basic.sh@68 -- # rpc_cmd bdev_lvol_get_lvstores -u 12f2f053-32a4-4102-8cae-26f0ad85f6e8 00:12:26.513 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.513 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.513 request: 00:12:26.513 { 00:12:26.513 "uuid": "12f2f053-32a4-4102-8cae-26f0ad85f6e8", 00:12:26.513 "method": "bdev_lvol_get_lvstores", 00:12:26.513 "req_id": 1 00:12:26.513 } 00:12:26.513 Got JSON-RPC error response 00:12:26.513 response: 00:12:26.513 { 00:12:26.513 "code": -19, 00:12:26.513 "message": "No such device" 00:12:26.513 } 00:12:26.513 12:32:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:26.513 12:32:08 -- lvol/basic.sh@69 -- # rpc_cmd bdev_malloc_delete Malloc1 00:12:26.513 12:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.513 12:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.770 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.770 12:32:09 -- lvol/basic.sh@70 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:26.770 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.770 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:26.770 [2024-10-01 12:32:09.261169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:26.770 request: 00:12:26.770 { 00:12:26.770 "name": "Malloc1", 00:12:26.770 "method": "bdev_get_bdevs", 00:12:26.770 "req_id": 1 00:12:26.770 } 00:12:26.770 Got JSON-RPC error 
response 00:12:26.770 response: 00:12:26.770 { 00:12:26.770 "code": -19, 00:12:26.770 "message": "No such device" 00:12:26.770 } 00:12:26.770 12:32:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:26.770 12:32:09 -- lvol/basic.sh@71 -- # check_leftover_devices 00:12:26.770 12:32:09 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:26.770 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.770 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:26.770 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.770 12:32:09 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:26.771 12:32:09 -- lvol/common.sh@26 -- # jq length 00:12:27.030 12:32:09 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:27.030 12:32:09 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:27.030 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.030 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.030 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.030 12:32:09 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:27.030 12:32:09 -- lvol/common.sh@28 -- # jq length 00:12:27.030 ************************************ 00:12:27.030 END TEST test_construct_two_lvs_on_the_same_bdev 00:12:27.030 ************************************ 00:12:27.030 12:32:09 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:27.030 00:12:27.030 real 0m0.649s 00:12:27.030 user 0m0.127s 00:12:27.030 sys 0m0.023s 00:12:27.030 12:32:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.030 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.030 12:32:09 -- lvol/basic.sh@579 -- # run_test test_construct_lvs_conflict_alias test_construct_lvs_conflict_alias 00:12:27.030 12:32:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:27.030 12:32:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.030 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.030 ************************************ 00:12:27.030 START TEST test_construct_lvs_conflict_alias 00:12:27.030 ************************************ 00:12:27.030 12:32:09 -- common/autotest_common.sh@1104 -- # test_construct_lvs_conflict_alias 00:12:27.030 12:32:09 -- lvol/basic.sh@77 -- # rpc_cmd bdev_malloc_create 128 512 00:12:27.030 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.030 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.290 12:32:09 -- lvol/basic.sh@77 -- # malloc1_name=Malloc2 00:12:27.290 12:32:09 -- lvol/basic.sh@78 -- # rpc_cmd bdev_lvol_create_lvstore Malloc2 lvs_test 00:12:27.290 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.290 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.290 12:32:09 -- lvol/basic.sh@78 -- # lvs1_uuid=6f08007a-d4be-4ce2-8838-635cba9fd7a2 00:12:27.290 12:32:09 -- lvol/basic.sh@81 -- # rpc_cmd bdev_malloc_create 128 512 00:12:27.290 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.290 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.290 12:32:09 -- lvol/basic.sh@81 -- # malloc2_name=Malloc3 00:12:27.290 12:32:09 -- lvol/basic.sh@82 -- # rpc_cmd bdev_lvol_create_lvstore Malloc3 lvs_test 00:12:27.290 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.290 12:32:09 -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.290 [2024-10-01 12:32:09.763359] lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name lvs_test already exists 00:12:27.290 request: 00:12:27.290 { 00:12:27.290 "bdev_name": "Malloc3", 00:12:27.290 "lvs_name": "lvs_test", 00:12:27.290 "method": "bdev_lvol_create_lvstore", 00:12:27.290 "req_id": 1 00:12:27.290 } 00:12:27.290 Got JSON-RPC error response 00:12:27.290 response: 00:12:27.290 { 00:12:27.290 "code": -17, 00:12:27.290 "message": "File exists" 00:12:27.290 } 00:12:27.290 12:32:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:27.290 12:32:09 -- lvol/basic.sh@85 -- # rpc_cmd bdev_lvol_delete_lvstore -u 6f08007a-d4be-4ce2-8838-635cba9fd7a2 00:12:27.290 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.290 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 12:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.290 12:32:09 -- lvol/basic.sh@86 -- # rpc_cmd bdev_lvol_get_lvstores -u 6f08007a-d4be-4ce2-8838-635cba9fd7a2 00:12:27.290 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.290 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.290 request: 00:12:27.290 { 00:12:27.290 "uuid": "6f08007a-d4be-4ce2-8838-635cba9fd7a2", 00:12:27.290 "method": "bdev_lvol_get_lvstores", 00:12:27.290 "req_id": 1 00:12:27.290 } 00:12:27.290 Got JSON-RPC error response 00:12:27.290 response: 00:12:27.290 { 00:12:27.290 "code": -19, 00:12:27.290 "message": "No such device" 00:12:27.290 } 00:12:27.290 12:32:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:27.290 12:32:09 -- lvol/basic.sh@87 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:27.290 12:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.290 12:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:27.857 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.857 12:32:10 -- lvol/basic.sh@88 -- # rpc_cmd bdev_malloc_delete Malloc3 00:12:27.857 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.857 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.116 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.116 12:32:10 -- lvol/basic.sh@89 -- # check_leftover_devices 00:12:28.116 12:32:10 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:28.116 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.116 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.116 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.116 12:32:10 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:28.116 12:32:10 -- lvol/common.sh@26 -- # jq length 00:12:28.116 12:32:10 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:28.116 12:32:10 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:28.116 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.116 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.116 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.116 12:32:10 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:28.116 12:32:10 -- lvol/common.sh@28 -- # jq length 00:12:28.116 ************************************ 00:12:28.116 END TEST test_construct_lvs_conflict_alias 00:12:28.116 ************************************ 00:12:28.116 12:32:10 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:28.116 00:12:28.116 real 0m1.090s 00:12:28.116 user 0m0.129s 00:12:28.116 sys 0m0.022s 00:12:28.116 12:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 
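The conflict test wrapping up here shows that lvstore names form a global namespace rather than a per-bdev one: once lvs_test exists on Malloc2, asking for another store with the same name on Malloc3 is rejected with -17, File exists, even though Malloc3 itself is unclaimed. A hedged manual reproduction (the MallocN names are assigned by the target, so they may differ on another run):

    ./scripts/rpc.py bdev_malloc_create 128 512      # first backing bdev, e.g. Malloc2
    ./scripts/rpc.py bdev_malloc_create 128 512      # second backing bdev, e.g. Malloc3

    ./scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_test   # succeeds
    ./scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_test   # fails: File exists (-17)
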
00:12:28.116 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.116 12:32:10 -- lvol/basic.sh@580 -- # run_test test_construct_lvs_different_cluster_size test_construct_lvs_different_cluster_size 00:12:28.116 12:32:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:28.116 12:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:28.117 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 ************************************ 00:12:28.117 START TEST test_construct_lvs_different_cluster_size 00:12:28.117 ************************************ 00:12:28.117 12:32:10 -- common/autotest_common.sh@1104 -- # test_construct_lvs_different_cluster_size 00:12:28.117 12:32:10 -- lvol/basic.sh@96 -- # rpc_cmd bdev_malloc_create 128 512 00:12:28.117 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.117 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.376 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.376 12:32:10 -- lvol/basic.sh@96 -- # malloc1_name=Malloc4 00:12:28.376 12:32:10 -- lvol/basic.sh@97 -- # rpc_cmd bdev_lvol_create_lvstore Malloc4 lvs_test 00:12:28.376 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.376 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.376 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.376 12:32:10 -- lvol/basic.sh@97 -- # lvs1_uuid=15b57d04-2c58-490e-b95a-1adbf25b5b17 00:12:28.376 12:32:10 -- lvol/basic.sh@100 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:28.376 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.376 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.376 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.376 12:32:10 -- lvol/basic.sh@100 -- # lvol_stores='[ 00:12:28.376 { 00:12:28.376 "uuid": "15b57d04-2c58-490e-b95a-1adbf25b5b17", 00:12:28.376 "name": "lvs_test", 00:12:28.376 "base_bdev": "Malloc4", 00:12:28.376 "total_data_clusters": 31, 00:12:28.376 "free_clusters": 31, 00:12:28.376 "block_size": 512, 00:12:28.376 "cluster_size": 4194304 00:12:28.376 } 00:12:28.376 ]' 00:12:28.376 12:32:10 -- lvol/basic.sh@101 -- # jq length 00:12:28.376 12:32:10 -- lvol/basic.sh@101 -- # '[' 1 == 1 ']' 00:12:28.376 12:32:10 -- lvol/basic.sh@104 -- # rpc_cmd bdev_malloc_create 128 512 00:12:28.376 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.376 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.634 12:32:10 -- lvol/basic.sh@104 -- # malloc2_name=Malloc5 00:12:28.634 12:32:10 -- lvol/basic.sh@106 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs2_test -c 1 00:12:28.634 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.634 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 [2024-10-01 12:32:10.942048] lvol.c: 705:spdk_lvs_init: *ERROR*: Cluster size 1 is smaller than blocklen 512 00:12:28.634 request: 00:12:28.634 { 00:12:28.634 "bdev_name": "Malloc5", 00:12:28.634 "lvs_name": "lvs2_test", 00:12:28.634 "cluster_sz": 1, 00:12:28.634 "method": "bdev_lvol_create_lvstore", 00:12:28.634 "req_id": 1 00:12:28.634 } 00:12:28.634 Got JSON-RPC error response 00:12:28.634 response: 00:12:28.634 { 00:12:28.634 "code": -22, 00:12:28.634 "message": "Invalid argument" 00:12:28.634 } 00:12:28.634 12:32:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.634 12:32:10 -- lvol/basic.sh@108 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 
lvs2_test -c 00:12:28.634 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.634 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 usage: rpc.py [options] bdev_lvol_create_lvstore [-h] [-c CLUSTER_SZ] 00:12:28.634 [--clear-method CLEAR_METHOD] 00:12:28.634 [-m MD_PAGES_PER_CLUSTER_RATIO] 00:12:28.634 bdev_name lvs_name 00:12:28.634 rpc.py [options] bdev_lvol_create_lvstore: error: argument -c/--cluster-sz: expected one argument 00:12:28.634 12:32:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.634 12:32:10 -- lvol/basic.sh@110 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs2_test -c -1 00:12:28.634 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.634 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 request: 00:12:28.634 { 00:12:28.634 "bdev_name": "Malloc5", 00:12:28.634 "lvs_name": "lvs2_test", 00:12:28.634 "cluster_sz": -1, 00:12:28.634 "method": "bdev_lvol_create_lvstore", 00:12:28.634 "req_id": 1 00:12:28.634 } 00:12:28.634 Got JSON-RPC error response 00:12:28.634 response: 00:12:28.634 { 00:12:28.634 "code": -32603, 00:12:28.634 "message": "spdk_json_decode_object failed" 00:12:28.634 } 00:12:28.634 12:32:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.634 12:32:10 -- lvol/basic.sh@112 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs2_test -c 8191 00:12:28.634 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.634 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 [2024-10-01 12:32:10.970383] blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:12:28.634 [2024-10-01 12:32:10.970667] vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:12:28.634 [2024-10-01 12:32:10.970930] lvol.c: 602:lvs_init_cb: *ERROR*: Lvol store init failed: could not initialize blobstore 00:12:28.634 request: 00:12:28.634 { 00:12:28.634 "bdev_name": "Malloc5", 00:12:28.634 "lvs_name": "lvs2_test", 00:12:28.634 "cluster_sz": 8191, 00:12:28.634 "method": "bdev_lvol_create_lvstore", 00:12:28.634 "req_id": 1 00:12:28.634 } 00:12:28.634 Got JSON-RPC error response 00:12:28.634 response: 00:12:28.634 { 00:12:28.634 "code": -32602, 00:12:28.634 "message": "Cannot allocate memory" 00:12:28.634 } 00:12:28.634 12:32:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.634 12:32:10 -- lvol/basic.sh@115 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:28.634 12:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.634 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 12:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.634 12:32:10 -- lvol/basic.sh@115 -- # lvol_stores='[ 00:12:28.634 { 00:12:28.634 "uuid": "15b57d04-2c58-490e-b95a-1adbf25b5b17", 00:12:28.634 "name": "lvs_test", 00:12:28.634 "base_bdev": "Malloc4", 00:12:28.634 "total_data_clusters": 31, 00:12:28.634 "free_clusters": 31, 00:12:28.634 "block_size": 512, 00:12:28.634 "cluster_size": 4194304 00:12:28.634 } 00:12:28.634 ]' 00:12:28.634 12:32:10 -- lvol/basic.sh@116 -- # jq length 00:12:28.634 12:32:11 -- lvol/basic.sh@116 -- # '[' 1 == 1 ']' 00:12:28.634 12:32:11 -- lvol/basic.sh@119 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs2_test -c 8192 00:12:28.634 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.634 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.635 12:32:11 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.635 12:32:11 -- lvol/basic.sh@119 -- # lvs2_uuid=9c33d716-70f8-4344-a1e7-1ddb70f8ac5b 00:12:28.635 12:32:11 -- lvol/basic.sh@121 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:28.635 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.635 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.635 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.635 12:32:11 -- lvol/basic.sh@121 -- # lvol_stores='[ 00:12:28.635 { 00:12:28.635 "uuid": "15b57d04-2c58-490e-b95a-1adbf25b5b17", 00:12:28.635 "name": "lvs_test", 00:12:28.635 "base_bdev": "Malloc4", 00:12:28.635 "total_data_clusters": 31, 00:12:28.635 "free_clusters": 31, 00:12:28.635 "block_size": 512, 00:12:28.635 "cluster_size": 4194304 00:12:28.635 }, 00:12:28.635 { 00:12:28.635 "uuid": "9c33d716-70f8-4344-a1e7-1ddb70f8ac5b", 00:12:28.635 "name": "lvs2_test", 00:12:28.635 "base_bdev": "Malloc5", 00:12:28.635 "total_data_clusters": 8190, 00:12:28.635 "free_clusters": 8190, 00:12:28.635 "block_size": 512, 00:12:28.635 "cluster_size": 8192 00:12:28.635 } 00:12:28.635 ]' 00:12:28.635 12:32:11 -- lvol/basic.sh@122 -- # jq length 00:12:28.635 12:32:11 -- lvol/basic.sh@122 -- # '[' 2 == 2 ']' 00:12:28.635 12:32:11 -- lvol/basic.sh@125 -- # rpc_cmd bdev_lvol_delete_lvstore -u 15b57d04-2c58-490e-b95a-1adbf25b5b17 00:12:28.635 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.635 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.635 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.635 12:32:11 -- lvol/basic.sh@126 -- # rpc_cmd bdev_lvol_get_lvstores -u 15b57d04-2c58-490e-b95a-1adbf25b5b17 00:12:28.635 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.635 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.635 request: 00:12:28.635 { 00:12:28.635 "uuid": "15b57d04-2c58-490e-b95a-1adbf25b5b17", 00:12:28.635 "method": "bdev_lvol_get_lvstores", 00:12:28.635 "req_id": 1 00:12:28.635 } 00:12:28.635 Got JSON-RPC error response 00:12:28.635 response: 00:12:28.635 { 00:12:28.635 "code": -19, 00:12:28.635 "message": "No such device" 00:12:28.635 } 00:12:28.635 12:32:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.635 12:32:11 -- lvol/basic.sh@129 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs2_test 00:12:28.635 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.635 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.894 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.894 12:32:11 -- lvol/basic.sh@130 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs2_test 00:12:28.894 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.894 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.894 request: 00:12:28.894 { 00:12:28.894 "lvs_name": "lvs2_test", 00:12:28.894 "method": "bdev_lvol_get_lvstores", 00:12:28.894 "req_id": 1 00:12:28.894 } 00:12:28.894 Got JSON-RPC error response 00:12:28.894 response: 00:12:28.894 { 00:12:28.894 "code": -19, 00:12:28.894 "message": "No such device" 00:12:28.894 } 00:12:28.894 12:32:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.894 12:32:11 -- lvol/basic.sh@131 -- # rpc_cmd bdev_lvol_get_lvstores -u 9c33d716-70f8-4344-a1e7-1ddb70f8ac5b 00:12:28.894 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.894 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.894 request: 00:12:28.894 { 00:12:28.894 "uuid": 
"9c33d716-70f8-4344-a1e7-1ddb70f8ac5b", 00:12:28.894 "method": "bdev_lvol_get_lvstores", 00:12:28.894 "req_id": 1 00:12:28.894 } 00:12:28.894 Got JSON-RPC error response 00:12:28.894 response: 00:12:28.894 { 00:12:28.894 "code": -19, 00:12:28.894 "message": "No such device" 00:12:28.894 } 00:12:28.894 12:32:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:28.894 12:32:11 -- lvol/basic.sh@133 -- # rpc_cmd bdev_malloc_delete Malloc4 00:12:28.894 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.894 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.151 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.151 12:32:11 -- lvol/basic.sh@134 -- # rpc_cmd bdev_malloc_delete Malloc5 00:12:29.151 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.151 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.431 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.431 12:32:11 -- lvol/basic.sh@135 -- # check_leftover_devices 00:12:29.431 12:32:11 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:29.431 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.431 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.431 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.431 12:32:11 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:29.431 12:32:11 -- lvol/common.sh@26 -- # jq length 00:12:29.431 12:32:11 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:29.431 12:32:11 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:29.431 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.431 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.431 12:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.431 12:32:11 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:29.431 12:32:11 -- lvol/common.sh@28 -- # jq length 00:12:29.431 ************************************ 00:12:29.431 END TEST test_construct_lvs_different_cluster_size 00:12:29.431 ************************************ 00:12:29.431 12:32:11 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:29.431 00:12:29.431 real 0m1.330s 00:12:29.431 user 0m0.279s 00:12:29.431 sys 0m0.051s 00:12:29.431 12:32:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.431 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 12:32:11 -- lvol/basic.sh@581 -- # run_test test_construct_lvs_clear_methods test_construct_lvs_clear_methods 00:12:29.688 12:32:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:29.688 12:32:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.688 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 ************************************ 00:12:29.688 START TEST test_construct_lvs_clear_methods 00:12:29.688 ************************************ 00:12:29.688 12:32:11 -- common/autotest_common.sh@1104 -- # test_construct_lvs_clear_methods 00:12:29.688 12:32:11 -- lvol/basic.sh@140 -- # rpc_cmd bdev_malloc_create 128 512 00:12:29.688 12:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.688 12:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.688 12:32:12 -- lvol/basic.sh@140 -- # malloc_name=Malloc6 00:12:29.688 12:32:12 -- lvol/basic.sh@143 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs2_test --clear-method invalid123 00:12:29.688 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.688 
12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 request: 00:12:29.688 { 00:12:29.688 "bdev_name": "Malloc5", 00:12:29.688 "lvs_name": "lvs2_test", 00:12:29.688 "clear_method": "invalid123", 00:12:29.688 "method": "bdev_lvol_create_lvstore", 00:12:29.688 "req_id": 1 00:12:29.688 } 00:12:29.688 Got JSON-RPC error response 00:12:29.688 response: 00:12:29.688 { 00:12:29.688 "code": -22, 00:12:29.688 "message": "Invalid clear_method parameter" 00:12:29.688 } 00:12:29.688 12:32:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:29.688 12:32:12 -- lvol/basic.sh@146 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:29.688 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.688 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.688 12:32:12 -- lvol/basic.sh@146 -- # lvol_stores='[]' 00:12:29.688 12:32:12 -- lvol/basic.sh@147 -- # jq length 00:12:29.688 12:32:12 -- lvol/basic.sh@147 -- # '[' 0 == 0 ']' 00:12:29.688 12:32:12 -- lvol/basic.sh@149 -- # methods='none unmap write_zeroes' 00:12:29.688 12:32:12 -- lvol/basic.sh@150 -- # for clear_method in $methods 00:12:29.688 12:32:12 -- lvol/basic.sh@151 -- # rpc_cmd bdev_lvol_create_lvstore Malloc6 lvs_test --clear-method none 00:12:29.688 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.688 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.688 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.688 12:32:12 -- lvol/basic.sh@151 -- # lvs_uuid=efdd606c-7e17-492d-aa49-89978a18b968 00:12:29.688 12:32:12 -- lvol/basic.sh@154 -- # rpc_cmd bdev_lvol_create -u efdd606c-7e17-492d-aa49-89978a18b968 lvol_test 124 00:12:29.688 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.688 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.946 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.946 12:32:12 -- lvol/basic.sh@154 -- # lvol_uuid=d9cc675d-d33b-48c7-8af7-dd327203a90d 00:12:29.946 12:32:12 -- lvol/basic.sh@155 -- # rpc_cmd bdev_get_bdevs -b d9cc675d-d33b-48c7-8af7-dd327203a90d 00:12:29.946 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.946 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.946 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.946 12:32:12 -- lvol/basic.sh@155 -- # lvol='[ 00:12:29.946 { 00:12:29.946 "name": "d9cc675d-d33b-48c7-8af7-dd327203a90d", 00:12:29.946 "aliases": [ 00:12:29.946 "lvs_test/lvol_test" 00:12:29.946 ], 00:12:29.946 "product_name": "Logical Volume", 00:12:29.946 "block_size": 512, 00:12:29.946 "num_blocks": 253952, 00:12:29.946 "uuid": "d9cc675d-d33b-48c7-8af7-dd327203a90d", 00:12:29.946 "assigned_rate_limits": { 00:12:29.946 "rw_ios_per_sec": 0, 00:12:29.946 "rw_mbytes_per_sec": 0, 00:12:29.946 "r_mbytes_per_sec": 0, 00:12:29.946 "w_mbytes_per_sec": 0 00:12:29.946 }, 00:12:29.946 "claimed": false, 00:12:29.946 "zoned": false, 00:12:29.946 "supported_io_types": { 00:12:29.946 "read": true, 00:12:29.946 "write": true, 00:12:29.946 "unmap": true, 00:12:29.946 "write_zeroes": true, 00:12:29.946 "flush": false, 00:12:29.946 "reset": true, 00:12:29.946 "compare": false, 00:12:29.946 "compare_and_write": false, 00:12:29.946 "abort": false, 00:12:29.946 "nvme_admin": false, 00:12:29.946 "nvme_io": false 00:12:29.946 }, 00:12:29.946 "memory_domains": [ 00:12:29.946 { 00:12:29.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.946 "dma_device_type": 2 
00:12:29.946 } 00:12:29.946 ], 00:12:29.946 "driver_specific": { 00:12:29.946 "lvol": { 00:12:29.946 "lvol_store_uuid": "efdd606c-7e17-492d-aa49-89978a18b968", 00:12:29.946 "base_bdev": "Malloc6", 00:12:29.946 "thin_provision": false, 00:12:29.946 "snapshot": false, 00:12:29.946 "clone": false, 00:12:29.946 "esnap_clone": false 00:12:29.946 } 00:12:29.946 } 00:12:29.946 } 00:12:29.946 ]' 00:12:29.946 12:32:12 -- lvol/basic.sh@156 -- # jq -r '.[0].name' 00:12:29.946 12:32:12 -- lvol/basic.sh@156 -- # '[' d9cc675d-d33b-48c7-8af7-dd327203a90d = d9cc675d-d33b-48c7-8af7-dd327203a90d ']' 00:12:29.946 12:32:12 -- lvol/basic.sh@157 -- # jq -r '.[0].uuid' 00:12:29.946 12:32:12 -- lvol/basic.sh@157 -- # '[' d9cc675d-d33b-48c7-8af7-dd327203a90d = d9cc675d-d33b-48c7-8af7-dd327203a90d ']' 00:12:29.946 12:32:12 -- lvol/basic.sh@158 -- # jq -r '.[0].aliases[0]' 00:12:29.946 12:32:12 -- lvol/basic.sh@158 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:12:29.946 12:32:12 -- lvol/basic.sh@159 -- # jq -r '.[0].block_size' 00:12:29.946 12:32:12 -- lvol/basic.sh@159 -- # '[' 512 = 512 ']' 00:12:29.946 12:32:12 -- lvol/basic.sh@160 -- # jq -r '.[0].num_blocks' 00:12:30.203 12:32:12 -- lvol/basic.sh@160 -- # '[' 253952 = 253952 ']' 00:12:30.203 12:32:12 -- lvol/basic.sh@163 -- # rpc_cmd bdev_lvol_delete d9cc675d-d33b-48c7-8af7-dd327203a90d 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@164 -- # rpc_cmd bdev_get_bdevs -b d9cc675d-d33b-48c7-8af7-dd327203a90d 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 [2024-10-01 12:32:12.513991] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: d9cc675d-d33b-48c7-8af7-dd327203a90d 00:12:30.203 request: 00:12:30.203 { 00:12:30.203 "name": "d9cc675d-d33b-48c7-8af7-dd327203a90d", 00:12:30.203 "method": "bdev_get_bdevs", 00:12:30.203 "req_id": 1 00:12:30.203 } 00:12:30.203 Got JSON-RPC error response 00:12:30.203 response: 00:12:30.203 { 00:12:30.203 "code": -19, 00:12:30.203 "message": "No such device" 00:12:30.203 } 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@165 -- # rpc_cmd bdev_lvol_delete_lvstore -u efdd606c-7e17-492d-aa49-89978a18b968 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@166 -- # rpc_cmd bdev_lvol_get_lvstores -u efdd606c-7e17-492d-aa49-89978a18b968 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 request: 00:12:30.203 { 00:12:30.203 "uuid": "efdd606c-7e17-492d-aa49-89978a18b968", 00:12:30.203 "method": "bdev_lvol_get_lvstores", 00:12:30.203 "req_id": 1 00:12:30.203 } 00:12:30.203 Got JSON-RPC error response 00:12:30.203 response: 00:12:30.203 { 00:12:30.203 "code": -19, 00:12:30.203 "message": "No such device" 00:12:30.203 } 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@150 -- # for clear_method in $methods 00:12:30.203 12:32:12 -- lvol/basic.sh@151 -- # rpc_cmd 
bdev_lvol_create_lvstore Malloc6 lvs_test --clear-method unmap 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@151 -- # lvs_uuid=01c5a71f-7251-4a92-a13f-23d28e2eb980 00:12:30.203 12:32:12 -- lvol/basic.sh@154 -- # rpc_cmd bdev_lvol_create -u 01c5a71f-7251-4a92-a13f-23d28e2eb980 lvol_test 124 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@154 -- # lvol_uuid=c525af68-dea4-4163-9d73-077a95281e92 00:12:30.203 12:32:12 -- lvol/basic.sh@155 -- # rpc_cmd bdev_get_bdevs -b c525af68-dea4-4163-9d73-077a95281e92 00:12:30.203 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.203 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.203 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.203 12:32:12 -- lvol/basic.sh@155 -- # lvol='[ 00:12:30.203 { 00:12:30.203 "name": "c525af68-dea4-4163-9d73-077a95281e92", 00:12:30.203 "aliases": [ 00:12:30.203 "lvs_test/lvol_test" 00:12:30.203 ], 00:12:30.203 "product_name": "Logical Volume", 00:12:30.203 "block_size": 512, 00:12:30.203 "num_blocks": 253952, 00:12:30.203 "uuid": "c525af68-dea4-4163-9d73-077a95281e92", 00:12:30.203 "assigned_rate_limits": { 00:12:30.203 "rw_ios_per_sec": 0, 00:12:30.203 "rw_mbytes_per_sec": 0, 00:12:30.203 "r_mbytes_per_sec": 0, 00:12:30.203 "w_mbytes_per_sec": 0 00:12:30.203 }, 00:12:30.203 "claimed": false, 00:12:30.203 "zoned": false, 00:12:30.203 "supported_io_types": { 00:12:30.203 "read": true, 00:12:30.203 "write": true, 00:12:30.203 "unmap": true, 00:12:30.203 "write_zeroes": true, 00:12:30.203 "flush": false, 00:12:30.203 "reset": true, 00:12:30.203 "compare": false, 00:12:30.203 "compare_and_write": false, 00:12:30.203 "abort": false, 00:12:30.203 "nvme_admin": false, 00:12:30.204 "nvme_io": false 00:12:30.204 }, 00:12:30.204 "memory_domains": [ 00:12:30.204 { 00:12:30.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.204 "dma_device_type": 2 00:12:30.204 } 00:12:30.204 ], 00:12:30.204 "driver_specific": { 00:12:30.204 "lvol": { 00:12:30.204 "lvol_store_uuid": "01c5a71f-7251-4a92-a13f-23d28e2eb980", 00:12:30.204 "base_bdev": "Malloc6", 00:12:30.204 "thin_provision": false, 00:12:30.204 "snapshot": false, 00:12:30.204 "clone": false, 00:12:30.204 "esnap_clone": false 00:12:30.204 } 00:12:30.204 } 00:12:30.204 } 00:12:30.204 ]' 00:12:30.204 12:32:12 -- lvol/basic.sh@156 -- # jq -r '.[0].name' 00:12:30.204 12:32:12 -- lvol/basic.sh@156 -- # '[' c525af68-dea4-4163-9d73-077a95281e92 = c525af68-dea4-4163-9d73-077a95281e92 ']' 00:12:30.204 12:32:12 -- lvol/basic.sh@157 -- # jq -r '.[0].uuid' 00:12:30.204 12:32:12 -- lvol/basic.sh@157 -- # '[' c525af68-dea4-4163-9d73-077a95281e92 = c525af68-dea4-4163-9d73-077a95281e92 ']' 00:12:30.204 12:32:12 -- lvol/basic.sh@158 -- # jq -r '.[0].aliases[0]' 00:12:30.461 12:32:12 -- lvol/basic.sh@158 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:12:30.461 12:32:12 -- lvol/basic.sh@159 -- # jq -r '.[0].block_size' 00:12:30.461 12:32:12 -- lvol/basic.sh@159 -- # '[' 512 = 512 ']' 00:12:30.461 12:32:12 -- lvol/basic.sh@160 -- # jq -r '.[0].num_blocks' 00:12:30.461 12:32:12 -- lvol/basic.sh@160 -- # '[' 253952 = 253952 ']' 00:12:30.461 
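Each pass of the clear-method loop (none, unmap, write_zeroes) creates a 124 MiB lvol on a 512-byte-block store, so the num_blocks value asserted just above is simply 124 * 1024 * 1024 / 512 = 253952. The create-and-inspect pair for the unmap iteration, sketched with placeholder UUIDs since the real ones differ on every run:

    ./scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_test --clear-method unmap
    ./scripts/rpc.py bdev_lvol_create -u <lvstore-uuid> lvol_test 124           # 124 MiB volume
    ./scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> | jq -r '.[0].num_blocks'    # 253952
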
12:32:12 -- lvol/basic.sh@163 -- # rpc_cmd bdev_lvol_delete c525af68-dea4-4163-9d73-077a95281e92 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@164 -- # rpc_cmd bdev_get_bdevs -b c525af68-dea4-4163-9d73-077a95281e92 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 [2024-10-01 12:32:12.881947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: c525af68-dea4-4163-9d73-077a95281e92 00:12:30.461 request: 00:12:30.461 { 00:12:30.461 "name": "c525af68-dea4-4163-9d73-077a95281e92", 00:12:30.461 "method": "bdev_get_bdevs", 00:12:30.461 "req_id": 1 00:12:30.461 } 00:12:30.461 Got JSON-RPC error response 00:12:30.461 response: 00:12:30.461 { 00:12:30.461 "code": -19, 00:12:30.461 "message": "No such device" 00:12:30.461 } 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@165 -- # rpc_cmd bdev_lvol_delete_lvstore -u 01c5a71f-7251-4a92-a13f-23d28e2eb980 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@166 -- # rpc_cmd bdev_lvol_get_lvstores -u 01c5a71f-7251-4a92-a13f-23d28e2eb980 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 request: 00:12:30.461 { 00:12:30.461 "uuid": "01c5a71f-7251-4a92-a13f-23d28e2eb980", 00:12:30.461 "method": "bdev_lvol_get_lvstores", 00:12:30.461 "req_id": 1 00:12:30.461 } 00:12:30.461 Got JSON-RPC error response 00:12:30.461 response: 00:12:30.461 { 00:12:30.461 "code": -19, 00:12:30.461 "message": "No such device" 00:12:30.461 } 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@150 -- # for clear_method in $methods 00:12:30.461 12:32:12 -- lvol/basic.sh@151 -- # rpc_cmd bdev_lvol_create_lvstore Malloc6 lvs_test --clear-method write_zeroes 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@151 -- # lvs_uuid=777c260e-a436-462c-aa3a-aceb8b711f66 00:12:30.461 12:32:12 -- lvol/basic.sh@154 -- # rpc_cmd bdev_lvol_create -u 777c260e-a436-462c-aa3a-aceb8b711f66 lvol_test 124 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@154 -- # lvol_uuid=1f755457-58c9-4efe-8a1c-e5d91271da8b 00:12:30.461 12:32:12 -- lvol/basic.sh@155 -- # rpc_cmd bdev_get_bdevs -b 1f755457-58c9-4efe-8a1c-e5d91271da8b 00:12:30.461 12:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.461 12:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.461 12:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.461 12:32:12 -- lvol/basic.sh@155 -- # lvol='[ 00:12:30.461 { 00:12:30.461 "name": "1f755457-58c9-4efe-8a1c-e5d91271da8b", 
00:12:30.461 "aliases": [ 00:12:30.461 "lvs_test/lvol_test" 00:12:30.461 ], 00:12:30.461 "product_name": "Logical Volume", 00:12:30.461 "block_size": 512, 00:12:30.461 "num_blocks": 253952, 00:12:30.461 "uuid": "1f755457-58c9-4efe-8a1c-e5d91271da8b", 00:12:30.461 "assigned_rate_limits": { 00:12:30.461 "rw_ios_per_sec": 0, 00:12:30.461 "rw_mbytes_per_sec": 0, 00:12:30.461 "r_mbytes_per_sec": 0, 00:12:30.461 "w_mbytes_per_sec": 0 00:12:30.461 }, 00:12:30.461 "claimed": false, 00:12:30.461 "zoned": false, 00:12:30.461 "supported_io_types": { 00:12:30.461 "read": true, 00:12:30.461 "write": true, 00:12:30.461 "unmap": true, 00:12:30.461 "write_zeroes": true, 00:12:30.461 "flush": false, 00:12:30.461 "reset": true, 00:12:30.461 "compare": false, 00:12:30.461 "compare_and_write": false, 00:12:30.461 "abort": false, 00:12:30.461 "nvme_admin": false, 00:12:30.461 "nvme_io": false 00:12:30.461 }, 00:12:30.461 "memory_domains": [ 00:12:30.461 { 00:12:30.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.461 "dma_device_type": 2 00:12:30.461 } 00:12:30.461 ], 00:12:30.461 "driver_specific": { 00:12:30.461 "lvol": { 00:12:30.461 "lvol_store_uuid": "777c260e-a436-462c-aa3a-aceb8b711f66", 00:12:30.461 "base_bdev": "Malloc6", 00:12:30.461 "thin_provision": false, 00:12:30.461 "snapshot": false, 00:12:30.461 "clone": false, 00:12:30.461 "esnap_clone": false 00:12:30.461 } 00:12:30.461 } 00:12:30.461 } 00:12:30.461 ]' 00:12:30.461 12:32:12 -- lvol/basic.sh@156 -- # jq -r '.[0].name' 00:12:30.718 12:32:13 -- lvol/basic.sh@156 -- # '[' 1f755457-58c9-4efe-8a1c-e5d91271da8b = 1f755457-58c9-4efe-8a1c-e5d91271da8b ']' 00:12:30.718 12:32:13 -- lvol/basic.sh@157 -- # jq -r '.[0].uuid' 00:12:30.718 12:32:13 -- lvol/basic.sh@157 -- # '[' 1f755457-58c9-4efe-8a1c-e5d91271da8b = 1f755457-58c9-4efe-8a1c-e5d91271da8b ']' 00:12:30.718 12:32:13 -- lvol/basic.sh@158 -- # jq -r '.[0].aliases[0]' 00:12:30.718 12:32:13 -- lvol/basic.sh@158 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:12:30.718 12:32:13 -- lvol/basic.sh@159 -- # jq -r '.[0].block_size' 00:12:30.718 12:32:13 -- lvol/basic.sh@159 -- # '[' 512 = 512 ']' 00:12:30.718 12:32:13 -- lvol/basic.sh@160 -- # jq -r '.[0].num_blocks' 00:12:30.718 12:32:13 -- lvol/basic.sh@160 -- # '[' 253952 = 253952 ']' 00:12:30.718 12:32:13 -- lvol/basic.sh@163 -- # rpc_cmd bdev_lvol_delete 1f755457-58c9-4efe-8a1c-e5d91271da8b 00:12:30.718 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.718 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:30.976 12:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.976 12:32:13 -- lvol/basic.sh@164 -- # rpc_cmd bdev_get_bdevs -b 1f755457-58c9-4efe-8a1c-e5d91271da8b 00:12:30.976 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.976 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:30.976 [2024-10-01 12:32:13.253459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1f755457-58c9-4efe-8a1c-e5d91271da8b 00:12:30.976 request: 00:12:30.976 { 00:12:30.976 "name": "1f755457-58c9-4efe-8a1c-e5d91271da8b", 00:12:30.976 "method": "bdev_get_bdevs", 00:12:30.976 "req_id": 1 00:12:30.976 } 00:12:30.976 Got JSON-RPC error response 00:12:30.976 response: 00:12:30.976 { 00:12:30.976 "code": -19, 00:12:30.976 "message": "No such device" 00:12:30.976 } 00:12:30.976 12:32:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:30.976 12:32:13 -- lvol/basic.sh@165 -- # rpc_cmd bdev_lvol_delete_lvstore -u 777c260e-a436-462c-aa3a-aceb8b711f66 00:12:30.976 
12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.976 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:30.976 12:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.976 12:32:13 -- lvol/basic.sh@166 -- # rpc_cmd bdev_lvol_get_lvstores -u 777c260e-a436-462c-aa3a-aceb8b711f66 00:12:30.976 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.976 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:30.976 request: 00:12:30.976 { 00:12:30.976 "uuid": "777c260e-a436-462c-aa3a-aceb8b711f66", 00:12:30.976 "method": "bdev_lvol_get_lvstores", 00:12:30.976 "req_id": 1 00:12:30.976 } 00:12:30.976 Got JSON-RPC error response 00:12:30.976 response: 00:12:30.976 { 00:12:30.976 "code": -19, 00:12:30.976 "message": "No such device" 00:12:30.976 } 00:12:30.976 12:32:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:30.976 12:32:13 -- lvol/basic.sh@168 -- # rpc_cmd bdev_malloc_delete Malloc6 00:12:30.976 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.976 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.233 12:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.233 12:32:13 -- lvol/basic.sh@169 -- # check_leftover_devices 00:12:31.233 12:32:13 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:31.233 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.233 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.233 12:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.233 12:32:13 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:31.233 12:32:13 -- lvol/common.sh@26 -- # jq length 00:12:31.233 12:32:13 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:31.233 12:32:13 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:31.233 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.233 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.233 12:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.233 12:32:13 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:31.233 12:32:13 -- lvol/common.sh@28 -- # jq length 00:12:31.233 ************************************ 00:12:31.233 END TEST test_construct_lvs_clear_methods 00:12:31.233 ************************************ 00:12:31.233 12:32:13 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:31.233 00:12:31.233 real 0m1.724s 00:12:31.233 user 0m0.898s 00:12:31.233 sys 0m0.109s 00:12:31.233 12:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.233 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.233 12:32:13 -- lvol/basic.sh@582 -- # run_test test_construct_lvol_fio_clear_method_none test_construct_lvol_fio_clear_method_none 00:12:31.233 12:32:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:31.233 12:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:31.233 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.496 ************************************ 00:12:31.496 START TEST test_construct_lvol_fio_clear_method_none 00:12:31.497 ************************************ 00:12:31.497 12:32:13 -- common/autotest_common.sh@1104 -- # test_construct_lvol_fio_clear_method_none 00:12:31.497 12:32:13 -- lvol/basic.sh@174 -- # local nbd_name=/dev/nbd0 00:12:31.497 12:32:13 -- lvol/basic.sh@175 -- # local clear_method=none 00:12:31.497 12:32:13 -- lvol/basic.sh@177 -- # local lvstore_name=lvs_test lvstore_uuid 00:12:31.497 12:32:13 -- lvol/basic.sh@178 -- # local lvol_name=lvol_test lvol_uuid 00:12:31.497 
12:32:13 -- lvol/basic.sh@179 -- # local malloc_dev 00:12:31.497 12:32:13 -- lvol/basic.sh@181 -- # rpc_cmd bdev_malloc_create 256 512 00:12:31.497 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.497 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.497 12:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.497 12:32:13 -- lvol/basic.sh@181 -- # malloc_dev=Malloc7 00:12:31.497 12:32:13 -- lvol/basic.sh@182 -- # rpc_cmd bdev_lvol_create_lvstore Malloc7 lvs_test 00:12:31.497 12:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.497 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 12:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.761 12:32:14 -- lvol/basic.sh@182 -- # lvstore_uuid=b03fcc2e-47af-4b74-9f27-120c8a951f4b 00:12:31.761 12:32:14 -- lvol/basic.sh@184 -- # get_lvs_jq bdev_lvol_get_lvstores -u b03fcc2e-47af-4b74-9f27-120c8a951f4b 00:12:31.761 12:32:14 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u b03fcc2e-47af-4b74-9f27-120c8a951f4b 00:12:31.761 12:32:14 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:12:31.761 12:32:14 -- common/autotest_common.sh@585 -- # jq_out=() 00:12:31.761 12:32:14 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:12:31.761 12:32:14 -- common/autotest_common.sh@586 -- # local jq val 00:12:31.761 12:32:14 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:12:31.761 12:32:14 -- common/autotest_common.sh@596 -- # local lvs 00:12:31.761 12:32:14 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:12:31.761 12:32:14 -- common/autotest_common.sh@611 -- # local bdev 00:12:31.761 12:32:14 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:12:31.761 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.761 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:12:31.761 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.761 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:12:31.761 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.761 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:12:31.761 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.761 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:12:31.761 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.761 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:12:31.761 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.762 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," 
",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:12:31.762 12:32:14 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:31.762 12:32:14 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:12:31.762 12:32:14 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:12:31.762 12:32:14 -- common/autotest_common.sh@620 -- # shift 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u b03fcc2e-47af-4b74-9f27-120c8a951f4b 00:12:31.762 12:32:14 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:12:31.762 12:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.762 12:32:14 -- common/autotest_common.sh@10 -- # set +x 00:12:31.762 12:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=b03fcc2e-47af-4b74-9f27-120c8a951f4b 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=Malloc7 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:12:31.762 12:32:14 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:31.762 12:32:14 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:12:31.762 12:32:14 -- lvol/basic.sh@190 -- # rpc_cmd bdev_lvol_create -c none -u b03fcc2e-47af-4b74-9f27-120c8a951f4b lvol_test 4 00:12:31.762 12:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.762 12:32:14 -- common/autotest_common.sh@10 -- # set +x 00:12:31.762 12:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.762 12:32:14 -- lvol/basic.sh@190 -- # lvol_uuid=ff89d5e8-7a54-462c-92b4-a8538c01f8c1 00:12:31.762 12:32:14 -- lvol/basic.sh@192 -- # nbd_start_disks /var/tmp/spdk.sock ff89d5e8-7a54-462c-92b4-a8538c01f8c1 /dev/nbd0 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('ff89d5e8-7a54-462c-92b4-a8538c01f8c1') 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.762 12:32:14 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@12 -- # local i 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.762 12:32:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk ff89d5e8-7a54-462c-92b4-a8538c01f8c1 /dev/nbd0 00:12:32.019 /dev/nbd0 00:12:32.019 12:32:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.019 12:32:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.019 12:32:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:32.019 12:32:14 -- common/autotest_common.sh@857 -- # local i 00:12:32.019 12:32:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:32.019 12:32:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:32.019 12:32:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:32.019 12:32:14 -- common/autotest_common.sh@861 -- # break 00:12:32.019 12:32:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:32.019 12:32:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:32.019 12:32:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:12:32.019 1+0 records in 00:12:32.019 1+0 records out 00:12:32.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264631 s, 15.5 MB/s 00:12:32.019 12:32:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:32.019 12:32:14 -- common/autotest_common.sh@874 -- # size=4096 00:12:32.019 12:32:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:32.019 12:32:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:32.019 12:32:14 -- common/autotest_common.sh@877 -- # return 0 00:12:32.019 12:32:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.019 12:32:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.019 12:32:14 -- lvol/basic.sh@193 -- # run_fio_test /dev/nbd0 0 4194304 write 0xdd 00:12:32.019 12:32:14 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:32.019 12:32:14 -- lvol/common.sh@41 -- # local offset=0 00:12:32.019 12:32:14 -- lvol/common.sh@42 -- # local size=4194304 00:12:32.019 12:32:14 -- lvol/common.sh@43 -- # local rw=write 00:12:32.019 12:32:14 -- lvol/common.sh@44 -- # local pattern=0xdd 00:12:32.019 12:32:14 -- lvol/common.sh@45 -- # local extra_params= 00:12:32.019 12:32:14 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:32.019 12:32:14 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:12:32.019 12:32:14 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:32.019 12:32:14 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:32.019 12:32:14 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:12:32.277 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:32.277 fio-3.35 00:12:32.277 Starting 1 process 00:12:32.534 00:12:32.534 fio_test: (groupid=0, 
jobs=1): err= 0: pid=58351: Tue Oct 1 12:32:14 2024 00:12:32.534 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(4096KiB/98msec) 00:12:32.534 clat (usec): min=62, max=488, avg=93.94, stdev=20.92 00:12:32.534 lat (usec): min=62, max=488, avg=94.06, stdev=20.95 00:12:32.534 clat percentiles (usec): 00:12:32.534 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 84], 00:12:32.534 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 92], 00:12:32.534 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 127], 00:12:32.534 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 235], 99.95th=[ 490], 00:12:32.534 | 99.99th=[ 490] 00:12:32.534 write: IOPS=9752, BW=38.1MiB/s (39.9MB/s)(4096KiB/105msec); 0 zone resets 00:12:32.534 clat (usec): min=75, max=274, avg=99.11, stdev=16.05 00:12:32.534 lat (usec): min=75, max=297, avg=100.32, stdev=16.82 00:12:32.534 clat percentiles (usec): 00:12:32.534 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:12:32.534 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 98], 00:12:32.534 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 119], 95.00th=[ 130], 00:12:32.534 | 99.00th=[ 151], 99.50th=[ 169], 99.90th=[ 208], 99.95th=[ 277], 00:12:32.534 | 99.99th=[ 277] 00:12:32.534 lat (usec) : 100=67.14%, 250=32.76%, 500=0.10% 00:12:32.534 cpu : usr=3.96%, sys=9.41%, ctx=2060, majf=0, minf=43 00:12:32.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.534 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.534 00:12:32.534 Run status group 0 (all jobs): 00:12:32.534 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=4096KiB (4194kB), run=98-98msec 00:12:32.534 WRITE: bw=38.1MiB/s (39.9MB/s), 38.1MiB/s-38.1MiB/s (39.9MB/s-39.9MB/s), io=4096KiB (4194kB), run=105-105msec 00:12:32.534 00:12:32.534 Disk stats (read/write): 00:12:32.534 nbd0: ios=394/1024, merge=0/0, ticks=33/90, in_queue=123, util=57.92% 00:12:32.534 12:32:14 -- lvol/basic.sh@194 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:32.534 12:32:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.534 12:32:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:32.534 12:32:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.534 12:32:14 -- bdev/nbd_common.sh@51 -- # local i 00:12:32.534 12:32:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.534 12:32:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@41 -- # break 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.792 12:32:15 -- lvol/basic.sh@196 -- # rpc_cmd bdev_lvol_delete ff89d5e8-7a54-462c-92b4-a8538c01f8c1 00:12:32.792 12:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.792 12:32:15 -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.792 12:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.792 12:32:15 -- lvol/basic.sh@197 -- # rpc_cmd bdev_lvol_delete_lvstore -u b03fcc2e-47af-4b74-9f27-120c8a951f4b 00:12:32.792 12:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:32.792 12:32:15 -- common/autotest_common.sh@10 -- # set +x 00:12:32.792 12:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:32.792 12:32:15 -- lvol/basic.sh@198 -- # nbd_start_disks /var/tmp/spdk.sock Malloc7 /dev/nbd0 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc7') 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@12 -- # local i 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.792 12:32:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk Malloc7 /dev/nbd0 00:12:33.051 /dev/nbd0 00:12:33.051 12:32:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.051 12:32:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.051 12:32:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:33.051 12:32:15 -- common/autotest_common.sh@857 -- # local i 00:12:33.051 12:32:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.051 12:32:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.051 12:32:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:33.051 12:32:15 -- common/autotest_common.sh@861 -- # break 00:12:33.051 12:32:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.051 12:32:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.051 12:32:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:12:33.051 1+0 records in 00:12:33.051 1+0 records out 00:12:33.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279781 s, 14.6 MB/s 00:12:33.051 12:32:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:33.051 12:32:15 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.051 12:32:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:33.051 12:32:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.051 12:32:15 -- common/autotest_common.sh@877 -- # return 0 00:12:33.051 12:32:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.051 12:32:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.051 12:32:15 -- lvol/basic.sh@200 -- # local metadata_pages 00:12:33.051 12:32:15 -- lvol/basic.sh@201 -- # local last_metadata_lba 00:12:33.051 12:32:15 -- lvol/basic.sh@202 -- # local offset_metadata_end 00:12:33.051 12:32:15 -- lvol/basic.sh@203 -- # local last_cluster_of_metadata 00:12:33.051 12:32:15 -- lvol/basic.sh@204 -- # local offset 00:12:33.051 12:32:15 -- lvol/basic.sh@205 -- # local size_metadata_end 00:12:33.051 12:32:15 -- lvol/basic.sh@207 -- # calc '1 + 63 + ceil(5 + ceil(63 / 8) / 4096) * 3' 00:12:33.051 12:32:15 -- lvol/common.sh@57 -- # bc -l 00:12:33.051 12:32:15 -- lvol/basic.sh@207 -- # metadata_pages=79 00:12:33.051 12:32:15 -- 
lvol/basic.sh@209 -- # last_metadata_lba=632 00:12:33.051 12:32:15 -- lvol/basic.sh@210 -- # offset_metadata_end=323584 00:12:33.051 12:32:15 -- lvol/basic.sh@211 -- # calc 'ceil(79 / 4194304 / 4096)' 00:12:33.051 12:32:15 -- lvol/common.sh@57 -- # bc -l 00:12:33.310 12:32:15 -- lvol/basic.sh@211 -- # last_cluster_of_metadata=1 00:12:33.310 12:32:15 -- lvol/basic.sh@212 -- # last_cluster_of_metadata=1 00:12:33.310 12:32:15 -- lvol/basic.sh@213 -- # offset=4194304 00:12:33.310 12:32:15 -- lvol/basic.sh@214 -- # size_metadata_end=3870720 00:12:33.310 12:32:15 -- lvol/basic.sh@217 -- # run_fio_test /dev/nbd0 323584 3870720 read 0x00 00:12:33.310 12:32:15 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:33.310 12:32:15 -- lvol/common.sh@41 -- # local offset=323584 00:12:33.310 12:32:15 -- lvol/common.sh@42 -- # local size=3870720 00:12:33.310 12:32:15 -- lvol/common.sh@43 -- # local rw=read 00:12:33.310 12:32:15 -- lvol/common.sh@44 -- # local pattern=0x00 00:12:33.310 12:32:15 -- lvol/common.sh@45 -- # local extra_params= 00:12:33.310 12:32:15 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:33.310 12:32:15 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:12:33.310 12:32:15 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:12:33.310 12:32:15 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=323584 --size=3870720 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:12:33.310 12:32:15 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=323584 --size=3870720 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:12:33.310 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:33.310 fio-3.35 00:12:33.310 Starting 1 process 00:12:33.569 00:12:33.569 fio_test: (groupid=0, jobs=1): err= 0: pid=58383: Tue Oct 1 12:32:15 2024 00:12:33.569 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(3780KiB/81msec) 00:12:33.569 clat (usec): min=59, max=272, avg=83.38, stdev=12.40 00:12:33.569 lat (usec): min=59, max=273, avg=83.53, stdev=12.42 00:12:33.569 clat percentiles (usec): 00:12:33.569 | 1.00th=[ 62], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 78], 00:12:33.569 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 80], 60.00th=[ 81], 00:12:33.569 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 103], 00:12:33.569 | 99.00th=[ 118], 99.50th=[ 141], 99.90th=[ 273], 99.95th=[ 273], 00:12:33.569 | 99.99th=[ 273] 00:12:33.569 lat (usec) : 100=94.18%, 250=5.71%, 500=0.11% 00:12:33.569 cpu : usr=5.00%, sys=7.50%, ctx=947, majf=0, minf=11 00:12:33.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.569 issued rwts: total=945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.569 00:12:33.569 Run status group 0 (all jobs): 00:12:33.569 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=3780KiB (3871kB), run=81-81msec 00:12:33.569 00:12:33.569 Disk stats (read/write): 00:12:33.569 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:12:33.569 12:32:15 -- lvol/basic.sh@219 -- # run_fio_test /dev/nbd0 4194304 4194304 read 0xdd 
00:12:33.569 12:32:15 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:33.569 12:32:15 -- lvol/common.sh@41 -- # local offset=4194304 00:12:33.569 12:32:15 -- lvol/common.sh@42 -- # local size=4194304 00:12:33.569 12:32:15 -- lvol/common.sh@43 -- # local rw=read 00:12:33.570 12:32:15 -- lvol/common.sh@44 -- # local pattern=0xdd 00:12:33.570 12:32:15 -- lvol/common.sh@45 -- # local extra_params= 00:12:33.570 12:32:15 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:33.570 12:32:15 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:12:33.570 12:32:15 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:33.570 12:32:15 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:33.570 12:32:15 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:12:33.570 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:33.570 fio-3.35 00:12:33.570 Starting 1 process 00:12:33.828 00:12:33.828 fio_test: (groupid=0, jobs=1): err= 0: pid=58392: Tue Oct 1 12:32:16 2024 00:12:33.828 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(4096KiB/81msec) 00:12:33.828 clat (usec): min=52, max=285, avg=76.79, stdev=18.76 00:12:33.828 lat (usec): min=52, max=286, avg=76.92, stdev=18.78 00:12:33.828 clat percentiles (usec): 00:12:33.828 | 1.00th=[ 55], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 58], 00:12:33.828 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 78], 00:12:33.828 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 110], 00:12:33.828 | 99.00th=[ 128], 99.50th=[ 143], 99.90th=[ 217], 99.95th=[ 285], 00:12:33.828 | 99.99th=[ 285] 00:12:33.828 lat (usec) : 100=90.04%, 250=9.86%, 500=0.10% 00:12:33.828 cpu : usr=0.00%, sys=12.50%, ctx=1024, majf=0, minf=9 00:12:33.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.828 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.828 00:12:33.828 Run status group 0 (all jobs): 00:12:33.828 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=4096KiB (4194kB), run=81-81msec 00:12:33.828 00:12:33.828 Disk stats (read/write): 00:12:33.828 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:12:33.828 12:32:16 -- lvol/basic.sh@221 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:33.828 12:32:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.828 12:32:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.828 12:32:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.828 12:32:16 -- bdev/nbd_common.sh@51 -- # local i 00:12:33.828 12:32:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.828 12:32:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@41 -- # break 00:12:34.086 12:32:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.086 12:32:16 -- lvol/basic.sh@222 -- # rpc_cmd bdev_malloc_delete Malloc7 00:12:34.086 12:32:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.086 12:32:16 -- common/autotest_common.sh@10 -- # set +x 00:12:34.652 12:32:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.652 12:32:17 -- lvol/basic.sh@224 -- # check_leftover_devices 00:12:34.652 12:32:17 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:34.652 12:32:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.652 12:32:17 -- common/autotest_common.sh@10 -- # set +x 00:12:34.652 12:32:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.652 12:32:17 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:34.652 12:32:17 -- lvol/common.sh@26 -- # jq length 00:12:34.652 12:32:17 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:34.652 12:32:17 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:34.652 12:32:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.652 12:32:17 -- common/autotest_common.sh@10 -- # set +x 00:12:34.910 12:32:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:34.910 12:32:17 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:34.910 12:32:17 -- lvol/common.sh@28 -- # jq length 00:12:34.910 ************************************ 00:12:34.910 END TEST test_construct_lvol_fio_clear_method_none 00:12:34.910 ************************************ 00:12:34.910 12:32:17 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:34.910 00:12:34.910 real 0m3.470s 00:12:34.910 user 0m1.473s 00:12:34.910 sys 0m0.330s 00:12:34.910 12:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.910 12:32:17 -- common/autotest_common.sh@10 -- # set +x 00:12:34.910 12:32:17 -- lvol/basic.sh@583 -- # run_test test_construct_lvol_fio_clear_method_unmap test_construct_lvol_fio_clear_method_unmap 00:12:34.910 12:32:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:34.910 12:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:34.910 12:32:17 -- common/autotest_common.sh@10 -- # set +x 00:12:34.910 ************************************ 00:12:34.910 START TEST test_construct_lvol_fio_clear_method_unmap 00:12:34.910 ************************************ 00:12:34.910 12:32:17 -- common/autotest_common.sh@1104 -- # test_construct_lvol_fio_clear_method_unmap 00:12:34.910 12:32:17 -- lvol/basic.sh@229 -- # local nbd_name=/dev/nbd0 00:12:34.910 12:32:17 -- lvol/basic.sh@230 -- # local clear_method=unmap 00:12:34.910 12:32:17 -- lvol/basic.sh@232 -- # local lvstore_name=lvs_test lvstore_uuid 00:12:34.910 12:32:17 -- lvol/basic.sh@233 -- # local lvol_name=lvol_test lvol_uuid 00:12:34.910 12:32:17 -- lvol/basic.sh@234 -- # local malloc_dev 00:12:34.910 12:32:17 -- lvol/basic.sh@236 -- # rpc_cmd bdev_malloc_create 256 512 00:12:34.910 12:32:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:34.910 12:32:17 -- common/autotest_common.sh@10 -- # set +x 00:12:35.169 12:32:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.169 12:32:17 -- lvol/basic.sh@236 -- # malloc_dev=Malloc8 00:12:35.169 12:32:17 -- 
lvol/basic.sh@238 -- # nbd_start_disks /var/tmp/spdk.sock Malloc8 /dev/nbd0 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc8') 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@12 -- # local i 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.169 12:32:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk Malloc8 /dev/nbd0 00:12:35.427 /dev/nbd0 00:12:35.427 12:32:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.427 12:32:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.427 12:32:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:35.427 12:32:17 -- common/autotest_common.sh@857 -- # local i 00:12:35.427 12:32:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:35.427 12:32:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:35.427 12:32:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:35.427 12:32:17 -- common/autotest_common.sh@861 -- # break 00:12:35.427 12:32:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:35.427 12:32:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:35.427 12:32:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:12:35.427 1+0 records in 00:12:35.427 1+0 records out 00:12:35.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287777 s, 14.2 MB/s 00:12:35.427 12:32:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:35.427 12:32:17 -- common/autotest_common.sh@874 -- # size=4096 00:12:35.427 12:32:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:35.427 12:32:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:35.427 12:32:17 -- common/autotest_common.sh@877 -- # return 0 00:12:35.427 12:32:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.427 12:32:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.427 12:32:17 -- lvol/basic.sh@239 -- # run_fio_test /dev/nbd0 0 268435456 write 0xdd 00:12:35.427 12:32:17 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:35.427 12:32:17 -- lvol/common.sh@41 -- # local offset=0 00:12:35.427 12:32:17 -- lvol/common.sh@42 -- # local size=268435456 00:12:35.427 12:32:17 -- lvol/common.sh@43 -- # local rw=write 00:12:35.427 12:32:17 -- lvol/common.sh@44 -- # local pattern=0xdd 00:12:35.427 12:32:17 -- lvol/common.sh@45 -- # local extra_params= 00:12:35.427 12:32:17 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:35.427 12:32:17 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:12:35.427 12:32:17 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:35.427 12:32:17 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=268435456 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:35.427 12:32:17 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=268435456 --rw=write 
--direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:12:35.427 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:35.427 fio-3.35 00:12:35.427 Starting 1 process 00:12:47.629 00:12:47.629 fio_test: (groupid=0, jobs=1): err= 0: pid=58455: Tue Oct 1 12:32:28 2024 00:12:47.629 read: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(256MiB/5216msec) 00:12:47.629 clat (usec): min=56, max=2685, avg=78.38, stdev=31.24 00:12:47.629 lat (usec): min=56, max=2685, avg=78.47, stdev=31.24 00:12:47.629 clat percentiles (usec): 00:12:47.629 | 1.00th=[ 63], 5.00th=[ 68], 10.00th=[ 68], 20.00th=[ 69], 00:12:47.629 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 75], 00:12:47.629 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 98], 95.00th=[ 106], 00:12:47.629 | 99.00th=[ 126], 99.50th=[ 135], 99.90th=[ 174], 99.95th=[ 478], 00:12:47.629 | 99.99th=[ 1483] 00:12:47.629 write: IOPS=13.3k, BW=51.8MiB/s (54.3MB/s)(256MiB/4942msec); 0 zone resets 00:12:47.629 clat (usec): min=51, max=2882, avg=73.80, stdev=27.82 00:12:47.629 lat (usec): min=51, max=2883, avg=74.68, stdev=27.91 00:12:47.629 clat percentiles (usec): 00:12:47.629 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 61], 00:12:47.629 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 76], 00:12:47.629 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 102], 00:12:47.629 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 190], 00:12:47.629 | 99.99th=[ 1123] 00:12:47.629 bw ( KiB/s): min=46144, max=60960, per=98.84%, avg=52428.80, stdev=4883.56, samples=10 00:12:47.629 iops : min=11536, max=15240, avg=13107.20, stdev=1220.89, samples=10 00:12:47.629 lat (usec) : 100=92.99%, 250=6.96%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:47.629 lat (msec) : 2=0.02%, 4=0.01% 00:12:47.629 cpu : usr=4.11%, sys=7.84%, ctx=137819, majf=0, minf=1586 00:12:47.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.629 issued rwts: total=65536,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.629 00:12:47.629 Run status group 0 (all jobs): 00:12:47.629 READ: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=256MiB (268MB), run=5216-5216msec 00:12:47.629 WRITE: bw=51.8MiB/s (54.3MB/s), 51.8MiB/s-51.8MiB/s (54.3MB/s-54.3MB/s), io=256MiB (268MB), run=4942-4942msec 00:12:47.629 00:12:47.629 Disk stats (read/write): 00:12:47.629 nbd0: ios=65418/65536, merge=0/0, ticks=4674/4352, in_queue=9025, util=99.20% 00:12:47.629 12:32:28 -- lvol/basic.sh@240 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@51 -- # local i 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@41 -- # break 00:12:47.629 12:32:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.629 12:32:28 -- lvol/basic.sh@242 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none Malloc8 lvs_test 00:12:47.629 12:32:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.629 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:12:47.629 12:32:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.629 12:32:28 -- lvol/basic.sh@242 -- # lvstore_uuid=9fffcf9a-d431-4dc2-bc01-12a637e21c5f 00:12:47.629 12:32:28 -- lvol/basic.sh@243 -- # get_lvs_jq bdev_lvol_get_lvstores -u 9fffcf9a-d431-4dc2-bc01-12a637e21c5f 00:12:47.629 12:32:28 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u 9fffcf9a-d431-4dc2-bc01-12a637e21c5f 00:12:47.629 12:32:28 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:12:47.629 12:32:28 -- common/autotest_common.sh@585 -- # jq_out=() 00:12:47.629 12:32:28 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:12:47.629 12:32:28 -- common/autotest_common.sh@586 -- # local jq val 00:12:47.629 12:32:28 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:12:47.629 12:32:28 -- common/autotest_common.sh@596 -- # local lvs 00:12:47.629 12:32:28 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:12:47.630 12:32:28 -- common/autotest_common.sh@611 -- # local bdev 00:12:47.630 12:32:28 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," 
",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:12:47.630 12:32:28 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:12:47.630 12:32:28 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:12:47.630 12:32:28 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:12:47.630 12:32:28 -- common/autotest_common.sh@620 -- # shift 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u 9fffcf9a-d431-4dc2-bc01-12a637e21c5f 00:12:47.630 12:32:28 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:12:47.630 12:32:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.630 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:12:47.630 12:32:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=9fffcf9a-d431-4dc2-bc01-12a637e21c5f 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=Malloc8 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:12:47.630 12:32:28 -- common/autotest_common.sh@621 -- # read -r elem val 00:12:47.630 12:32:28 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:12:47.630 12:32:28 -- lvol/basic.sh@249 -- # rpc_cmd bdev_lvol_create -c unmap -u 9fffcf9a-d431-4dc2-bc01-12a637e21c5f lvol_test 4 00:12:47.630 12:32:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.630 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:12:47.630 12:32:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.630 12:32:28 -- lvol/basic.sh@249 -- # lvol_uuid=473728b2-1e39-43ab-ae26-d6b3adf89176 00:12:47.630 12:32:28 -- lvol/basic.sh@251 -- # nbd_start_disks /var/tmp/spdk.sock 473728b2-1e39-43ab-ae26-d6b3adf89176 /dev/nbd0 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('473728b2-1e39-43ab-ae26-d6b3adf89176') 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:47.630 
12:32:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@12 -- # local i 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 473728b2-1e39-43ab-ae26-d6b3adf89176 /dev/nbd0 00:12:47.630 /dev/nbd0 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.630 12:32:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:47.630 12:32:28 -- common/autotest_common.sh@857 -- # local i 00:12:47.630 12:32:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.630 12:32:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.630 12:32:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:47.630 12:32:28 -- common/autotest_common.sh@861 -- # break 00:12:47.630 12:32:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.630 12:32:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.630 12:32:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:12:47.630 1+0 records in 00:12:47.630 1+0 records out 00:12:47.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526356 s, 7.8 MB/s 00:12:47.630 12:32:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:47.630 12:32:28 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.630 12:32:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:47.630 12:32:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.630 12:32:28 -- common/autotest_common.sh@877 -- # return 0 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.630 12:32:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.630 12:32:28 -- lvol/basic.sh@252 -- # run_fio_test /dev/nbd0 0 4194304 read 0xdd 00:12:47.630 12:32:28 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:47.630 12:32:28 -- lvol/common.sh@41 -- # local offset=0 00:12:47.630 12:32:28 -- lvol/common.sh@42 -- # local size=4194304 00:12:47.630 12:32:28 -- lvol/common.sh@43 -- # local rw=read 00:12:47.630 12:32:28 -- lvol/common.sh@44 -- # local pattern=0xdd 00:12:47.630 12:32:28 -- lvol/common.sh@45 -- # local extra_params= 00:12:47.630 12:32:28 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:47.630 12:32:28 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:12:47.630 12:32:28 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:47.630 12:32:28 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:47.630 12:32:28 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:12:47.630 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:47.630 fio-3.35 00:12:47.630 Starting 1 process 00:12:47.630 00:12:47.630 fio_test: (groupid=0, jobs=1): err= 0: pid=58589: Tue Oct 1 12:32:29 2024 00:12:47.630 read: 
IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(4096KiB/101msec) 00:12:47.630 clat (usec): min=72, max=260, avg=96.68, stdev=19.99 00:12:47.630 lat (usec): min=72, max=262, avg=96.80, stdev=20.01 00:12:47.630 clat percentiles (usec): 00:12:47.630 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 81], 00:12:47.630 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 97], 00:12:47.630 | 70.00th=[ 103], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 137], 00:12:47.630 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 262], 00:12:47.630 | 99.99th=[ 262] 00:12:47.630 lat (usec) : 100=66.31%, 250=33.59%, 500=0.10% 00:12:47.630 cpu : usr=2.00%, sys=10.00%, ctx=1140, majf=0, minf=9 00:12:47.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.630 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.630 00:12:47.630 Run status group 0 (all jobs): 00:12:47.630 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=4096KiB (4194kB), run=101-101msec 00:12:47.630 00:12:47.630 Disk stats (read/write): 00:12:47.630 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:12:47.630 12:32:29 -- lvol/basic.sh@253 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@51 -- # local i 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@41 -- # break 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.630 12:32:29 -- lvol/basic.sh@255 -- # rpc_cmd bdev_lvol_delete 473728b2-1e39-43ab-ae26-d6b3adf89176 00:12:47.630 12:32:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.630 12:32:29 -- common/autotest_common.sh@10 -- # set +x 00:12:47.630 12:32:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.630 12:32:29 -- lvol/basic.sh@256 -- # rpc_cmd bdev_lvol_delete_lvstore -u 9fffcf9a-d431-4dc2-bc01-12a637e21c5f 00:12:47.630 12:32:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.630 12:32:29 -- common/autotest_common.sh@10 -- # set +x 00:12:47.630 12:32:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.630 12:32:29 -- lvol/basic.sh@257 -- # nbd_start_disks /var/tmp/spdk.sock Malloc8 /dev/nbd0 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc8') 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.630 12:32:29 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:47.630 12:32:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@12 -- # local i 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk Malloc8 /dev/nbd0 00:12:47.631 /dev/nbd0 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.631 12:32:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:47.631 12:32:29 -- common/autotest_common.sh@857 -- # local i 00:12:47.631 12:32:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:47.631 12:32:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:47.631 12:32:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:47.631 12:32:29 -- common/autotest_common.sh@861 -- # break 00:12:47.631 12:32:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:47.631 12:32:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:47.631 12:32:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:12:47.631 1+0 records in 00:12:47.631 1+0 records out 00:12:47.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220362 s, 18.6 MB/s 00:12:47.631 12:32:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:47.631 12:32:29 -- common/autotest_common.sh@874 -- # size=4096 00:12:47.631 12:32:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:12:47.631 12:32:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:47.631 12:32:29 -- common/autotest_common.sh@877 -- # return 0 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.631 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.631 12:32:29 -- lvol/basic.sh@259 -- # local metadata_pages 00:12:47.631 12:32:29 -- lvol/basic.sh@260 -- # local last_metadata_lba 00:12:47.631 12:32:29 -- lvol/basic.sh@261 -- # local offset_metadata_end 00:12:47.631 12:32:29 -- lvol/basic.sh@262 -- # local last_cluster_of_metadata 00:12:47.631 12:32:29 -- lvol/basic.sh@263 -- # local offset 00:12:47.631 12:32:29 -- lvol/basic.sh@264 -- # local size_metadata_end 00:12:47.631 12:32:29 -- lvol/basic.sh@266 -- # calc '1 + 63 + ceil(5 + ceil(63 / 8) / 4096) * 3' 00:12:47.631 12:32:29 -- lvol/common.sh@57 -- # bc -l 00:12:47.631 12:32:29 -- lvol/basic.sh@266 -- # metadata_pages=79 00:12:47.631 12:32:29 -- lvol/basic.sh@268 -- # last_metadata_lba=632 00:12:47.631 12:32:29 -- lvol/basic.sh@269 -- # offset_metadata_end=323584 00:12:47.631 12:32:29 -- lvol/basic.sh@270 -- # calc 'ceil(79 / 4194304 / 4096)' 00:12:47.631 12:32:29 -- lvol/common.sh@57 -- # bc -l 00:12:47.631 12:32:29 -- lvol/basic.sh@270 -- # last_cluster_of_metadata=1 00:12:47.631 12:32:29 -- lvol/basic.sh@271 -- # last_cluster_of_metadata=1 00:12:47.631 12:32:29 -- lvol/basic.sh@272 -- # offset=4194304 00:12:47.631 12:32:29 -- lvol/basic.sh@273 -- # size_metadata_end=3870720 00:12:47.631 12:32:29 -- lvol/basic.sh@276 -- # run_fio_test /dev/nbd0 323584 3870720 read 0xdd 00:12:47.631 12:32:29 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:47.631 12:32:29 -- lvol/common.sh@41 -- # local offset=323584 00:12:47.631 12:32:29 -- 
lvol/common.sh@42 -- # local size=3870720 00:12:47.631 12:32:29 -- lvol/common.sh@43 -- # local rw=read 00:12:47.631 12:32:29 -- lvol/common.sh@44 -- # local pattern=0xdd 00:12:47.631 12:32:29 -- lvol/common.sh@45 -- # local extra_params= 00:12:47.631 12:32:29 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:47.631 12:32:29 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:12:47.631 12:32:29 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:47.631 12:32:29 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=323584 --size=3870720 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:12:47.631 12:32:29 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=323584 --size=3870720 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:12:47.631 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:47.631 fio-3.35 00:12:47.631 Starting 1 process 00:12:47.631 00:12:47.631 fio_test: (groupid=0, jobs=1): err= 0: pid=58621: Tue Oct 1 12:32:30 2024 00:12:47.631 read: IOPS=12.3k, BW=47.9MiB/s (50.3MB/s)(3780KiB/77msec) 00:12:47.631 clat (usec): min=62, max=268, avg=79.07, stdev=15.55 00:12:47.631 lat (usec): min=62, max=268, avg=79.20, stdev=15.58 00:12:47.631 clat percentiles (usec): 00:12:47.631 | 1.00th=[ 65], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 68], 00:12:47.631 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:12:47.631 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 105], 00:12:47.631 | 99.00th=[ 125], 99.50th=[ 139], 99.90th=[ 269], 99.95th=[ 269], 00:12:47.631 | 99.99th=[ 269] 00:12:47.631 lat (usec) : 100=91.32%, 250=8.57%, 500=0.11% 00:12:47.631 cpu : usr=3.95%, sys=10.53%, ctx=946, majf=0, minf=11 00:12:47.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.631 issued rwts: total=945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.631 00:12:47.631 Run status group 0 (all jobs): 00:12:47.631 READ: bw=47.9MiB/s (50.3MB/s), 47.9MiB/s-47.9MiB/s (50.3MB/s-50.3MB/s), io=3780KiB (3871kB), run=77-77msec 00:12:47.631 00:12:47.631 Disk stats (read/write): 00:12:47.631 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:12:47.631 12:32:30 -- lvol/basic.sh@278 -- # run_fio_test /dev/nbd0 4194304 4194304 read 0x00 00:12:47.631 12:32:30 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:12:47.631 12:32:30 -- lvol/common.sh@41 -- # local offset=4194304 00:12:47.631 12:32:30 -- lvol/common.sh@42 -- # local size=4194304 00:12:47.631 12:32:30 -- lvol/common.sh@43 -- # local rw=read 00:12:47.631 12:32:30 -- lvol/common.sh@44 -- # local pattern=0x00 00:12:47.631 12:32:30 -- lvol/common.sh@45 -- # local extra_params= 00:12:47.631 12:32:30 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:47.631 12:32:30 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:12:47.631 12:32:30 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:12:47.631 12:32:30 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 
--offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:12:47.631 12:32:30 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:12:47.890 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:47.890 fio-3.35 00:12:47.890 Starting 1 process 00:12:48.149 00:12:48.149 fio_test: (groupid=0, jobs=1): err= 0: pid=58634: Tue Oct 1 12:32:30 2024 00:12:48.149 read: IOPS=7314, BW=28.6MiB/s (30.0MB/s)(4096KiB/140msec) 00:12:48.149 clat (usec): min=83, max=292, avg=132.79, stdev=23.01 00:12:48.149 lat (usec): min=83, max=292, avg=133.18, stdev=23.07 00:12:48.149 clat percentiles (usec): 00:12:48.149 | 1.00th=[ 85], 5.00th=[ 94], 10.00th=[ 111], 20.00th=[ 116], 00:12:48.149 | 30.00th=[ 119], 40.00th=[ 125], 50.00th=[ 133], 60.00th=[ 137], 00:12:48.149 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 172], 00:12:48.149 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 247], 99.95th=[ 293], 00:12:48.149 | 99.99th=[ 293] 00:12:48.149 lat (usec) : 100=6.54%, 250=93.36%, 500=0.10% 00:12:48.149 cpu : usr=3.60%, sys=13.67%, ctx=1024, majf=0, minf=9 00:12:48.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:48.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.149 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:48.149 00:12:48.149 Run status group 0 (all jobs): 00:12:48.149 READ: bw=28.6MiB/s (30.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=4096KiB (4194kB), run=140-140msec 00:12:48.149 00:12:48.149 Disk stats (read/write): 00:12:48.149 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:12:48.149 12:32:30 -- lvol/basic.sh@280 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:48.149 12:32:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.149 12:32:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.149 12:32:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.149 12:32:30 -- bdev/nbd_common.sh@51 -- # local i 00:12:48.149 12:32:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.149 12:32:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@41 -- # break 00:12:48.408 12:32:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.408 12:32:30 -- lvol/basic.sh@281 -- # rpc_cmd bdev_malloc_delete Malloc8 00:12:48.408 12:32:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.408 12:32:30 -- common/autotest_common.sh@10 -- # set +x 00:12:49.008 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.008 12:32:31 -- lvol/basic.sh@283 -- # check_leftover_devices 
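For reference, the check_leftover_devices call traced below simply asks the RPC target whether any bdevs or lvol stores survived the test and asserts both lists are empty. A minimal sketch of that check, assuming only the rpc.py subcommands and jq usage visible in this trace (the real helper lives in test/lvol/common.sh and may differ in detail):
    # hypothetical standalone version of the leftover-device check
    leftover_bdevs=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs)        # expect "[]"
    [ "$(jq length <<< "$leftover_bdevs")" -eq 0 ]                               # no bdevs left behind
    leftover_lvs=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_lvol_get_lvstores)  # expect "[]"
    [ "$(jq length <<< "$leftover_lvs")" -eq 0 ]                                 # no lvol stores left behind
The xtrace entries that follow show exactly this sequence: bdev_get_bdevs, jq length == 0, then bdev_lvol_get_lvstores, jq length == 0.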
00:12:49.008 12:32:31 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:49.008 12:32:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.008 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.008 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.008 12:32:31 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:49.008 12:32:31 -- lvol/common.sh@26 -- # jq length 00:12:49.008 12:32:31 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:49.008 12:32:31 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:49.008 12:32:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.008 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.008 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.008 12:32:31 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:49.008 12:32:31 -- lvol/common.sh@28 -- # jq length 00:12:49.008 ************************************ 00:12:49.008 END TEST test_construct_lvol_fio_clear_method_unmap 00:12:49.008 ************************************ 00:12:49.008 12:32:31 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:49.008 00:12:49.008 real 0m14.236s 00:12:49.008 user 0m2.458s 00:12:49.008 sys 0m1.224s 00:12:49.008 12:32:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.008 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 12:32:31 -- lvol/basic.sh@584 -- # run_test test_construct_lvol test_construct_lvol 00:12:49.267 12:32:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:49.267 12:32:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:49.267 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 ************************************ 00:12:49.267 START TEST test_construct_lvol 00:12:49.267 ************************************ 00:12:49.267 12:32:31 -- common/autotest_common.sh@1104 -- # test_construct_lvol 00:12:49.267 12:32:31 -- lvol/basic.sh@289 -- # rpc_cmd bdev_malloc_create 128 512 00:12:49.267 12:32:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.267 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.267 12:32:31 -- lvol/basic.sh@289 -- # malloc_name=Malloc9 00:12:49.267 12:32:31 -- lvol/basic.sh@290 -- # rpc_cmd bdev_lvol_create_lvstore Malloc9 lvs_test 00:12:49.267 12:32:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.267 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.267 12:32:31 -- lvol/basic.sh@290 -- # lvs_uuid=4fc4e1af-6189-46f9-aa6f-a4d69efd6425 00:12:49.267 12:32:31 -- lvol/basic.sh@293 -- # rpc_cmd bdev_lvol_create -u 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 lvol_test 124 00:12:49.267 12:32:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.267 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.267 12:32:31 -- lvol/basic.sh@293 -- # lvol_uuid=efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 00:12:49.267 12:32:31 -- lvol/basic.sh@294 -- # rpc_cmd bdev_get_bdevs -b efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 00:12:49.267 12:32:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.267 12:32:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 12:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.267 12:32:31 -- lvol/basic.sh@294 -- # lvol='[ 00:12:49.267 { 00:12:49.267 "name": 
"efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1", 00:12:49.267 "aliases": [ 00:12:49.267 "lvs_test/lvol_test" 00:12:49.267 ], 00:12:49.267 "product_name": "Logical Volume", 00:12:49.267 "block_size": 512, 00:12:49.267 "num_blocks": 253952, 00:12:49.267 "uuid": "efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1", 00:12:49.267 "assigned_rate_limits": { 00:12:49.267 "rw_ios_per_sec": 0, 00:12:49.267 "rw_mbytes_per_sec": 0, 00:12:49.267 "r_mbytes_per_sec": 0, 00:12:49.267 "w_mbytes_per_sec": 0 00:12:49.267 }, 00:12:49.267 "claimed": false, 00:12:49.267 "zoned": false, 00:12:49.267 "supported_io_types": { 00:12:49.267 "read": true, 00:12:49.267 "write": true, 00:12:49.267 "unmap": true, 00:12:49.267 "write_zeroes": true, 00:12:49.267 "flush": false, 00:12:49.267 "reset": true, 00:12:49.267 "compare": false, 00:12:49.267 "compare_and_write": false, 00:12:49.267 "abort": false, 00:12:49.267 "nvme_admin": false, 00:12:49.267 "nvme_io": false 00:12:49.267 }, 00:12:49.267 "memory_domains": [ 00:12:49.267 { 00:12:49.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.267 "dma_device_type": 2 00:12:49.267 } 00:12:49.267 ], 00:12:49.267 "driver_specific": { 00:12:49.267 "lvol": { 00:12:49.267 "lvol_store_uuid": "4fc4e1af-6189-46f9-aa6f-a4d69efd6425", 00:12:49.267 "base_bdev": "Malloc9", 00:12:49.267 "thin_provision": false, 00:12:49.267 "snapshot": false, 00:12:49.267 "clone": false, 00:12:49.267 "esnap_clone": false 00:12:49.267 } 00:12:49.267 } 00:12:49.267 } 00:12:49.267 ]' 00:12:49.267 12:32:31 -- lvol/basic.sh@296 -- # jq -r '.[0].name' 00:12:49.526 12:32:31 -- lvol/basic.sh@296 -- # '[' efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 = efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 ']' 00:12:49.526 12:32:31 -- lvol/basic.sh@297 -- # jq -r '.[0].uuid' 00:12:49.526 12:32:31 -- lvol/basic.sh@297 -- # '[' efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 = efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 ']' 00:12:49.526 12:32:31 -- lvol/basic.sh@298 -- # jq -r '.[0].aliases[0]' 00:12:49.526 12:32:31 -- lvol/basic.sh@298 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:12:49.526 12:32:31 -- lvol/basic.sh@299 -- # jq -r '.[0].block_size' 00:12:49.526 12:32:31 -- lvol/basic.sh@299 -- # '[' 512 = 512 ']' 00:12:49.526 12:32:31 -- lvol/basic.sh@300 -- # jq -r '.[0].num_blocks' 00:12:49.526 12:32:32 -- lvol/basic.sh@300 -- # '[' 253952 = 253952 ']' 00:12:49.526 12:32:32 -- lvol/basic.sh@301 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:12:49.785 12:32:32 -- lvol/basic.sh@301 -- # '[' 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 = 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 ']' 00:12:49.785 12:32:32 -- lvol/basic.sh@304 -- # rpc_cmd bdev_lvol_delete efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 00:12:49.785 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.785 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.785 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.785 12:32:32 -- lvol/basic.sh@305 -- # rpc_cmd bdev_get_bdevs -b efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 00:12:49.785 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.785 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.785 [2024-10-01 12:32:32.083241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1 00:12:49.785 request: 00:12:49.785 { 00:12:49.785 "name": "efeb33a2-e8ea-4558-8709-cbb2bb0cdbf1", 00:12:49.785 "method": "bdev_get_bdevs", 00:12:49.785 "req_id": 1 00:12:49.785 } 00:12:49.785 Got JSON-RPC error response 00:12:49.785 response: 00:12:49.785 { 00:12:49.785 
"code": -19, 00:12:49.785 "message": "No such device" 00:12:49.785 } 00:12:49.785 12:32:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:49.785 12:32:32 -- lvol/basic.sh@306 -- # rpc_cmd bdev_lvol_create -l lvs_test lvol_test 124 00:12:49.785 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.785 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.785 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.785 12:32:32 -- lvol/basic.sh@306 -- # lvol_uuid=dba88cae-263d-4a6a-9fd1-c9aaeac4b31c 00:12:49.785 12:32:32 -- lvol/basic.sh@307 -- # rpc_cmd bdev_get_bdevs -b dba88cae-263d-4a6a-9fd1-c9aaeac4b31c 00:12:49.785 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.785 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.785 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.785 12:32:32 -- lvol/basic.sh@307 -- # lvol='[ 00:12:49.785 { 00:12:49.785 "name": "dba88cae-263d-4a6a-9fd1-c9aaeac4b31c", 00:12:49.785 "aliases": [ 00:12:49.785 "lvs_test/lvol_test" 00:12:49.785 ], 00:12:49.785 "product_name": "Logical Volume", 00:12:49.785 "block_size": 512, 00:12:49.785 "num_blocks": 253952, 00:12:49.785 "uuid": "dba88cae-263d-4a6a-9fd1-c9aaeac4b31c", 00:12:49.785 "assigned_rate_limits": { 00:12:49.785 "rw_ios_per_sec": 0, 00:12:49.785 "rw_mbytes_per_sec": 0, 00:12:49.785 "r_mbytes_per_sec": 0, 00:12:49.785 "w_mbytes_per_sec": 0 00:12:49.785 }, 00:12:49.785 "claimed": false, 00:12:49.785 "zoned": false, 00:12:49.785 "supported_io_types": { 00:12:49.785 "read": true, 00:12:49.785 "write": true, 00:12:49.785 "unmap": true, 00:12:49.785 "write_zeroes": true, 00:12:49.785 "flush": false, 00:12:49.785 "reset": true, 00:12:49.785 "compare": false, 00:12:49.785 "compare_and_write": false, 00:12:49.785 "abort": false, 00:12:49.785 "nvme_admin": false, 00:12:49.785 "nvme_io": false 00:12:49.785 }, 00:12:49.785 "memory_domains": [ 00:12:49.785 { 00:12:49.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.785 "dma_device_type": 2 00:12:49.785 } 00:12:49.785 ], 00:12:49.785 "driver_specific": { 00:12:49.785 "lvol": { 00:12:49.785 "lvol_store_uuid": "4fc4e1af-6189-46f9-aa6f-a4d69efd6425", 00:12:49.785 "base_bdev": "Malloc9", 00:12:49.785 "thin_provision": false, 00:12:49.785 "snapshot": false, 00:12:49.785 "clone": false, 00:12:49.785 "esnap_clone": false 00:12:49.785 } 00:12:49.785 } 00:12:49.785 } 00:12:49.785 ]' 00:12:49.785 12:32:32 -- lvol/basic.sh@309 -- # jq -r '.[0].name' 00:12:49.785 12:32:32 -- lvol/basic.sh@309 -- # '[' dba88cae-263d-4a6a-9fd1-c9aaeac4b31c = dba88cae-263d-4a6a-9fd1-c9aaeac4b31c ']' 00:12:49.785 12:32:32 -- lvol/basic.sh@310 -- # jq -r '.[0].uuid' 00:12:49.785 12:32:32 -- lvol/basic.sh@310 -- # '[' dba88cae-263d-4a6a-9fd1-c9aaeac4b31c = dba88cae-263d-4a6a-9fd1-c9aaeac4b31c ']' 00:12:49.785 12:32:32 -- lvol/basic.sh@311 -- # jq -r '.[0].aliases[0]' 00:12:49.785 12:32:32 -- lvol/basic.sh@311 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:12:49.785 12:32:32 -- lvol/basic.sh@312 -- # jq -r '.[0].block_size' 00:12:50.045 12:32:32 -- lvol/basic.sh@312 -- # '[' 512 = 512 ']' 00:12:50.045 12:32:32 -- lvol/basic.sh@313 -- # jq -r '.[0].num_blocks' 00:12:50.045 12:32:32 -- lvol/basic.sh@313 -- # '[' 253952 = 253952 ']' 00:12:50.045 12:32:32 -- lvol/basic.sh@314 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:12:50.045 12:32:32 -- lvol/basic.sh@314 -- # '[' 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 = 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 ']' 00:12:50.045 12:32:32 -- lvol/basic.sh@317 
-- # rpc_cmd bdev_lvol_delete dba88cae-263d-4a6a-9fd1-c9aaeac4b31c 00:12:50.045 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.045 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.045 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.045 12:32:32 -- lvol/basic.sh@318 -- # rpc_cmd bdev_get_bdevs -b dba88cae-263d-4a6a-9fd1-c9aaeac4b31c 00:12:50.045 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.045 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.045 [2024-10-01 12:32:32.457893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: dba88cae-263d-4a6a-9fd1-c9aaeac4b31c 00:12:50.045 request: 00:12:50.045 { 00:12:50.045 "name": "dba88cae-263d-4a6a-9fd1-c9aaeac4b31c", 00:12:50.045 "method": "bdev_get_bdevs", 00:12:50.045 "req_id": 1 00:12:50.045 } 00:12:50.045 Got JSON-RPC error response 00:12:50.045 response: 00:12:50.045 { 00:12:50.045 "code": -19, 00:12:50.045 "message": "No such device" 00:12:50.045 } 00:12:50.045 12:32:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:50.045 12:32:32 -- lvol/basic.sh@319 -- # rpc_cmd bdev_lvol_delete_lvstore -u 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 00:12:50.045 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.045 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.045 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.045 12:32:32 -- lvol/basic.sh@320 -- # rpc_cmd bdev_lvol_get_lvstores -u 4fc4e1af-6189-46f9-aa6f-a4d69efd6425 00:12:50.045 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.045 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.045 request: 00:12:50.045 { 00:12:50.045 "uuid": "4fc4e1af-6189-46f9-aa6f-a4d69efd6425", 00:12:50.045 "method": "bdev_lvol_get_lvstores", 00:12:50.045 "req_id": 1 00:12:50.045 } 00:12:50.045 Got JSON-RPC error response 00:12:50.045 response: 00:12:50.045 { 00:12:50.045 "code": -19, 00:12:50.045 "message": "No such device" 00:12:50.045 } 00:12:50.045 12:32:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:50.045 12:32:32 -- lvol/basic.sh@321 -- # rpc_cmd bdev_malloc_delete Malloc9 00:12:50.045 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.045 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.304 12:32:32 -- lvol/basic.sh@322 -- # check_leftover_devices 00:12:50.304 12:32:32 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:50.304 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.304 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.304 12:32:32 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:50.304 12:32:32 -- lvol/common.sh@26 -- # jq length 00:12:50.563 12:32:32 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:50.563 12:32:32 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:50.563 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.563 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.563 12:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.563 12:32:32 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:50.563 12:32:32 -- lvol/common.sh@28 -- # jq length 00:12:50.563 ************************************ 00:12:50.563 END TEST test_construct_lvol 00:12:50.563 ************************************ 00:12:50.563 12:32:32 -- 
lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:50.563 00:12:50.563 real 0m1.331s 00:12:50.563 user 0m0.698s 00:12:50.563 sys 0m0.089s 00:12:50.563 12:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.563 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.563 12:32:32 -- lvol/basic.sh@585 -- # run_test test_construct_multi_lvols test_construct_multi_lvols 00:12:50.563 12:32:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:50.563 12:32:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:50.563 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.563 ************************************ 00:12:50.563 START TEST test_construct_multi_lvols 00:12:50.563 ************************************ 00:12:50.563 12:32:32 -- common/autotest_common.sh@1104 -- # test_construct_multi_lvols 00:12:50.563 12:32:32 -- lvol/basic.sh@328 -- # rpc_cmd bdev_malloc_create 128 512 00:12:50.563 12:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.563 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:12:50.563 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.563 12:32:33 -- lvol/basic.sh@328 -- # malloc_name=Malloc10 00:12:50.563 12:32:33 -- lvol/basic.sh@329 -- # rpc_cmd bdev_lvol_create_lvstore Malloc10 lvs_test 00:12:50.563 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.563 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:50.821 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.821 12:32:33 -- lvol/basic.sh@329 -- # lvs_uuid=d8302065-b3b6-48b6-93ee-3d7c6141db70 00:12:50.821 12:32:33 -- lvol/basic.sh@332 -- # lvol_size_mb=31 00:12:50.821 12:32:33 -- lvol/basic.sh@334 -- # lvol_size_mb=28 00:12:50.821 12:32:33 -- lvol/basic.sh@335 -- # lvol_size=29360128 00:12:50.821 12:32:33 -- lvol/basic.sh@336 -- # seq 1 4 00:12:50.821 12:32:33 -- lvol/basic.sh@336 -- # for i in $(seq 1 4) 00:12:50.821 12:32:33 -- lvol/basic.sh@337 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test1 28 00:12:50.821 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.821 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:50.821 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.821 12:32:33 -- lvol/basic.sh@337 -- # lvol_uuid=420b83ff-0730-4925-b5d9-33f663058712 00:12:50.821 12:32:33 -- lvol/basic.sh@338 -- # rpc_cmd bdev_get_bdevs -b 420b83ff-0730-4925-b5d9-33f663058712 00:12:50.821 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.821 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:50.821 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.821 12:32:33 -- lvol/basic.sh@338 -- # lvol='[ 00:12:50.821 { 00:12:50.821 "name": "420b83ff-0730-4925-b5d9-33f663058712", 00:12:50.821 "aliases": [ 00:12:50.821 "lvs_test/lvol_test1" 00:12:50.821 ], 00:12:50.821 "product_name": "Logical Volume", 00:12:50.821 "block_size": 512, 00:12:50.821 "num_blocks": 57344, 00:12:50.821 "uuid": "420b83ff-0730-4925-b5d9-33f663058712", 00:12:50.821 "assigned_rate_limits": { 00:12:50.821 "rw_ios_per_sec": 0, 00:12:50.821 "rw_mbytes_per_sec": 0, 00:12:50.821 "r_mbytes_per_sec": 0, 00:12:50.821 "w_mbytes_per_sec": 0 00:12:50.822 }, 00:12:50.822 "claimed": false, 00:12:50.822 "zoned": false, 00:12:50.822 "supported_io_types": { 00:12:50.822 "read": true, 00:12:50.822 "write": true, 00:12:50.822 "unmap": true, 00:12:50.822 "write_zeroes": true, 00:12:50.822 "flush": false, 00:12:50.822 "reset": true, 
00:12:50.822 "compare": false, 00:12:50.822 "compare_and_write": false, 00:12:50.822 "abort": false, 00:12:50.822 "nvme_admin": false, 00:12:50.822 "nvme_io": false 00:12:50.822 }, 00:12:50.822 "memory_domains": [ 00:12:50.822 { 00:12:50.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.822 "dma_device_type": 2 00:12:50.822 } 00:12:50.822 ], 00:12:50.822 "driver_specific": { 00:12:50.822 "lvol": { 00:12:50.822 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:50.822 "base_bdev": "Malloc10", 00:12:50.822 "thin_provision": false, 00:12:50.822 "snapshot": false, 00:12:50.822 "clone": false, 00:12:50.822 "esnap_clone": false 00:12:50.822 } 00:12:50.822 } 00:12:50.822 } 00:12:50.822 ]' 00:12:50.822 12:32:33 -- lvol/basic.sh@340 -- # jq -r '.[0].name' 00:12:50.822 12:32:33 -- lvol/basic.sh@340 -- # '[' 420b83ff-0730-4925-b5d9-33f663058712 = 420b83ff-0730-4925-b5d9-33f663058712 ']' 00:12:50.822 12:32:33 -- lvol/basic.sh@341 -- # jq -r '.[0].uuid' 00:12:50.822 12:32:33 -- lvol/basic.sh@341 -- # '[' 420b83ff-0730-4925-b5d9-33f663058712 = 420b83ff-0730-4925-b5d9-33f663058712 ']' 00:12:50.822 12:32:33 -- lvol/basic.sh@342 -- # jq -r '.[0].aliases[0]' 00:12:50.822 12:32:33 -- lvol/basic.sh@342 -- # '[' lvs_test/lvol_test1 = lvs_test/lvol_test1 ']' 00:12:50.822 12:32:33 -- lvol/basic.sh@343 -- # jq -r '.[0].block_size' 00:12:51.081 12:32:33 -- lvol/basic.sh@343 -- # '[' 512 = 512 ']' 00:12:51.081 12:32:33 -- lvol/basic.sh@344 -- # jq -r '.[0].num_blocks' 00:12:51.081 12:32:33 -- lvol/basic.sh@344 -- # '[' 57344 = 57344 ']' 00:12:51.081 12:32:33 -- lvol/basic.sh@336 -- # for i in $(seq 1 4) 00:12:51.081 12:32:33 -- lvol/basic.sh@337 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test2 28 00:12:51.081 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.081 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.081 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.081 12:32:33 -- lvol/basic.sh@337 -- # lvol_uuid=f31885f4-1014-45bd-a697-725cf8ad7b2a 00:12:51.081 12:32:33 -- lvol/basic.sh@338 -- # rpc_cmd bdev_get_bdevs -b f31885f4-1014-45bd-a697-725cf8ad7b2a 00:12:51.081 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.081 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.081 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.081 12:32:33 -- lvol/basic.sh@338 -- # lvol='[ 00:12:51.081 { 00:12:51.081 "name": "f31885f4-1014-45bd-a697-725cf8ad7b2a", 00:12:51.081 "aliases": [ 00:12:51.081 "lvs_test/lvol_test2" 00:12:51.081 ], 00:12:51.081 "product_name": "Logical Volume", 00:12:51.081 "block_size": 512, 00:12:51.081 "num_blocks": 57344, 00:12:51.081 "uuid": "f31885f4-1014-45bd-a697-725cf8ad7b2a", 00:12:51.081 "assigned_rate_limits": { 00:12:51.081 "rw_ios_per_sec": 0, 00:12:51.081 "rw_mbytes_per_sec": 0, 00:12:51.081 "r_mbytes_per_sec": 0, 00:12:51.081 "w_mbytes_per_sec": 0 00:12:51.081 }, 00:12:51.081 "claimed": false, 00:12:51.081 "zoned": false, 00:12:51.081 "supported_io_types": { 00:12:51.081 "read": true, 00:12:51.081 "write": true, 00:12:51.081 "unmap": true, 00:12:51.081 "write_zeroes": true, 00:12:51.081 "flush": false, 00:12:51.081 "reset": true, 00:12:51.081 "compare": false, 00:12:51.081 "compare_and_write": false, 00:12:51.081 "abort": false, 00:12:51.081 "nvme_admin": false, 00:12:51.081 "nvme_io": false 00:12:51.081 }, 00:12:51.081 "memory_domains": [ 00:12:51.081 { 00:12:51.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.081 "dma_device_type": 2 
00:12:51.081 } 00:12:51.081 ], 00:12:51.081 "driver_specific": { 00:12:51.081 "lvol": { 00:12:51.081 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.081 "base_bdev": "Malloc10", 00:12:51.081 "thin_provision": false, 00:12:51.081 "snapshot": false, 00:12:51.081 "clone": false, 00:12:51.081 "esnap_clone": false 00:12:51.081 } 00:12:51.081 } 00:12:51.081 } 00:12:51.081 ]' 00:12:51.081 12:32:33 -- lvol/basic.sh@340 -- # jq -r '.[0].name' 00:12:51.081 12:32:33 -- lvol/basic.sh@340 -- # '[' f31885f4-1014-45bd-a697-725cf8ad7b2a = f31885f4-1014-45bd-a697-725cf8ad7b2a ']' 00:12:51.082 12:32:33 -- lvol/basic.sh@341 -- # jq -r '.[0].uuid' 00:12:51.082 12:32:33 -- lvol/basic.sh@341 -- # '[' f31885f4-1014-45bd-a697-725cf8ad7b2a = f31885f4-1014-45bd-a697-725cf8ad7b2a ']' 00:12:51.082 12:32:33 -- lvol/basic.sh@342 -- # jq -r '.[0].aliases[0]' 00:12:51.082 12:32:33 -- lvol/basic.sh@342 -- # '[' lvs_test/lvol_test2 = lvs_test/lvol_test2 ']' 00:12:51.082 12:32:33 -- lvol/basic.sh@343 -- # jq -r '.[0].block_size' 00:12:51.340 12:32:33 -- lvol/basic.sh@343 -- # '[' 512 = 512 ']' 00:12:51.340 12:32:33 -- lvol/basic.sh@344 -- # jq -r '.[0].num_blocks' 00:12:51.340 12:32:33 -- lvol/basic.sh@344 -- # '[' 57344 = 57344 ']' 00:12:51.340 12:32:33 -- lvol/basic.sh@336 -- # for i in $(seq 1 4) 00:12:51.340 12:32:33 -- lvol/basic.sh@337 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test3 28 00:12:51.340 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.340 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.340 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.340 12:32:33 -- lvol/basic.sh@337 -- # lvol_uuid=19366aa6-5019-4ae0-b3dc-146a8bc0406c 00:12:51.340 12:32:33 -- lvol/basic.sh@338 -- # rpc_cmd bdev_get_bdevs -b 19366aa6-5019-4ae0-b3dc-146a8bc0406c 00:12:51.340 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.340 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.340 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.340 12:32:33 -- lvol/basic.sh@338 -- # lvol='[ 00:12:51.340 { 00:12:51.340 "name": "19366aa6-5019-4ae0-b3dc-146a8bc0406c", 00:12:51.340 "aliases": [ 00:12:51.340 "lvs_test/lvol_test3" 00:12:51.340 ], 00:12:51.340 "product_name": "Logical Volume", 00:12:51.340 "block_size": 512, 00:12:51.340 "num_blocks": 57344, 00:12:51.340 "uuid": "19366aa6-5019-4ae0-b3dc-146a8bc0406c", 00:12:51.340 "assigned_rate_limits": { 00:12:51.340 "rw_ios_per_sec": 0, 00:12:51.340 "rw_mbytes_per_sec": 0, 00:12:51.340 "r_mbytes_per_sec": 0, 00:12:51.340 "w_mbytes_per_sec": 0 00:12:51.340 }, 00:12:51.340 "claimed": false, 00:12:51.340 "zoned": false, 00:12:51.340 "supported_io_types": { 00:12:51.340 "read": true, 00:12:51.340 "write": true, 00:12:51.340 "unmap": true, 00:12:51.340 "write_zeroes": true, 00:12:51.340 "flush": false, 00:12:51.340 "reset": true, 00:12:51.340 "compare": false, 00:12:51.340 "compare_and_write": false, 00:12:51.340 "abort": false, 00:12:51.340 "nvme_admin": false, 00:12:51.340 "nvme_io": false 00:12:51.340 }, 00:12:51.340 "memory_domains": [ 00:12:51.340 { 00:12:51.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.340 "dma_device_type": 2 00:12:51.340 } 00:12:51.340 ], 00:12:51.340 "driver_specific": { 00:12:51.340 "lvol": { 00:12:51.340 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.340 "base_bdev": "Malloc10", 00:12:51.340 "thin_provision": false, 00:12:51.340 "snapshot": false, 00:12:51.340 "clone": false, 00:12:51.340 
"esnap_clone": false 00:12:51.340 } 00:12:51.340 } 00:12:51.340 } 00:12:51.340 ]' 00:12:51.340 12:32:33 -- lvol/basic.sh@340 -- # jq -r '.[0].name' 00:12:51.340 12:32:33 -- lvol/basic.sh@340 -- # '[' 19366aa6-5019-4ae0-b3dc-146a8bc0406c = 19366aa6-5019-4ae0-b3dc-146a8bc0406c ']' 00:12:51.340 12:32:33 -- lvol/basic.sh@341 -- # jq -r '.[0].uuid' 00:12:51.340 12:32:33 -- lvol/basic.sh@341 -- # '[' 19366aa6-5019-4ae0-b3dc-146a8bc0406c = 19366aa6-5019-4ae0-b3dc-146a8bc0406c ']' 00:12:51.340 12:32:33 -- lvol/basic.sh@342 -- # jq -r '.[0].aliases[0]' 00:12:51.599 12:32:33 -- lvol/basic.sh@342 -- # '[' lvs_test/lvol_test3 = lvs_test/lvol_test3 ']' 00:12:51.599 12:32:33 -- lvol/basic.sh@343 -- # jq -r '.[0].block_size' 00:12:51.599 12:32:33 -- lvol/basic.sh@343 -- # '[' 512 = 512 ']' 00:12:51.599 12:32:33 -- lvol/basic.sh@344 -- # jq -r '.[0].num_blocks' 00:12:51.599 12:32:33 -- lvol/basic.sh@344 -- # '[' 57344 = 57344 ']' 00:12:51.599 12:32:33 -- lvol/basic.sh@336 -- # for i in $(seq 1 4) 00:12:51.599 12:32:33 -- lvol/basic.sh@337 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test4 28 00:12:51.599 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.599 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 12:32:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.599 12:32:33 -- lvol/basic.sh@337 -- # lvol_uuid=058568ee-2d46-4a73-ba29-f77fbe7892ea 00:12:51.600 12:32:33 -- lvol/basic.sh@338 -- # rpc_cmd bdev_get_bdevs -b 058568ee-2d46-4a73-ba29-f77fbe7892ea 00:12:51.600 12:32:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.600 12:32:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.600 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.600 12:32:34 -- lvol/basic.sh@338 -- # lvol='[ 00:12:51.600 { 00:12:51.600 "name": "058568ee-2d46-4a73-ba29-f77fbe7892ea", 00:12:51.600 "aliases": [ 00:12:51.600 "lvs_test/lvol_test4" 00:12:51.600 ], 00:12:51.600 "product_name": "Logical Volume", 00:12:51.600 "block_size": 512, 00:12:51.600 "num_blocks": 57344, 00:12:51.600 "uuid": "058568ee-2d46-4a73-ba29-f77fbe7892ea", 00:12:51.600 "assigned_rate_limits": { 00:12:51.600 "rw_ios_per_sec": 0, 00:12:51.600 "rw_mbytes_per_sec": 0, 00:12:51.600 "r_mbytes_per_sec": 0, 00:12:51.600 "w_mbytes_per_sec": 0 00:12:51.600 }, 00:12:51.600 "claimed": false, 00:12:51.600 "zoned": false, 00:12:51.600 "supported_io_types": { 00:12:51.600 "read": true, 00:12:51.600 "write": true, 00:12:51.600 "unmap": true, 00:12:51.600 "write_zeroes": true, 00:12:51.600 "flush": false, 00:12:51.600 "reset": true, 00:12:51.600 "compare": false, 00:12:51.600 "compare_and_write": false, 00:12:51.600 "abort": false, 00:12:51.600 "nvme_admin": false, 00:12:51.600 "nvme_io": false 00:12:51.600 }, 00:12:51.600 "memory_domains": [ 00:12:51.600 { 00:12:51.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.600 "dma_device_type": 2 00:12:51.600 } 00:12:51.600 ], 00:12:51.600 "driver_specific": { 00:12:51.600 "lvol": { 00:12:51.600 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.600 "base_bdev": "Malloc10", 00:12:51.600 "thin_provision": false, 00:12:51.600 "snapshot": false, 00:12:51.600 "clone": false, 00:12:51.600 "esnap_clone": false 00:12:51.600 } 00:12:51.600 } 00:12:51.600 } 00:12:51.600 ]' 00:12:51.600 12:32:34 -- lvol/basic.sh@340 -- # jq -r '.[0].name' 00:12:51.600 12:32:34 -- lvol/basic.sh@340 -- # '[' 058568ee-2d46-4a73-ba29-f77fbe7892ea = 058568ee-2d46-4a73-ba29-f77fbe7892ea ']' 00:12:51.600 12:32:34 -- 
lvol/basic.sh@341 -- # jq -r '.[0].uuid' 00:12:51.600 12:32:34 -- lvol/basic.sh@341 -- # '[' 058568ee-2d46-4a73-ba29-f77fbe7892ea = 058568ee-2d46-4a73-ba29-f77fbe7892ea ']' 00:12:51.859 12:32:34 -- lvol/basic.sh@342 -- # jq -r '.[0].aliases[0]' 00:12:51.859 12:32:34 -- lvol/basic.sh@342 -- # '[' lvs_test/lvol_test4 = lvs_test/lvol_test4 ']' 00:12:51.859 12:32:34 -- lvol/basic.sh@343 -- # jq -r '.[0].block_size' 00:12:51.859 12:32:34 -- lvol/basic.sh@343 -- # '[' 512 = 512 ']' 00:12:51.859 12:32:34 -- lvol/basic.sh@344 -- # jq -r '.[0].num_blocks' 00:12:51.859 12:32:34 -- lvol/basic.sh@344 -- # '[' 57344 = 57344 ']' 00:12:51.859 12:32:34 -- lvol/basic.sh@347 -- # rpc_cmd bdev_get_bdevs 00:12:51.859 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.859 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:51.859 12:32:34 -- lvol/basic.sh@347 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:12:51.859 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.859 12:32:34 -- lvol/basic.sh@347 -- # lvols='[ 00:12:51.859 { 00:12:51.859 "name": "420b83ff-0730-4925-b5d9-33f663058712", 00:12:51.859 "aliases": [ 00:12:51.859 "lvs_test/lvol_test1" 00:12:51.859 ], 00:12:51.859 "product_name": "Logical Volume", 00:12:51.859 "block_size": 512, 00:12:51.859 "num_blocks": 57344, 00:12:51.859 "uuid": "420b83ff-0730-4925-b5d9-33f663058712", 00:12:51.859 "assigned_rate_limits": { 00:12:51.859 "rw_ios_per_sec": 0, 00:12:51.859 "rw_mbytes_per_sec": 0, 00:12:51.859 "r_mbytes_per_sec": 0, 00:12:51.859 "w_mbytes_per_sec": 0 00:12:51.859 }, 00:12:51.859 "claimed": false, 00:12:51.859 "zoned": false, 00:12:51.859 "supported_io_types": { 00:12:51.859 "read": true, 00:12:51.859 "write": true, 00:12:51.859 "unmap": true, 00:12:51.859 "write_zeroes": true, 00:12:51.859 "flush": false, 00:12:51.859 "reset": true, 00:12:51.859 "compare": false, 00:12:51.859 "compare_and_write": false, 00:12:51.859 "abort": false, 00:12:51.859 "nvme_admin": false, 00:12:51.859 "nvme_io": false 00:12:51.859 }, 00:12:51.859 "memory_domains": [ 00:12:51.859 { 00:12:51.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.859 "dma_device_type": 2 00:12:51.859 } 00:12:51.859 ], 00:12:51.859 "driver_specific": { 00:12:51.859 "lvol": { 00:12:51.859 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.859 "base_bdev": "Malloc10", 00:12:51.859 "thin_provision": false, 00:12:51.859 "snapshot": false, 00:12:51.859 "clone": false, 00:12:51.859 "esnap_clone": false 00:12:51.859 } 00:12:51.859 } 00:12:51.859 }, 00:12:51.859 { 00:12:51.859 "name": "f31885f4-1014-45bd-a697-725cf8ad7b2a", 00:12:51.859 "aliases": [ 00:12:51.859 "lvs_test/lvol_test2" 00:12:51.859 ], 00:12:51.859 "product_name": "Logical Volume", 00:12:51.859 "block_size": 512, 00:12:51.859 "num_blocks": 57344, 00:12:51.859 "uuid": "f31885f4-1014-45bd-a697-725cf8ad7b2a", 00:12:51.859 "assigned_rate_limits": { 00:12:51.859 "rw_ios_per_sec": 0, 00:12:51.859 "rw_mbytes_per_sec": 0, 00:12:51.859 "r_mbytes_per_sec": 0, 00:12:51.859 "w_mbytes_per_sec": 0 00:12:51.859 }, 00:12:51.860 "claimed": false, 00:12:51.860 "zoned": false, 00:12:51.860 "supported_io_types": { 00:12:51.860 "read": true, 00:12:51.860 "write": true, 00:12:51.860 "unmap": true, 00:12:51.860 "write_zeroes": true, 00:12:51.860 "flush": false, 00:12:51.860 "reset": true, 00:12:51.860 "compare": false, 00:12:51.860 "compare_and_write": false, 00:12:51.860 "abort": false, 00:12:51.860 "nvme_admin": false, 00:12:51.860 "nvme_io": false 00:12:51.860 }, 00:12:51.860 
"memory_domains": [ 00:12:51.860 { 00:12:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.860 "dma_device_type": 2 00:12:51.860 } 00:12:51.860 ], 00:12:51.860 "driver_specific": { 00:12:51.860 "lvol": { 00:12:51.860 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.860 "base_bdev": "Malloc10", 00:12:51.860 "thin_provision": false, 00:12:51.860 "snapshot": false, 00:12:51.860 "clone": false, 00:12:51.860 "esnap_clone": false 00:12:51.860 } 00:12:51.860 } 00:12:51.860 }, 00:12:51.860 { 00:12:51.860 "name": "19366aa6-5019-4ae0-b3dc-146a8bc0406c", 00:12:51.860 "aliases": [ 00:12:51.860 "lvs_test/lvol_test3" 00:12:51.860 ], 00:12:51.860 "product_name": "Logical Volume", 00:12:51.860 "block_size": 512, 00:12:51.860 "num_blocks": 57344, 00:12:51.860 "uuid": "19366aa6-5019-4ae0-b3dc-146a8bc0406c", 00:12:51.860 "assigned_rate_limits": { 00:12:51.860 "rw_ios_per_sec": 0, 00:12:51.860 "rw_mbytes_per_sec": 0, 00:12:51.860 "r_mbytes_per_sec": 0, 00:12:51.860 "w_mbytes_per_sec": 0 00:12:51.860 }, 00:12:51.860 "claimed": false, 00:12:51.860 "zoned": false, 00:12:51.860 "supported_io_types": { 00:12:51.860 "read": true, 00:12:51.860 "write": true, 00:12:51.860 "unmap": true, 00:12:51.860 "write_zeroes": true, 00:12:51.860 "flush": false, 00:12:51.860 "reset": true, 00:12:51.860 "compare": false, 00:12:51.860 "compare_and_write": false, 00:12:51.860 "abort": false, 00:12:51.860 "nvme_admin": false, 00:12:51.860 "nvme_io": false 00:12:51.860 }, 00:12:51.860 "memory_domains": [ 00:12:51.860 { 00:12:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.860 "dma_device_type": 2 00:12:51.860 } 00:12:51.860 ], 00:12:51.860 "driver_specific": { 00:12:51.860 "lvol": { 00:12:51.860 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.860 "base_bdev": "Malloc10", 00:12:51.860 "thin_provision": false, 00:12:51.860 "snapshot": false, 00:12:51.860 "clone": false, 00:12:51.860 "esnap_clone": false 00:12:51.860 } 00:12:51.860 } 00:12:51.860 }, 00:12:51.860 { 00:12:51.860 "name": "058568ee-2d46-4a73-ba29-f77fbe7892ea", 00:12:51.860 "aliases": [ 00:12:51.860 "lvs_test/lvol_test4" 00:12:51.860 ], 00:12:51.860 "product_name": "Logical Volume", 00:12:51.860 "block_size": 512, 00:12:51.860 "num_blocks": 57344, 00:12:51.860 "uuid": "058568ee-2d46-4a73-ba29-f77fbe7892ea", 00:12:51.860 "assigned_rate_limits": { 00:12:51.860 "rw_ios_per_sec": 0, 00:12:51.860 "rw_mbytes_per_sec": 0, 00:12:51.860 "r_mbytes_per_sec": 0, 00:12:51.860 "w_mbytes_per_sec": 0 00:12:51.860 }, 00:12:51.860 "claimed": false, 00:12:51.860 "zoned": false, 00:12:51.860 "supported_io_types": { 00:12:51.860 "read": true, 00:12:51.860 "write": true, 00:12:51.860 "unmap": true, 00:12:51.860 "write_zeroes": true, 00:12:51.860 "flush": false, 00:12:51.860 "reset": true, 00:12:51.860 "compare": false, 00:12:51.860 "compare_and_write": false, 00:12:51.860 "abort": false, 00:12:51.860 "nvme_admin": false, 00:12:51.860 "nvme_io": false 00:12:51.860 }, 00:12:51.860 "memory_domains": [ 00:12:51.860 { 00:12:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.860 "dma_device_type": 2 00:12:51.860 } 00:12:51.860 ], 00:12:51.860 "driver_specific": { 00:12:51.860 "lvol": { 00:12:51.860 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:51.860 "base_bdev": "Malloc10", 00:12:51.860 "thin_provision": false, 00:12:51.860 "snapshot": false, 00:12:51.860 "clone": false, 00:12:51.860 "esnap_clone": false 00:12:51.860 } 00:12:51.860 } 00:12:51.860 } 00:12:51.860 ]' 00:12:51.860 12:32:34 -- lvol/basic.sh@348 -- # jq 
length 00:12:51.860 12:32:34 -- lvol/basic.sh@348 -- # '[' 4 == 4 ']' 00:12:51.860 12:32:34 -- lvol/basic.sh@351 -- # seq 0 3 00:12:51.860 12:32:34 -- lvol/basic.sh@351 -- # for i in $(seq 0 3) 00:12:51.860 12:32:34 -- lvol/basic.sh@352 -- # jq -r '.[0].name' 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # lvol_uuid=420b83ff-0730-4925-b5d9-33f663058712 00:12:52.120 12:32:34 -- lvol/basic.sh@353 -- # rpc_cmd bdev_lvol_delete 420b83ff-0730-4925-b5d9-33f663058712 00:12:52.120 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.120 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.120 12:32:34 -- lvol/basic.sh@351 -- # for i in $(seq 0 3) 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # jq -r '.[1].name' 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # lvol_uuid=f31885f4-1014-45bd-a697-725cf8ad7b2a 00:12:52.120 12:32:34 -- lvol/basic.sh@353 -- # rpc_cmd bdev_lvol_delete f31885f4-1014-45bd-a697-725cf8ad7b2a 00:12:52.120 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.120 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.120 12:32:34 -- lvol/basic.sh@351 -- # for i in $(seq 0 3) 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # jq -r '.[2].name' 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # lvol_uuid=19366aa6-5019-4ae0-b3dc-146a8bc0406c 00:12:52.120 12:32:34 -- lvol/basic.sh@353 -- # rpc_cmd bdev_lvol_delete 19366aa6-5019-4ae0-b3dc-146a8bc0406c 00:12:52.120 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.120 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.120 12:32:34 -- lvol/basic.sh@351 -- # for i in $(seq 0 3) 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # jq -r '.[3].name' 00:12:52.120 12:32:34 -- lvol/basic.sh@352 -- # lvol_uuid=058568ee-2d46-4a73-ba29-f77fbe7892ea 00:12:52.120 12:32:34 -- lvol/basic.sh@353 -- # rpc_cmd bdev_lvol_delete 058568ee-2d46-4a73-ba29-f77fbe7892ea 00:12:52.120 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.120 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.120 12:32:34 -- lvol/basic.sh@355 -- # rpc_cmd bdev_get_bdevs 00:12:52.120 12:32:34 -- lvol/basic.sh@355 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:12:52.120 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.120 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.379 12:32:34 -- lvol/basic.sh@355 -- # lvols='[]' 00:12:52.379 12:32:34 -- lvol/basic.sh@356 -- # jq length 00:12:52.379 12:32:34 -- lvol/basic.sh@356 -- # '[' 0 == 0 ']' 00:12:52.379 12:32:34 -- lvol/basic.sh@359 -- # seq 1 4 00:12:52.379 12:32:34 -- lvol/basic.sh@359 -- # for i in $(seq 1 4) 00:12:52.379 12:32:34 -- lvol/basic.sh@360 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test1 28 00:12:52.379 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.379 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.379 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.379 12:32:34 -- lvol/basic.sh@360 -- # lvol_uuid=f61ac75c-2243-4477-98c7-dab237cc02a1 00:12:52.379 12:32:34 -- lvol/basic.sh@361 -- # rpc_cmd bdev_get_bdevs -b 
f61ac75c-2243-4477-98c7-dab237cc02a1 00:12:52.379 12:32:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.379 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.379 12:32:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.379 12:32:34 -- lvol/basic.sh@361 -- # lvol='[ 00:12:52.379 { 00:12:52.379 "name": "f61ac75c-2243-4477-98c7-dab237cc02a1", 00:12:52.379 "aliases": [ 00:12:52.379 "lvs_test/lvol_test1" 00:12:52.379 ], 00:12:52.379 "product_name": "Logical Volume", 00:12:52.379 "block_size": 512, 00:12:52.379 "num_blocks": 57344, 00:12:52.379 "uuid": "f61ac75c-2243-4477-98c7-dab237cc02a1", 00:12:52.379 "assigned_rate_limits": { 00:12:52.379 "rw_ios_per_sec": 0, 00:12:52.379 "rw_mbytes_per_sec": 0, 00:12:52.379 "r_mbytes_per_sec": 0, 00:12:52.379 "w_mbytes_per_sec": 0 00:12:52.379 }, 00:12:52.379 "claimed": false, 00:12:52.379 "zoned": false, 00:12:52.379 "supported_io_types": { 00:12:52.379 "read": true, 00:12:52.379 "write": true, 00:12:52.379 "unmap": true, 00:12:52.379 "write_zeroes": true, 00:12:52.379 "flush": false, 00:12:52.379 "reset": true, 00:12:52.379 "compare": false, 00:12:52.379 "compare_and_write": false, 00:12:52.379 "abort": false, 00:12:52.379 "nvme_admin": false, 00:12:52.379 "nvme_io": false 00:12:52.379 }, 00:12:52.379 "memory_domains": [ 00:12:52.379 { 00:12:52.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.379 "dma_device_type": 2 00:12:52.379 } 00:12:52.379 ], 00:12:52.379 "driver_specific": { 00:12:52.379 "lvol": { 00:12:52.379 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:52.379 "base_bdev": "Malloc10", 00:12:52.379 "thin_provision": false, 00:12:52.379 "snapshot": false, 00:12:52.379 "clone": false, 00:12:52.379 "esnap_clone": false 00:12:52.379 } 00:12:52.379 } 00:12:52.379 } 00:12:52.379 ]' 00:12:52.379 12:32:34 -- lvol/basic.sh@363 -- # jq -r '.[0].name' 00:12:52.379 12:32:34 -- lvol/basic.sh@363 -- # '[' f61ac75c-2243-4477-98c7-dab237cc02a1 = f61ac75c-2243-4477-98c7-dab237cc02a1 ']' 00:12:52.379 12:32:34 -- lvol/basic.sh@364 -- # jq -r '.[0].uuid' 00:12:52.379 12:32:34 -- lvol/basic.sh@364 -- # '[' f61ac75c-2243-4477-98c7-dab237cc02a1 = f61ac75c-2243-4477-98c7-dab237cc02a1 ']' 00:12:52.379 12:32:34 -- lvol/basic.sh@365 -- # jq -r '.[0].aliases[0]' 00:12:52.638 12:32:34 -- lvol/basic.sh@365 -- # '[' lvs_test/lvol_test1 = lvs_test/lvol_test1 ']' 00:12:52.638 12:32:34 -- lvol/basic.sh@366 -- # jq -r '.[0].block_size' 00:12:52.638 12:32:34 -- lvol/basic.sh@366 -- # '[' 512 = 512 ']' 00:12:52.638 12:32:34 -- lvol/basic.sh@367 -- # jq -r '.[0].num_blocks' 00:12:52.638 12:32:35 -- lvol/basic.sh@367 -- # '[' 57344 = 57344 ']' 00:12:52.638 12:32:35 -- lvol/basic.sh@359 -- # for i in $(seq 1 4) 00:12:52.638 12:32:35 -- lvol/basic.sh@360 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test2 28 00:12:52.638 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.638 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:52.638 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.638 12:32:35 -- lvol/basic.sh@360 -- # lvol_uuid=9d06cb16-1664-4805-9719-2449299b47a5 00:12:52.638 12:32:35 -- lvol/basic.sh@361 -- # rpc_cmd bdev_get_bdevs -b 9d06cb16-1664-4805-9719-2449299b47a5 00:12:52.638 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.638 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:52.638 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.638 12:32:35 -- lvol/basic.sh@361 -- # lvol='[ 
00:12:52.638 { 00:12:52.638 "name": "9d06cb16-1664-4805-9719-2449299b47a5", 00:12:52.638 "aliases": [ 00:12:52.638 "lvs_test/lvol_test2" 00:12:52.638 ], 00:12:52.638 "product_name": "Logical Volume", 00:12:52.638 "block_size": 512, 00:12:52.638 "num_blocks": 57344, 00:12:52.638 "uuid": "9d06cb16-1664-4805-9719-2449299b47a5", 00:12:52.638 "assigned_rate_limits": { 00:12:52.638 "rw_ios_per_sec": 0, 00:12:52.638 "rw_mbytes_per_sec": 0, 00:12:52.638 "r_mbytes_per_sec": 0, 00:12:52.638 "w_mbytes_per_sec": 0 00:12:52.638 }, 00:12:52.638 "claimed": false, 00:12:52.638 "zoned": false, 00:12:52.638 "supported_io_types": { 00:12:52.638 "read": true, 00:12:52.638 "write": true, 00:12:52.638 "unmap": true, 00:12:52.638 "write_zeroes": true, 00:12:52.638 "flush": false, 00:12:52.638 "reset": true, 00:12:52.638 "compare": false, 00:12:52.638 "compare_and_write": false, 00:12:52.638 "abort": false, 00:12:52.638 "nvme_admin": false, 00:12:52.638 "nvme_io": false 00:12:52.638 }, 00:12:52.638 "memory_domains": [ 00:12:52.638 { 00:12:52.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.638 "dma_device_type": 2 00:12:52.638 } 00:12:52.638 ], 00:12:52.638 "driver_specific": { 00:12:52.638 "lvol": { 00:12:52.638 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:52.638 "base_bdev": "Malloc10", 00:12:52.638 "thin_provision": false, 00:12:52.638 "snapshot": false, 00:12:52.638 "clone": false, 00:12:52.638 "esnap_clone": false 00:12:52.638 } 00:12:52.638 } 00:12:52.638 } 00:12:52.638 ]' 00:12:52.638 12:32:35 -- lvol/basic.sh@363 -- # jq -r '.[0].name' 00:12:52.638 12:32:35 -- lvol/basic.sh@363 -- # '[' 9d06cb16-1664-4805-9719-2449299b47a5 = 9d06cb16-1664-4805-9719-2449299b47a5 ']' 00:12:52.638 12:32:35 -- lvol/basic.sh@364 -- # jq -r '.[0].uuid' 00:12:52.638 12:32:35 -- lvol/basic.sh@364 -- # '[' 9d06cb16-1664-4805-9719-2449299b47a5 = 9d06cb16-1664-4805-9719-2449299b47a5 ']' 00:12:52.638 12:32:35 -- lvol/basic.sh@365 -- # jq -r '.[0].aliases[0]' 00:12:52.897 12:32:35 -- lvol/basic.sh@365 -- # '[' lvs_test/lvol_test2 = lvs_test/lvol_test2 ']' 00:12:52.897 12:32:35 -- lvol/basic.sh@366 -- # jq -r '.[0].block_size' 00:12:52.897 12:32:35 -- lvol/basic.sh@366 -- # '[' 512 = 512 ']' 00:12:52.897 12:32:35 -- lvol/basic.sh@367 -- # jq -r '.[0].num_blocks' 00:12:52.897 12:32:35 -- lvol/basic.sh@367 -- # '[' 57344 = 57344 ']' 00:12:52.897 12:32:35 -- lvol/basic.sh@359 -- # for i in $(seq 1 4) 00:12:52.897 12:32:35 -- lvol/basic.sh@360 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test3 28 00:12:52.897 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.897 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:52.897 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.897 12:32:35 -- lvol/basic.sh@360 -- # lvol_uuid=fea50b05-521f-4df0-81d5-29929d90c173 00:12:52.898 12:32:35 -- lvol/basic.sh@361 -- # rpc_cmd bdev_get_bdevs -b fea50b05-521f-4df0-81d5-29929d90c173 00:12:52.898 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.898 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:52.898 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.898 12:32:35 -- lvol/basic.sh@361 -- # lvol='[ 00:12:52.898 { 00:12:52.898 "name": "fea50b05-521f-4df0-81d5-29929d90c173", 00:12:52.898 "aliases": [ 00:12:52.898 "lvs_test/lvol_test3" 00:12:52.898 ], 00:12:52.898 "product_name": "Logical Volume", 00:12:52.898 "block_size": 512, 00:12:52.898 "num_blocks": 57344, 00:12:52.898 "uuid": 
"fea50b05-521f-4df0-81d5-29929d90c173", 00:12:52.898 "assigned_rate_limits": { 00:12:52.898 "rw_ios_per_sec": 0, 00:12:52.898 "rw_mbytes_per_sec": 0, 00:12:52.898 "r_mbytes_per_sec": 0, 00:12:52.898 "w_mbytes_per_sec": 0 00:12:52.898 }, 00:12:52.898 "claimed": false, 00:12:52.898 "zoned": false, 00:12:52.898 "supported_io_types": { 00:12:52.898 "read": true, 00:12:52.898 "write": true, 00:12:52.898 "unmap": true, 00:12:52.898 "write_zeroes": true, 00:12:52.898 "flush": false, 00:12:52.898 "reset": true, 00:12:52.898 "compare": false, 00:12:52.898 "compare_and_write": false, 00:12:52.898 "abort": false, 00:12:52.898 "nvme_admin": false, 00:12:52.898 "nvme_io": false 00:12:52.898 }, 00:12:52.898 "memory_domains": [ 00:12:52.898 { 00:12:52.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.898 "dma_device_type": 2 00:12:52.898 } 00:12:52.898 ], 00:12:52.898 "driver_specific": { 00:12:52.898 "lvol": { 00:12:52.898 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:52.898 "base_bdev": "Malloc10", 00:12:52.898 "thin_provision": false, 00:12:52.898 "snapshot": false, 00:12:52.898 "clone": false, 00:12:52.898 "esnap_clone": false 00:12:52.898 } 00:12:52.898 } 00:12:52.898 } 00:12:52.898 ]' 00:12:52.898 12:32:35 -- lvol/basic.sh@363 -- # jq -r '.[0].name' 00:12:52.898 12:32:35 -- lvol/basic.sh@363 -- # '[' fea50b05-521f-4df0-81d5-29929d90c173 = fea50b05-521f-4df0-81d5-29929d90c173 ']' 00:12:52.898 12:32:35 -- lvol/basic.sh@364 -- # jq -r '.[0].uuid' 00:12:53.157 12:32:35 -- lvol/basic.sh@364 -- # '[' fea50b05-521f-4df0-81d5-29929d90c173 = fea50b05-521f-4df0-81d5-29929d90c173 ']' 00:12:53.157 12:32:35 -- lvol/basic.sh@365 -- # jq -r '.[0].aliases[0]' 00:12:53.157 12:32:35 -- lvol/basic.sh@365 -- # '[' lvs_test/lvol_test3 = lvs_test/lvol_test3 ']' 00:12:53.157 12:32:35 -- lvol/basic.sh@366 -- # jq -r '.[0].block_size' 00:12:53.157 12:32:35 -- lvol/basic.sh@366 -- # '[' 512 = 512 ']' 00:12:53.157 12:32:35 -- lvol/basic.sh@367 -- # jq -r '.[0].num_blocks' 00:12:53.157 12:32:35 -- lvol/basic.sh@367 -- # '[' 57344 = 57344 ']' 00:12:53.157 12:32:35 -- lvol/basic.sh@359 -- # for i in $(seq 1 4) 00:12:53.157 12:32:35 -- lvol/basic.sh@360 -- # rpc_cmd bdev_lvol_create -u d8302065-b3b6-48b6-93ee-3d7c6141db70 lvol_test4 28 00:12:53.157 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.157 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:53.157 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.157 12:32:35 -- lvol/basic.sh@360 -- # lvol_uuid=c94a9b2a-4384-492d-8e1f-9d7334828f62 00:12:53.157 12:32:35 -- lvol/basic.sh@361 -- # rpc_cmd bdev_get_bdevs -b c94a9b2a-4384-492d-8e1f-9d7334828f62 00:12:53.157 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.157 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:53.157 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.157 12:32:35 -- lvol/basic.sh@361 -- # lvol='[ 00:12:53.157 { 00:12:53.157 "name": "c94a9b2a-4384-492d-8e1f-9d7334828f62", 00:12:53.157 "aliases": [ 00:12:53.157 "lvs_test/lvol_test4" 00:12:53.157 ], 00:12:53.157 "product_name": "Logical Volume", 00:12:53.157 "block_size": 512, 00:12:53.157 "num_blocks": 57344, 00:12:53.157 "uuid": "c94a9b2a-4384-492d-8e1f-9d7334828f62", 00:12:53.157 "assigned_rate_limits": { 00:12:53.157 "rw_ios_per_sec": 0, 00:12:53.157 "rw_mbytes_per_sec": 0, 00:12:53.157 "r_mbytes_per_sec": 0, 00:12:53.157 "w_mbytes_per_sec": 0 00:12:53.157 }, 00:12:53.157 "claimed": false, 00:12:53.157 "zoned": false, 00:12:53.157 
"supported_io_types": { 00:12:53.157 "read": true, 00:12:53.157 "write": true, 00:12:53.157 "unmap": true, 00:12:53.157 "write_zeroes": true, 00:12:53.157 "flush": false, 00:12:53.157 "reset": true, 00:12:53.157 "compare": false, 00:12:53.157 "compare_and_write": false, 00:12:53.157 "abort": false, 00:12:53.157 "nvme_admin": false, 00:12:53.157 "nvme_io": false 00:12:53.157 }, 00:12:53.157 "memory_domains": [ 00:12:53.157 { 00:12:53.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.157 "dma_device_type": 2 00:12:53.157 } 00:12:53.157 ], 00:12:53.157 "driver_specific": { 00:12:53.157 "lvol": { 00:12:53.157 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:53.157 "base_bdev": "Malloc10", 00:12:53.157 "thin_provision": false, 00:12:53.157 "snapshot": false, 00:12:53.157 "clone": false, 00:12:53.157 "esnap_clone": false 00:12:53.157 } 00:12:53.157 } 00:12:53.157 } 00:12:53.157 ]' 00:12:53.157 12:32:35 -- lvol/basic.sh@363 -- # jq -r '.[0].name' 00:12:53.417 12:32:35 -- lvol/basic.sh@363 -- # '[' c94a9b2a-4384-492d-8e1f-9d7334828f62 = c94a9b2a-4384-492d-8e1f-9d7334828f62 ']' 00:12:53.417 12:32:35 -- lvol/basic.sh@364 -- # jq -r '.[0].uuid' 00:12:53.417 12:32:35 -- lvol/basic.sh@364 -- # '[' c94a9b2a-4384-492d-8e1f-9d7334828f62 = c94a9b2a-4384-492d-8e1f-9d7334828f62 ']' 00:12:53.417 12:32:35 -- lvol/basic.sh@365 -- # jq -r '.[0].aliases[0]' 00:12:53.417 12:32:35 -- lvol/basic.sh@365 -- # '[' lvs_test/lvol_test4 = lvs_test/lvol_test4 ']' 00:12:53.417 12:32:35 -- lvol/basic.sh@366 -- # jq -r '.[0].block_size' 00:12:53.417 12:32:35 -- lvol/basic.sh@366 -- # '[' 512 = 512 ']' 00:12:53.417 12:32:35 -- lvol/basic.sh@367 -- # jq -r '.[0].num_blocks' 00:12:53.417 12:32:35 -- lvol/basic.sh@367 -- # '[' 57344 = 57344 ']' 00:12:53.417 12:32:35 -- lvol/basic.sh@370 -- # rpc_cmd bdev_get_bdevs 00:12:53.417 12:32:35 -- lvol/basic.sh@370 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:12:53.417 12:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.417 12:32:35 -- common/autotest_common.sh@10 -- # set +x 00:12:53.417 12:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.677 12:32:35 -- lvol/basic.sh@370 -- # lvols='[ 00:12:53.677 { 00:12:53.677 "name": "f61ac75c-2243-4477-98c7-dab237cc02a1", 00:12:53.677 "aliases": [ 00:12:53.677 "lvs_test/lvol_test1" 00:12:53.677 ], 00:12:53.677 "product_name": "Logical Volume", 00:12:53.677 "block_size": 512, 00:12:53.677 "num_blocks": 57344, 00:12:53.677 "uuid": "f61ac75c-2243-4477-98c7-dab237cc02a1", 00:12:53.677 "assigned_rate_limits": { 00:12:53.677 "rw_ios_per_sec": 0, 00:12:53.677 "rw_mbytes_per_sec": 0, 00:12:53.677 "r_mbytes_per_sec": 0, 00:12:53.677 "w_mbytes_per_sec": 0 00:12:53.677 }, 00:12:53.677 "claimed": false, 00:12:53.677 "zoned": false, 00:12:53.677 "supported_io_types": { 00:12:53.677 "read": true, 00:12:53.677 "write": true, 00:12:53.677 "unmap": true, 00:12:53.677 "write_zeroes": true, 00:12:53.677 "flush": false, 00:12:53.677 "reset": true, 00:12:53.677 "compare": false, 00:12:53.677 "compare_and_write": false, 00:12:53.677 "abort": false, 00:12:53.677 "nvme_admin": false, 00:12:53.677 "nvme_io": false 00:12:53.677 }, 00:12:53.677 "memory_domains": [ 00:12:53.677 { 00:12:53.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.677 "dma_device_type": 2 00:12:53.677 } 00:12:53.677 ], 00:12:53.677 "driver_specific": { 00:12:53.677 "lvol": { 00:12:53.677 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:53.677 "base_bdev": "Malloc10", 00:12:53.677 "thin_provision": false, 
00:12:53.677 "snapshot": false, 00:12:53.677 "clone": false, 00:12:53.677 "esnap_clone": false 00:12:53.677 } 00:12:53.677 } 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "name": "9d06cb16-1664-4805-9719-2449299b47a5", 00:12:53.677 "aliases": [ 00:12:53.677 "lvs_test/lvol_test2" 00:12:53.677 ], 00:12:53.677 "product_name": "Logical Volume", 00:12:53.677 "block_size": 512, 00:12:53.677 "num_blocks": 57344, 00:12:53.677 "uuid": "9d06cb16-1664-4805-9719-2449299b47a5", 00:12:53.677 "assigned_rate_limits": { 00:12:53.677 "rw_ios_per_sec": 0, 00:12:53.677 "rw_mbytes_per_sec": 0, 00:12:53.677 "r_mbytes_per_sec": 0, 00:12:53.677 "w_mbytes_per_sec": 0 00:12:53.677 }, 00:12:53.677 "claimed": false, 00:12:53.677 "zoned": false, 00:12:53.677 "supported_io_types": { 00:12:53.677 "read": true, 00:12:53.677 "write": true, 00:12:53.677 "unmap": true, 00:12:53.677 "write_zeroes": true, 00:12:53.677 "flush": false, 00:12:53.677 "reset": true, 00:12:53.677 "compare": false, 00:12:53.677 "compare_and_write": false, 00:12:53.677 "abort": false, 00:12:53.677 "nvme_admin": false, 00:12:53.677 "nvme_io": false 00:12:53.677 }, 00:12:53.677 "memory_domains": [ 00:12:53.677 { 00:12:53.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.677 "dma_device_type": 2 00:12:53.677 } 00:12:53.677 ], 00:12:53.677 "driver_specific": { 00:12:53.677 "lvol": { 00:12:53.677 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:53.677 "base_bdev": "Malloc10", 00:12:53.677 "thin_provision": false, 00:12:53.677 "snapshot": false, 00:12:53.677 "clone": false, 00:12:53.677 "esnap_clone": false 00:12:53.677 } 00:12:53.677 } 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "name": "fea50b05-521f-4df0-81d5-29929d90c173", 00:12:53.677 "aliases": [ 00:12:53.677 "lvs_test/lvol_test3" 00:12:53.677 ], 00:12:53.677 "product_name": "Logical Volume", 00:12:53.677 "block_size": 512, 00:12:53.677 "num_blocks": 57344, 00:12:53.677 "uuid": "fea50b05-521f-4df0-81d5-29929d90c173", 00:12:53.677 "assigned_rate_limits": { 00:12:53.677 "rw_ios_per_sec": 0, 00:12:53.677 "rw_mbytes_per_sec": 0, 00:12:53.677 "r_mbytes_per_sec": 0, 00:12:53.677 "w_mbytes_per_sec": 0 00:12:53.677 }, 00:12:53.677 "claimed": false, 00:12:53.677 "zoned": false, 00:12:53.677 "supported_io_types": { 00:12:53.677 "read": true, 00:12:53.677 "write": true, 00:12:53.677 "unmap": true, 00:12:53.677 "write_zeroes": true, 00:12:53.677 "flush": false, 00:12:53.677 "reset": true, 00:12:53.678 "compare": false, 00:12:53.678 "compare_and_write": false, 00:12:53.678 "abort": false, 00:12:53.678 "nvme_admin": false, 00:12:53.678 "nvme_io": false 00:12:53.678 }, 00:12:53.678 "memory_domains": [ 00:12:53.678 { 00:12:53.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.678 "dma_device_type": 2 00:12:53.678 } 00:12:53.678 ], 00:12:53.678 "driver_specific": { 00:12:53.678 "lvol": { 00:12:53.678 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:53.678 "base_bdev": "Malloc10", 00:12:53.678 "thin_provision": false, 00:12:53.678 "snapshot": false, 00:12:53.678 "clone": false, 00:12:53.678 "esnap_clone": false 00:12:53.678 } 00:12:53.678 } 00:12:53.678 }, 00:12:53.678 { 00:12:53.678 "name": "c94a9b2a-4384-492d-8e1f-9d7334828f62", 00:12:53.678 "aliases": [ 00:12:53.678 "lvs_test/lvol_test4" 00:12:53.678 ], 00:12:53.678 "product_name": "Logical Volume", 00:12:53.678 "block_size": 512, 00:12:53.678 "num_blocks": 57344, 00:12:53.678 "uuid": "c94a9b2a-4384-492d-8e1f-9d7334828f62", 00:12:53.678 "assigned_rate_limits": { 00:12:53.678 "rw_ios_per_sec": 0, 00:12:53.678 
"rw_mbytes_per_sec": 0, 00:12:53.678 "r_mbytes_per_sec": 0, 00:12:53.678 "w_mbytes_per_sec": 0 00:12:53.678 }, 00:12:53.678 "claimed": false, 00:12:53.678 "zoned": false, 00:12:53.678 "supported_io_types": { 00:12:53.678 "read": true, 00:12:53.678 "write": true, 00:12:53.678 "unmap": true, 00:12:53.678 "write_zeroes": true, 00:12:53.678 "flush": false, 00:12:53.678 "reset": true, 00:12:53.678 "compare": false, 00:12:53.678 "compare_and_write": false, 00:12:53.678 "abort": false, 00:12:53.678 "nvme_admin": false, 00:12:53.678 "nvme_io": false 00:12:53.678 }, 00:12:53.678 "memory_domains": [ 00:12:53.678 { 00:12:53.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.678 "dma_device_type": 2 00:12:53.678 } 00:12:53.678 ], 00:12:53.678 "driver_specific": { 00:12:53.678 "lvol": { 00:12:53.678 "lvol_store_uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:53.678 "base_bdev": "Malloc10", 00:12:53.678 "thin_provision": false, 00:12:53.678 "snapshot": false, 00:12:53.678 "clone": false, 00:12:53.678 "esnap_clone": false 00:12:53.678 } 00:12:53.678 } 00:12:53.678 } 00:12:53.678 ]' 00:12:53.678 12:32:35 -- lvol/basic.sh@371 -- # jq length 00:12:53.678 12:32:36 -- lvol/basic.sh@371 -- # '[' 4 == 4 ']' 00:12:53.678 12:32:36 -- lvol/basic.sh@374 -- # seq 0 3 00:12:53.678 12:32:36 -- lvol/basic.sh@374 -- # for i in $(seq 0 3) 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # jq -r '.[0].name' 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # lvol_uuid=f61ac75c-2243-4477-98c7-dab237cc02a1 00:12:53.678 12:32:36 -- lvol/basic.sh@376 -- # rpc_cmd bdev_lvol_delete f61ac75c-2243-4477-98c7-dab237cc02a1 00:12:53.678 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.678 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.678 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.678 12:32:36 -- lvol/basic.sh@374 -- # for i in $(seq 0 3) 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # jq -r '.[1].name' 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # lvol_uuid=9d06cb16-1664-4805-9719-2449299b47a5 00:12:53.678 12:32:36 -- lvol/basic.sh@376 -- # rpc_cmd bdev_lvol_delete 9d06cb16-1664-4805-9719-2449299b47a5 00:12:53.678 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.678 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.678 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.678 12:32:36 -- lvol/basic.sh@374 -- # for i in $(seq 0 3) 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # jq -r '.[2].name' 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # lvol_uuid=fea50b05-521f-4df0-81d5-29929d90c173 00:12:53.678 12:32:36 -- lvol/basic.sh@376 -- # rpc_cmd bdev_lvol_delete fea50b05-521f-4df0-81d5-29929d90c173 00:12:53.678 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.678 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.678 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.678 12:32:36 -- lvol/basic.sh@374 -- # for i in $(seq 0 3) 00:12:53.678 12:32:36 -- lvol/basic.sh@375 -- # jq -r '.[3].name' 00:12:53.938 12:32:36 -- lvol/basic.sh@375 -- # lvol_uuid=c94a9b2a-4384-492d-8e1f-9d7334828f62 00:12:53.938 12:32:36 -- lvol/basic.sh@376 -- # rpc_cmd bdev_lvol_delete c94a9b2a-4384-492d-8e1f-9d7334828f62 00:12:53.938 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.938 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.938 12:32:36 -- lvol/basic.sh@378 -- # rpc_cmd bdev_get_bdevs 
00:12:53.938 12:32:36 -- lvol/basic.sh@378 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:12:53.938 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.938 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.938 12:32:36 -- lvol/basic.sh@378 -- # lvols='[]' 00:12:53.938 12:32:36 -- lvol/basic.sh@379 -- # jq length 00:12:53.938 12:32:36 -- lvol/basic.sh@379 -- # '[' 0 == 0 ']' 00:12:53.938 12:32:36 -- lvol/basic.sh@381 -- # rpc_cmd bdev_lvol_delete_lvstore -u d8302065-b3b6-48b6-93ee-3d7c6141db70 00:12:53.938 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.938 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.938 12:32:36 -- lvol/basic.sh@382 -- # rpc_cmd bdev_lvol_get_lvstores -u d8302065-b3b6-48b6-93ee-3d7c6141db70 00:12:53.938 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.938 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 request: 00:12:53.938 { 00:12:53.938 "uuid": "d8302065-b3b6-48b6-93ee-3d7c6141db70", 00:12:53.938 "method": "bdev_lvol_get_lvstores", 00:12:53.938 "req_id": 1 00:12:53.938 } 00:12:53.938 Got JSON-RPC error response 00:12:53.938 response: 00:12:53.938 { 00:12:53.938 "code": -19, 00:12:53.938 "message": "No such device" 00:12:53.938 } 00:12:53.938 12:32:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:53.938 12:32:36 -- lvol/basic.sh@383 -- # rpc_cmd bdev_malloc_delete Malloc10 00:12:53.938 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.938 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.197 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.197 12:32:36 -- lvol/basic.sh@384 -- # check_leftover_devices 00:12:54.197 12:32:36 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:54.197 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.197 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.197 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.197 12:32:36 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:54.197 12:32:36 -- lvol/common.sh@26 -- # jq length 00:12:54.456 12:32:36 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:54.456 12:32:36 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:54.456 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.456 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.456 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.456 12:32:36 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:54.456 12:32:36 -- lvol/common.sh@28 -- # jq length 00:12:54.456 ************************************ 00:12:54.456 END TEST test_construct_multi_lvols 00:12:54.456 ************************************ 00:12:54.456 12:32:36 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:54.456 00:12:54.456 real 0m3.851s 00:12:54.456 user 0m2.885s 00:12:54.456 sys 0m0.379s 00:12:54.456 12:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.456 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.456 12:32:36 -- lvol/basic.sh@586 -- # run_test test_construct_lvols_conflict_alias test_construct_lvols_conflict_alias 00:12:54.456 12:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:54.456 12:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.456 12:32:36 -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.456 ************************************ 00:12:54.456 START TEST test_construct_lvols_conflict_alias 00:12:54.456 ************************************ 00:12:54.456 12:32:36 -- common/autotest_common.sh@1104 -- # test_construct_lvols_conflict_alias 00:12:54.456 12:32:36 -- lvol/basic.sh@392 -- # rpc_cmd bdev_malloc_create 128 512 00:12:54.456 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.456 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.456 12:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.456 12:32:36 -- lvol/basic.sh@392 -- # malloc1_name=Malloc11 00:12:54.789 12:32:36 -- lvol/basic.sh@393 -- # rpc_cmd bdev_lvol_create_lvstore Malloc11 lvs_test1 00:12:54.789 12:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.789 12:32:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.789 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.789 12:32:37 -- lvol/basic.sh@393 -- # lvs1_uuid=0b234f16-580f-4317-9479-9c2399a2fba8 00:12:54.789 12:32:37 -- lvol/basic.sh@396 -- # rpc_cmd bdev_lvol_create -l lvs_test1 lvol_test 124 00:12:54.789 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.789 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.789 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.789 12:32:37 -- lvol/basic.sh@396 -- # lvol1_uuid=147b73aa-0618-4052-a707-c212012ac8ea 00:12:54.789 12:32:37 -- lvol/basic.sh@397 -- # rpc_cmd bdev_get_bdevs -b 147b73aa-0618-4052-a707-c212012ac8ea 00:12:54.789 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.789 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.789 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.789 12:32:37 -- lvol/basic.sh@397 -- # lvol1='[ 00:12:54.789 { 00:12:54.789 "name": "147b73aa-0618-4052-a707-c212012ac8ea", 00:12:54.789 "aliases": [ 00:12:54.789 "lvs_test1/lvol_test" 00:12:54.789 ], 00:12:54.789 "product_name": "Logical Volume", 00:12:54.789 "block_size": 512, 00:12:54.789 "num_blocks": 253952, 00:12:54.789 "uuid": "147b73aa-0618-4052-a707-c212012ac8ea", 00:12:54.789 "assigned_rate_limits": { 00:12:54.789 "rw_ios_per_sec": 0, 00:12:54.789 "rw_mbytes_per_sec": 0, 00:12:54.789 "r_mbytes_per_sec": 0, 00:12:54.789 "w_mbytes_per_sec": 0 00:12:54.789 }, 00:12:54.789 "claimed": false, 00:12:54.789 "zoned": false, 00:12:54.789 "supported_io_types": { 00:12:54.789 "read": true, 00:12:54.789 "write": true, 00:12:54.789 "unmap": true, 00:12:54.789 "write_zeroes": true, 00:12:54.789 "flush": false, 00:12:54.789 "reset": true, 00:12:54.789 "compare": false, 00:12:54.789 "compare_and_write": false, 00:12:54.789 "abort": false, 00:12:54.789 "nvme_admin": false, 00:12:54.789 "nvme_io": false 00:12:54.789 }, 00:12:54.789 "memory_domains": [ 00:12:54.789 { 00:12:54.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.789 "dma_device_type": 2 00:12:54.789 } 00:12:54.789 ], 00:12:54.789 "driver_specific": { 00:12:54.789 "lvol": { 00:12:54.789 "lvol_store_uuid": "0b234f16-580f-4317-9479-9c2399a2fba8", 00:12:54.789 "base_bdev": "Malloc11", 00:12:54.789 "thin_provision": false, 00:12:54.789 "snapshot": false, 00:12:54.789 "clone": false, 00:12:54.789 "esnap_clone": false 00:12:54.789 } 00:12:54.789 } 00:12:54.789 } 00:12:54.789 ]' 00:12:54.789 12:32:37 -- lvol/basic.sh@400 -- # malloc2_size_mb=64 00:12:54.789 12:32:37 -- lvol/basic.sh@403 -- # rpc_cmd bdev_malloc_create 64 512 00:12:54.790 12:32:37 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.790 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.790 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.790 12:32:37 -- lvol/basic.sh@403 -- # malloc2_name=Malloc12 00:12:54.790 12:32:37 -- lvol/basic.sh@404 -- # rpc_cmd bdev_lvol_create_lvstore Malloc12 lvs_test2 00:12:54.790 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.790 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.790 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.790 12:32:37 -- lvol/basic.sh@404 -- # lvs2_uuid=0c3e50bd-d00d-4d85-b651-571c9aa9e7a1 00:12:54.790 12:32:37 -- lvol/basic.sh@406 -- # round_down 62 00:12:54.790 12:32:37 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:12:54.790 12:32:37 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:12:54.790 12:32:37 -- lvol/common.sh@36 -- # echo 60 00:12:54.790 12:32:37 -- lvol/basic.sh@406 -- # lvol2_size_mb=60 00:12:54.790 12:32:37 -- lvol/basic.sh@409 -- # rpc_cmd bdev_lvol_create -l lvs_test2 lvol_test 60 00:12:54.790 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.790 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.790 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.790 12:32:37 -- lvol/basic.sh@409 -- # lvol2_uuid=f8a45965-7460-420f-8380-e101e62f3707 00:12:54.790 12:32:37 -- lvol/basic.sh@410 -- # rpc_cmd bdev_get_bdevs -b f8a45965-7460-420f-8380-e101e62f3707 00:12:54.790 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.790 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.790 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.790 12:32:37 -- lvol/basic.sh@410 -- # lvol2='[ 00:12:54.790 { 00:12:54.790 "name": "f8a45965-7460-420f-8380-e101e62f3707", 00:12:54.790 "aliases": [ 00:12:54.790 "lvs_test2/lvol_test" 00:12:54.790 ], 00:12:54.790 "product_name": "Logical Volume", 00:12:54.790 "block_size": 512, 00:12:54.790 "num_blocks": 122880, 00:12:54.790 "uuid": "f8a45965-7460-420f-8380-e101e62f3707", 00:12:54.790 "assigned_rate_limits": { 00:12:54.790 "rw_ios_per_sec": 0, 00:12:54.790 "rw_mbytes_per_sec": 0, 00:12:54.790 "r_mbytes_per_sec": 0, 00:12:54.790 "w_mbytes_per_sec": 0 00:12:54.790 }, 00:12:54.790 "claimed": false, 00:12:54.790 "zoned": false, 00:12:54.790 "supported_io_types": { 00:12:54.790 "read": true, 00:12:54.790 "write": true, 00:12:54.790 "unmap": true, 00:12:54.790 "write_zeroes": true, 00:12:54.790 "flush": false, 00:12:54.790 "reset": true, 00:12:54.790 "compare": false, 00:12:54.790 "compare_and_write": false, 00:12:54.790 "abort": false, 00:12:54.790 "nvme_admin": false, 00:12:54.790 "nvme_io": false 00:12:54.790 }, 00:12:54.790 "memory_domains": [ 00:12:54.790 { 00:12:54.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.790 "dma_device_type": 2 00:12:54.790 } 00:12:54.790 ], 00:12:54.790 "driver_specific": { 00:12:54.790 "lvol": { 00:12:54.790 "lvol_store_uuid": "0c3e50bd-d00d-4d85-b651-571c9aa9e7a1", 00:12:54.790 "base_bdev": "Malloc12", 00:12:54.790 "thin_provision": false, 00:12:54.790 "snapshot": false, 00:12:54.790 "clone": false, 00:12:54.790 "esnap_clone": false 00:12:54.790 } 00:12:54.790 } 00:12:54.790 } 00:12:54.790 ]' 00:12:54.790 12:32:37 -- lvol/basic.sh@412 -- # jq -r '.[0].name' 00:12:54.790 12:32:37 -- lvol/basic.sh@412 -- # '[' 147b73aa-0618-4052-a707-c212012ac8ea = 147b73aa-0618-4052-a707-c212012ac8ea ']' 00:12:54.790 12:32:37 -- lvol/basic.sh@413 -- # jq -r '.[0].uuid' 
00:12:54.790 12:32:37 -- lvol/basic.sh@413 -- # '[' 147b73aa-0618-4052-a707-c212012ac8ea = 147b73aa-0618-4052-a707-c212012ac8ea ']' 00:12:54.790 12:32:37 -- lvol/basic.sh@414 -- # jq -r '.[0].aliases[0]' 00:12:55.048 12:32:37 -- lvol/basic.sh@414 -- # '[' lvs_test1/lvol_test = lvs_test1/lvol_test ']' 00:12:55.048 12:32:37 -- lvol/basic.sh@415 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:12:55.048 12:32:37 -- lvol/basic.sh@415 -- # '[' 0b234f16-580f-4317-9479-9c2399a2fba8 = 0b234f16-580f-4317-9479-9c2399a2fba8 ']' 00:12:55.048 12:32:37 -- lvol/basic.sh@417 -- # jq -r '.[0].name' 00:12:55.048 12:32:37 -- lvol/basic.sh@417 -- # '[' f8a45965-7460-420f-8380-e101e62f3707 = f8a45965-7460-420f-8380-e101e62f3707 ']' 00:12:55.048 12:32:37 -- lvol/basic.sh@418 -- # jq -r '.[0].uuid' 00:12:55.048 12:32:37 -- lvol/basic.sh@418 -- # '[' f8a45965-7460-420f-8380-e101e62f3707 = f8a45965-7460-420f-8380-e101e62f3707 ']' 00:12:55.048 12:32:37 -- lvol/basic.sh@419 -- # jq -r '.[0].aliases[0]' 00:12:55.048 12:32:37 -- lvol/basic.sh@419 -- # '[' lvs_test2/lvol_test = lvs_test2/lvol_test ']' 00:12:55.048 12:32:37 -- lvol/basic.sh@420 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:12:55.049 12:32:37 -- lvol/basic.sh@420 -- # '[' 0c3e50bd-d00d-4d85-b651-571c9aa9e7a1 = 0c3e50bd-d00d-4d85-b651-571c9aa9e7a1 ']' 00:12:55.049 12:32:37 -- lvol/basic.sh@423 -- # rpc_cmd bdev_lvol_delete_lvstore -u 0b234f16-580f-4317-9479-9c2399a2fba8 00:12:55.049 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.049 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:55.307 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.307 12:32:37 -- lvol/basic.sh@424 -- # rpc_cmd bdev_lvol_get_lvstores -u 0b234f16-580f-4317-9479-9c2399a2fba8 00:12:55.307 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.307 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:55.307 request: 00:12:55.307 { 00:12:55.307 "uuid": "0b234f16-580f-4317-9479-9c2399a2fba8", 00:12:55.307 "method": "bdev_lvol_get_lvstores", 00:12:55.307 "req_id": 1 00:12:55.307 } 00:12:55.307 Got JSON-RPC error response 00:12:55.307 response: 00:12:55.307 { 00:12:55.307 "code": -19, 00:12:55.307 "message": "No such device" 00:12:55.307 } 00:12:55.307 12:32:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:55.307 12:32:37 -- lvol/basic.sh@425 -- # rpc_cmd bdev_lvol_delete_lvstore -u 0c3e50bd-d00d-4d85-b651-571c9aa9e7a1 00:12:55.307 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.307 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:55.307 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.307 12:32:37 -- lvol/basic.sh@426 -- # rpc_cmd bdev_lvol_get_lvstores -u 0c3e50bd-d00d-4d85-b651-571c9aa9e7a1 00:12:55.307 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.307 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:55.307 request: 00:12:55.307 { 00:12:55.307 "uuid": "0c3e50bd-d00d-4d85-b651-571c9aa9e7a1", 00:12:55.307 "method": "bdev_lvol_get_lvstores", 00:12:55.307 "req_id": 1 00:12:55.307 } 00:12:55.308 Got JSON-RPC error response 00:12:55.308 response: 00:12:55.308 { 00:12:55.308 "code": -19, 00:12:55.308 "message": "No such device" 00:12:55.308 } 00:12:55.308 12:32:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:55.308 12:32:37 -- lvol/basic.sh@427 -- # rpc_cmd bdev_malloc_delete Malloc11 00:12:55.308 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.308 12:32:37 -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.566 12:32:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.566 12:32:37 -- lvol/basic.sh@428 -- # rpc_cmd bdev_get_bdevs -b Malloc11 00:12:55.566 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.566 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:55.566 [2024-10-01 12:32:37.941544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc11 00:12:55.566 request: 00:12:55.566 { 00:12:55.566 "name": "Malloc11", 00:12:55.566 "method": "bdev_get_bdevs", 00:12:55.566 "req_id": 1 00:12:55.566 } 00:12:55.566 Got JSON-RPC error response 00:12:55.566 response: 00:12:55.566 { 00:12:55.566 "code": -19, 00:12:55.566 "message": "No such device" 00:12:55.566 } 00:12:55.566 12:32:37 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:55.566 12:32:37 -- lvol/basic.sh@429 -- # rpc_cmd bdev_malloc_delete Malloc12 00:12:55.566 12:32:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.566 12:32:37 -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.825 12:32:38 -- lvol/basic.sh@430 -- # check_leftover_devices 00:12:55.825 12:32:38 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:55.825 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.825 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.825 12:32:38 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:55.825 12:32:38 -- lvol/common.sh@26 -- # jq length 00:12:55.825 12:32:38 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:55.825 12:32:38 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:55.825 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.825 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.825 12:32:38 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:55.825 12:32:38 -- lvol/common.sh@28 -- # jq length 00:12:55.825 ************************************ 00:12:55.825 END TEST test_construct_lvols_conflict_alias 00:12:55.825 ************************************ 00:12:55.825 12:32:38 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:55.825 00:12:55.825 real 0m1.379s 00:12:55.825 user 0m0.505s 00:12:55.825 sys 0m0.082s 00:12:55.825 12:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.825 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 12:32:38 -- lvol/basic.sh@587 -- # run_test test_construct_lvol_inexistent_lvs test_construct_lvol_inexistent_lvs 00:12:55.825 12:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:55.825 12:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.825 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 ************************************ 00:12:55.825 START TEST test_construct_lvol_inexistent_lvs 00:12:55.825 ************************************ 00:12:55.825 12:32:38 -- common/autotest_common.sh@1104 -- # test_construct_lvol_inexistent_lvs 00:12:55.825 12:32:38 -- lvol/basic.sh@436 -- # rpc_cmd bdev_malloc_create 128 512 00:12:55.825 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.825 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.084 12:32:38 -- lvol/basic.sh@436 -- # malloc_name=Malloc13 00:12:56.084 
12:32:38 -- lvol/basic.sh@437 -- # rpc_cmd bdev_lvol_create_lvstore Malloc13 lvs_test 00:12:56.084 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.084 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.084 12:32:38 -- lvol/basic.sh@437 -- # lvs_uuid=eda07056-ef93-47df-8330-19b7744a7eb6 00:12:56.084 12:32:38 -- lvol/basic.sh@440 -- # dummy_uuid=00000000-0000-0000-0000-000000000000 00:12:56.084 12:32:38 -- lvol/basic.sh@441 -- # rpc_cmd bdev_lvol_create -u 00000000-0000-0000-0000-000000000000 lvol_test 124 00:12:56.084 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.084 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 request: 00:12:56.084 { 00:12:56.084 "lvol_name": "lvol_test", 00:12:56.084 "size_in_mib": 124, 00:12:56.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.084 "method": "bdev_lvol_create", 00:12:56.084 "req_id": 1 00:12:56.084 } 00:12:56.084 Got JSON-RPC error response 00:12:56.084 response: 00:12:56.084 { 00:12:56.084 "code": -19, 00:12:56.084 "message": "No such device" 00:12:56.084 } 00:12:56.084 12:32:38 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:56.084 12:32:38 -- lvol/basic.sh@443 -- # rpc_cmd bdev_get_bdevs 00:12:56.084 12:32:38 -- lvol/basic.sh@443 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:12:56.084 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.084 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.084 12:32:38 -- lvol/basic.sh@443 -- # lvols='[]' 00:12:56.084 12:32:38 -- lvol/basic.sh@444 -- # jq length 00:12:56.084 12:32:38 -- lvol/basic.sh@444 -- # '[' 0 == 0 ']' 00:12:56.084 12:32:38 -- lvol/basic.sh@447 -- # rpc_cmd bdev_lvol_delete_lvstore -u eda07056-ef93-47df-8330-19b7744a7eb6 00:12:56.084 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.084 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.084 12:32:38 -- lvol/basic.sh@448 -- # rpc_cmd bdev_lvol_get_lvstores -u eda07056-ef93-47df-8330-19b7744a7eb6 00:12:56.084 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.084 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.084 request: 00:12:56.084 { 00:12:56.084 "uuid": "eda07056-ef93-47df-8330-19b7744a7eb6", 00:12:56.084 "method": "bdev_lvol_get_lvstores", 00:12:56.084 "req_id": 1 00:12:56.084 } 00:12:56.084 Got JSON-RPC error response 00:12:56.084 response: 00:12:56.085 { 00:12:56.085 "code": -19, 00:12:56.085 "message": "No such device" 00:12:56.085 } 00:12:56.085 12:32:38 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:56.085 12:32:38 -- lvol/basic.sh@449 -- # rpc_cmd bdev_malloc_delete Malloc13 00:12:56.085 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.085 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.656 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.656 12:32:38 -- lvol/basic.sh@450 -- # check_leftover_devices 00:12:56.656 12:32:38 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:56.656 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.656 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.656 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.656 12:32:38 -- lvol/common.sh@25 -- # 
leftover_bdevs='[]' 00:12:56.656 12:32:38 -- lvol/common.sh@26 -- # jq length 00:12:56.656 12:32:38 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:56.656 12:32:38 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:56.656 12:32:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.656 12:32:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.656 12:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.656 12:32:38 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:56.656 12:32:38 -- lvol/common.sh@28 -- # jq length 00:12:56.656 ************************************ 00:12:56.656 END TEST test_construct_lvol_inexistent_lvs 00:12:56.656 ************************************ 00:12:56.656 12:32:39 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:56.656 00:12:56.656 real 0m0.735s 00:12:56.656 user 0m0.225s 00:12:56.656 sys 0m0.034s 00:12:56.656 12:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.656 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.656 12:32:39 -- lvol/basic.sh@588 -- # run_test test_construct_lvol_full_lvs test_construct_lvol_full_lvs 00:12:56.656 12:32:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:56.656 12:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.656 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.656 ************************************ 00:12:56.656 START TEST test_construct_lvol_full_lvs 00:12:56.656 ************************************ 00:12:56.656 12:32:39 -- common/autotest_common.sh@1104 -- # test_construct_lvol_full_lvs 00:12:56.656 12:32:39 -- lvol/basic.sh@456 -- # rpc_cmd bdev_malloc_create 128 512 00:12:56.656 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.656 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.917 12:32:39 -- lvol/basic.sh@456 -- # malloc_name=Malloc14 00:12:56.917 12:32:39 -- lvol/basic.sh@457 -- # rpc_cmd bdev_lvol_create_lvstore Malloc14 lvs_test 00:12:56.917 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.917 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.917 12:32:39 -- lvol/basic.sh@457 -- # lvs_uuid=a18b84fb-a723-48b3-97a2-8ebdf7dbdf51 00:12:56.917 12:32:39 -- lvol/basic.sh@460 -- # rpc_cmd bdev_lvol_create -l lvs_test lvol_test1 124 00:12:56.917 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.917 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.917 12:32:39 -- lvol/basic.sh@460 -- # lvol1_uuid=6d1aa386-cff6-4fbe-b1ff-f37ca1c97d67 00:12:56.917 12:32:39 -- lvol/basic.sh@461 -- # rpc_cmd bdev_get_bdevs -b 6d1aa386-cff6-4fbe-b1ff-f37ca1c97d67 00:12:56.917 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.917 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.917 12:32:39 -- lvol/basic.sh@461 -- # lvol1='[ 00:12:56.917 { 00:12:56.917 "name": "6d1aa386-cff6-4fbe-b1ff-f37ca1c97d67", 00:12:56.917 "aliases": [ 00:12:56.917 "lvs_test/lvol_test1" 00:12:56.917 ], 00:12:56.917 "product_name": "Logical Volume", 00:12:56.917 "block_size": 512, 00:12:56.917 "num_blocks": 253952, 00:12:56.917 "uuid": "6d1aa386-cff6-4fbe-b1ff-f37ca1c97d67", 00:12:56.917 "assigned_rate_limits": { 00:12:56.917 "rw_ios_per_sec": 
0, 00:12:56.917 "rw_mbytes_per_sec": 0, 00:12:56.917 "r_mbytes_per_sec": 0, 00:12:56.917 "w_mbytes_per_sec": 0 00:12:56.917 }, 00:12:56.917 "claimed": false, 00:12:56.917 "zoned": false, 00:12:56.917 "supported_io_types": { 00:12:56.917 "read": true, 00:12:56.917 "write": true, 00:12:56.917 "unmap": true, 00:12:56.917 "write_zeroes": true, 00:12:56.917 "flush": false, 00:12:56.917 "reset": true, 00:12:56.917 "compare": false, 00:12:56.917 "compare_and_write": false, 00:12:56.917 "abort": false, 00:12:56.917 "nvme_admin": false, 00:12:56.917 "nvme_io": false 00:12:56.917 }, 00:12:56.917 "memory_domains": [ 00:12:56.917 { 00:12:56.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.917 "dma_device_type": 2 00:12:56.917 } 00:12:56.917 ], 00:12:56.917 "driver_specific": { 00:12:56.917 "lvol": { 00:12:56.917 "lvol_store_uuid": "a18b84fb-a723-48b3-97a2-8ebdf7dbdf51", 00:12:56.917 "base_bdev": "Malloc14", 00:12:56.917 "thin_provision": false, 00:12:56.917 "snapshot": false, 00:12:56.917 "clone": false, 00:12:56.917 "esnap_clone": false 00:12:56.917 } 00:12:56.917 } 00:12:56.917 } 00:12:56.917 ]' 00:12:56.917 12:32:39 -- lvol/basic.sh@464 -- # rpc_cmd bdev_lvol_create -l lvs_test lvol_test2 1 00:12:56.917 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.917 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 [2024-10-01 12:32:39.269971] blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 1 (clusters) 00:12:56.917 request: 00:12:56.917 { 00:12:56.917 "lvol_name": "lvol_test2", 00:12:56.917 "size_in_mib": 1, 00:12:56.917 "lvs_name": "lvs_test", 00:12:56.917 "method": "bdev_lvol_create", 00:12:56.917 "req_id": 1 00:12:56.917 } 00:12:56.917 Got JSON-RPC error response 00:12:56.917 response: 00:12:56.917 { 00:12:56.917 "code": -32602, 00:12:56.917 "message": "No space left on device" 00:12:56.917 } 00:12:56.917 12:32:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:56.917 12:32:39 -- lvol/basic.sh@467 -- # rpc_cmd bdev_lvol_delete_lvstore -u a18b84fb-a723-48b3-97a2-8ebdf7dbdf51 00:12:56.917 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.917 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.917 12:32:39 -- lvol/basic.sh@468 -- # rpc_cmd bdev_lvol_get_lvstores -u a18b84fb-a723-48b3-97a2-8ebdf7dbdf51 00:12:56.917 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.917 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:56.917 request: 00:12:56.917 { 00:12:56.917 "uuid": "a18b84fb-a723-48b3-97a2-8ebdf7dbdf51", 00:12:56.917 "method": "bdev_lvol_get_lvstores", 00:12:56.917 "req_id": 1 00:12:56.917 } 00:12:56.917 Got JSON-RPC error response 00:12:56.917 response: 00:12:56.917 { 00:12:56.917 "code": -19, 00:12:56.917 "message": "No such device" 00:12:56.917 } 00:12:56.918 12:32:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:56.918 12:32:39 -- lvol/basic.sh@469 -- # rpc_cmd bdev_malloc_delete Malloc14 00:12:56.918 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.918 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.176 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.176 12:32:39 -- lvol/basic.sh@470 -- # check_leftover_devices 00:12:57.176 12:32:39 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:57.176 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.176 12:32:39 -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.176 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.176 12:32:39 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:57.176 12:32:39 -- lvol/common.sh@26 -- # jq length 00:12:57.176 12:32:39 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:57.176 12:32:39 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:57.176 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.176 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.176 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.176 12:32:39 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:57.176 12:32:39 -- lvol/common.sh@28 -- # jq length 00:12:57.435 ************************************ 00:12:57.435 END TEST test_construct_lvol_full_lvs 00:12:57.435 ************************************ 00:12:57.435 12:32:39 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:57.435 00:12:57.435 real 0m0.657s 00:12:57.435 user 0m0.122s 00:12:57.435 sys 0m0.028s 00:12:57.435 12:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.435 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 12:32:39 -- lvol/basic.sh@589 -- # run_test test_construct_lvol_alias_conflict test_construct_lvol_alias_conflict 00:12:57.435 12:32:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:57.435 12:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:57.435 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 ************************************ 00:12:57.435 START TEST test_construct_lvol_alias_conflict 00:12:57.435 ************************************ 00:12:57.435 12:32:39 -- common/autotest_common.sh@1104 -- # test_construct_lvol_alias_conflict 00:12:57.435 12:32:39 -- lvol/basic.sh@476 -- # rpc_cmd bdev_malloc_create 128 512 00:12:57.435 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.435 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.435 12:32:39 -- lvol/basic.sh@476 -- # malloc_name=Malloc15 00:12:57.435 12:32:39 -- lvol/basic.sh@477 -- # rpc_cmd bdev_lvol_create_lvstore Malloc15 lvs_test 00:12:57.435 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.435 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.435 12:32:39 -- lvol/basic.sh@477 -- # lvs_uuid=7b0cc0f6-0bf4-4223-ba33-2aff40f72e24 00:12:57.435 12:32:39 -- lvol/basic.sh@480 -- # round_down 62 00:12:57.435 12:32:39 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:12:57.435 12:32:39 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:12:57.435 12:32:39 -- lvol/common.sh@36 -- # echo 60 00:12:57.435 12:32:39 -- lvol/basic.sh@480 -- # lvol_size_mb=60 00:12:57.435 12:32:39 -- lvol/basic.sh@481 -- # rpc_cmd bdev_lvol_create -l lvs_test lvol_test 60 00:12:57.435 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.435 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 12:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.435 12:32:39 -- lvol/basic.sh@481 -- # lvol1_uuid=d9e8fdd8-23cd-4afc-9f23-0e0d08489031 00:12:57.435 12:32:39 -- lvol/basic.sh@482 -- # rpc_cmd bdev_get_bdevs -b d9e8fdd8-23cd-4afc-9f23-0e0d08489031 00:12:57.435 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.435 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.694 12:32:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.694 12:32:39 -- lvol/basic.sh@482 -- # lvol1='[ 00:12:57.694 { 00:12:57.694 "name": "d9e8fdd8-23cd-4afc-9f23-0e0d08489031", 00:12:57.694 "aliases": [ 00:12:57.694 "lvs_test/lvol_test" 00:12:57.694 ], 00:12:57.694 "product_name": "Logical Volume", 00:12:57.694 "block_size": 512, 00:12:57.694 "num_blocks": 122880, 00:12:57.694 "uuid": "d9e8fdd8-23cd-4afc-9f23-0e0d08489031", 00:12:57.694 "assigned_rate_limits": { 00:12:57.694 "rw_ios_per_sec": 0, 00:12:57.694 "rw_mbytes_per_sec": 0, 00:12:57.694 "r_mbytes_per_sec": 0, 00:12:57.694 "w_mbytes_per_sec": 0 00:12:57.694 }, 00:12:57.694 "claimed": false, 00:12:57.694 "zoned": false, 00:12:57.694 "supported_io_types": { 00:12:57.694 "read": true, 00:12:57.694 "write": true, 00:12:57.694 "unmap": true, 00:12:57.694 "write_zeroes": true, 00:12:57.694 "flush": false, 00:12:57.694 "reset": true, 00:12:57.694 "compare": false, 00:12:57.694 "compare_and_write": false, 00:12:57.694 "abort": false, 00:12:57.694 "nvme_admin": false, 00:12:57.694 "nvme_io": false 00:12:57.694 }, 00:12:57.694 "memory_domains": [ 00:12:57.694 { 00:12:57.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.694 "dma_device_type": 2 00:12:57.694 } 00:12:57.694 ], 00:12:57.694 "driver_specific": { 00:12:57.694 "lvol": { 00:12:57.694 "lvol_store_uuid": "7b0cc0f6-0bf4-4223-ba33-2aff40f72e24", 00:12:57.694 "base_bdev": "Malloc15", 00:12:57.694 "thin_provision": false, 00:12:57.694 "snapshot": false, 00:12:57.694 "clone": false, 00:12:57.694 "esnap_clone": false 00:12:57.694 } 00:12:57.694 } 00:12:57.694 } 00:12:57.694 ]' 00:12:57.694 12:32:39 -- lvol/basic.sh@485 -- # rpc_cmd bdev_lvol_create -l lvs_test lvol_test 60 00:12:57.694 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.694 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.694 [2024-10-01 12:32:39.980071] lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol_test already exists 00:12:57.694 request: 00:12:57.694 { 00:12:57.694 "lvol_name": "lvol_test", 00:12:57.694 "size_in_mib": 60, 00:12:57.694 "lvs_name": "lvs_test", 00:12:57.694 "method": "bdev_lvol_create", 00:12:57.694 "req_id": 1 00:12:57.694 } 00:12:57.694 Got JSON-RPC error response 00:12:57.694 response: 00:12:57.694 { 00:12:57.694 "code": -17, 00:12:57.694 "message": "File exists" 00:12:57.694 } 00:12:57.694 12:32:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:57.694 12:32:39 -- lvol/basic.sh@488 -- # rpc_cmd bdev_lvol_delete_lvstore -u 7b0cc0f6-0bf4-4223-ba33-2aff40f72e24 00:12:57.694 12:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.694 12:32:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.694 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.694 12:32:40 -- lvol/basic.sh@489 -- # rpc_cmd bdev_lvol_get_lvstores -u 7b0cc0f6-0bf4-4223-ba33-2aff40f72e24 00:12:57.694 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.694 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.694 request: 00:12:57.694 { 00:12:57.694 "uuid": "7b0cc0f6-0bf4-4223-ba33-2aff40f72e24", 00:12:57.694 "method": "bdev_lvol_get_lvstores", 00:12:57.694 "req_id": 1 00:12:57.694 } 00:12:57.694 Got JSON-RPC error response 00:12:57.694 response: 00:12:57.694 { 00:12:57.694 "code": -19, 00:12:57.694 "message": "No such device" 00:12:57.694 } 00:12:57.694 12:32:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:57.694 12:32:40 -- lvol/basic.sh@490 -- # rpc_cmd bdev_malloc_delete Malloc15 00:12:57.694 
12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.694 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.953 12:32:40 -- lvol/basic.sh@491 -- # rpc_cmd bdev_get_bdevs -b Malloc15 00:12:57.953 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.953 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 [2024-10-01 12:32:40.322378] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc15 00:12:57.953 request: 00:12:57.953 { 00:12:57.953 "name": "Malloc15", 00:12:57.953 "method": "bdev_get_bdevs", 00:12:57.953 "req_id": 1 00:12:57.953 } 00:12:57.953 Got JSON-RPC error response 00:12:57.953 response: 00:12:57.953 { 00:12:57.953 "code": -19, 00:12:57.953 "message": "No such device" 00:12:57.953 } 00:12:57.953 12:32:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:57.953 12:32:40 -- lvol/basic.sh@492 -- # check_leftover_devices 00:12:57.953 12:32:40 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:57.953 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.953 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.953 12:32:40 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:57.953 12:32:40 -- lvol/common.sh@26 -- # jq length 00:12:57.953 12:32:40 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:57.953 12:32:40 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:57.953 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.953 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.953 12:32:40 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:57.953 12:32:40 -- lvol/common.sh@28 -- # jq length 00:12:57.953 ************************************ 00:12:57.953 END TEST test_construct_lvol_alias_conflict 00:12:57.953 ************************************ 00:12:57.953 12:32:40 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:57.953 00:12:57.953 real 0m0.667s 00:12:57.953 user 0m0.123s 00:12:57.953 sys 0m0.030s 00:12:57.953 12:32:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.953 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.211 12:32:40 -- lvol/basic.sh@590 -- # run_test test_construct_nested_lvol test_construct_nested_lvol 00:12:58.211 12:32:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:58.211 12:32:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.211 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.211 ************************************ 00:12:58.211 START TEST test_construct_nested_lvol 00:12:58.211 ************************************ 00:12:58.211 12:32:40 -- common/autotest_common.sh@1104 -- # test_construct_nested_lvol 00:12:58.212 12:32:40 -- lvol/basic.sh@498 -- # rpc_cmd bdev_malloc_create 128 512 00:12:58.212 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.212 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.212 12:32:40 -- lvol/basic.sh@498 -- # malloc_name=Malloc16 00:12:58.212 12:32:40 -- lvol/basic.sh@499 -- # rpc_cmd bdev_lvol_create_lvstore Malloc16 lvs_test 00:12:58.212 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.212 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 12:32:40 
-- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.212 12:32:40 -- lvol/basic.sh@499 -- # lvs_uuid=82084fb7-8fd5-4e52-84f0-fc0a9571890d 00:12:58.212 12:32:40 -- lvol/basic.sh@502 -- # rpc_cmd bdev_lvol_create -u 82084fb7-8fd5-4e52-84f0-fc0a9571890d lvol_test 124 00:12:58.212 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.212 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.212 12:32:40 -- lvol/basic.sh@502 -- # lvol_uuid=a80c94bc-90a1-4797-81ae-e21fc747e9c1 00:12:58.212 12:32:40 -- lvol/basic.sh@504 -- # rpc_cmd bdev_lvol_create_lvstore a80c94bc-90a1-4797-81ae-e21fc747e9c1 nested_lvs 00:12:58.212 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.212 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.212 12:32:40 -- lvol/basic.sh@504 -- # nested_lvs_uuid=f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f 00:12:58.212 12:32:40 -- lvol/basic.sh@506 -- # nested_lvol_size_mb=120 00:12:58.212 12:32:40 -- lvol/basic.sh@507 -- # nested_lvol_size=125829120 00:12:58.212 12:32:40 -- lvol/basic.sh@510 -- # rpc_cmd bdev_lvol_create -u f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f nested_lvol1 120 00:12:58.212 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.212 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.212 12:32:40 -- lvol/basic.sh@510 -- # nested_lvol1_uuid=6bf1ac60-3872-4cf2-83c6-785e076850a3 00:12:58.212 12:32:40 -- lvol/basic.sh@511 -- # rpc_cmd bdev_get_bdevs -b 6bf1ac60-3872-4cf2-83c6-785e076850a3 00:12:58.212 12:32:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.212 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.212 12:32:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.212 12:32:40 -- lvol/basic.sh@511 -- # nested_lvol1='[ 00:12:58.212 { 00:12:58.212 "name": "6bf1ac60-3872-4cf2-83c6-785e076850a3", 00:12:58.212 "aliases": [ 00:12:58.212 "nested_lvs/nested_lvol1" 00:12:58.212 ], 00:12:58.212 "product_name": "Logical Volume", 00:12:58.212 "block_size": 512, 00:12:58.212 "num_blocks": 245760, 00:12:58.212 "uuid": "6bf1ac60-3872-4cf2-83c6-785e076850a3", 00:12:58.212 "assigned_rate_limits": { 00:12:58.212 "rw_ios_per_sec": 0, 00:12:58.212 "rw_mbytes_per_sec": 0, 00:12:58.212 "r_mbytes_per_sec": 0, 00:12:58.212 "w_mbytes_per_sec": 0 00:12:58.212 }, 00:12:58.212 "claimed": false, 00:12:58.212 "zoned": false, 00:12:58.212 "supported_io_types": { 00:12:58.212 "read": true, 00:12:58.212 "write": true, 00:12:58.212 "unmap": true, 00:12:58.212 "write_zeroes": true, 00:12:58.212 "flush": false, 00:12:58.212 "reset": true, 00:12:58.212 "compare": false, 00:12:58.212 "compare_and_write": false, 00:12:58.212 "abort": false, 00:12:58.212 "nvme_admin": false, 00:12:58.212 "nvme_io": false 00:12:58.212 }, 00:12:58.212 "memory_domains": [ 00:12:58.212 { 00:12:58.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.212 "dma_device_type": 2 00:12:58.212 } 00:12:58.212 ], 00:12:58.212 "driver_specific": { 00:12:58.212 "lvol": { 00:12:58.212 "lvol_store_uuid": "f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f", 00:12:58.212 "base_bdev": "a80c94bc-90a1-4797-81ae-e21fc747e9c1", 00:12:58.212 "thin_provision": false, 00:12:58.212 "snapshot": false, 00:12:58.212 "clone": false, 00:12:58.212 "esnap_clone": false 00:12:58.212 } 00:12:58.212 } 00:12:58.212 } 00:12:58.212 ]' 
00:12:58.212 12:32:40 -- lvol/basic.sh@513 -- # jq -r '.[0].name' 00:12:58.470 12:32:40 -- lvol/basic.sh@513 -- # '[' 6bf1ac60-3872-4cf2-83c6-785e076850a3 = 6bf1ac60-3872-4cf2-83c6-785e076850a3 ']' 00:12:58.470 12:32:40 -- lvol/basic.sh@514 -- # jq -r '.[0].uuid' 00:12:58.470 12:32:40 -- lvol/basic.sh@514 -- # '[' 6bf1ac60-3872-4cf2-83c6-785e076850a3 = 6bf1ac60-3872-4cf2-83c6-785e076850a3 ']' 00:12:58.470 12:32:40 -- lvol/basic.sh@515 -- # jq -r '.[0].aliases[0]' 00:12:58.470 12:32:40 -- lvol/basic.sh@515 -- # '[' nested_lvs/nested_lvol1 = nested_lvs/nested_lvol1 ']' 00:12:58.470 12:32:40 -- lvol/basic.sh@516 -- # jq -r '.[0].block_size' 00:12:58.470 12:32:40 -- lvol/basic.sh@516 -- # '[' 512 = 512 ']' 00:12:58.470 12:32:40 -- lvol/basic.sh@517 -- # jq -r '.[0].num_blocks' 00:12:58.470 12:32:40 -- lvol/basic.sh@517 -- # '[' 245760 = 245760 ']' 00:12:58.470 12:32:40 -- lvol/basic.sh@518 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:12:58.729 12:32:41 -- lvol/basic.sh@518 -- # '[' f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f = f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f ']' 00:12:58.729 12:32:41 -- lvol/basic.sh@521 -- # rpc_cmd bdev_lvol_create -u f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f nested_lvol2 120 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 [2024-10-01 12:32:41.044502] blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 30 (clusters) 00:12:58.729 request: 00:12:58.729 { 00:12:58.729 "lvol_name": "nested_lvol2", 00:12:58.729 "size_in_mib": 120, 00:12:58.729 "uuid": "f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f", 00:12:58.729 "method": "bdev_lvol_create", 00:12:58.729 "req_id": 1 00:12:58.729 } 00:12:58.729 Got JSON-RPC error response 00:12:58.729 response: 00:12:58.729 { 00:12:58.729 "code": -32602, 00:12:58.729 "message": "No space left on device" 00:12:58.729 } 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@524 -- # rpc_cmd bdev_lvol_delete 6bf1ac60-3872-4cf2-83c6-785e076850a3 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@525 -- # rpc_cmd bdev_get_bdevs -b 6bf1ac60-3872-4cf2-83c6-785e076850a3 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 [2024-10-01 12:32:41.081075] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 6bf1ac60-3872-4cf2-83c6-785e076850a3 00:12:58.729 request: 00:12:58.729 { 00:12:58.729 "name": "6bf1ac60-3872-4cf2-83c6-785e076850a3", 00:12:58.729 "method": "bdev_get_bdevs", 00:12:58.729 "req_id": 1 00:12:58.729 } 00:12:58.729 Got JSON-RPC error response 00:12:58.729 response: 00:12:58.729 { 00:12:58.729 "code": -19, 00:12:58.729 "message": "No such device" 00:12:58.729 } 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@526 -- # rpc_cmd bdev_lvol_delete_lvstore -u f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@527 -- # 
rpc_cmd bdev_lvol_get_lvstores -u f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 request: 00:12:58.729 { 00:12:58.729 "uuid": "f3cd084e-db7a-4ac4-8e4d-78a68f5ed78f", 00:12:58.729 "method": "bdev_lvol_get_lvstores", 00:12:58.729 "req_id": 1 00:12:58.729 } 00:12:58.729 Got JSON-RPC error response 00:12:58.729 response: 00:12:58.729 { 00:12:58.729 "code": -19, 00:12:58.729 "message": "No such device" 00:12:58.729 } 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@528 -- # rpc_cmd bdev_lvol_delete a80c94bc-90a1-4797-81ae-e21fc747e9c1 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@529 -- # rpc_cmd bdev_get_bdevs -b a80c94bc-90a1-4797-81ae-e21fc747e9c1 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 [2024-10-01 12:32:41.139554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: a80c94bc-90a1-4797-81ae-e21fc747e9c1 00:12:58.729 request: 00:12:58.729 { 00:12:58.729 "name": "a80c94bc-90a1-4797-81ae-e21fc747e9c1", 00:12:58.729 "method": "bdev_get_bdevs", 00:12:58.729 "req_id": 1 00:12:58.729 } 00:12:58.729 Got JSON-RPC error response 00:12:58.729 response: 00:12:58.729 { 00:12:58.729 "code": -19, 00:12:58.729 "message": "No such device" 00:12:58.729 } 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@530 -- # rpc_cmd bdev_lvol_delete_lvstore -u 82084fb7-8fd5-4e52-84f0-fc0a9571890d 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.729 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.729 12:32:41 -- lvol/basic.sh@531 -- # rpc_cmd bdev_lvol_get_lvstores -u 82084fb7-8fd5-4e52-84f0-fc0a9571890d 00:12:58.729 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.730 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 request: 00:12:58.730 { 00:12:58.730 "uuid": "82084fb7-8fd5-4e52-84f0-fc0a9571890d", 00:12:58.730 "method": "bdev_lvol_get_lvstores", 00:12:58.730 "req_id": 1 00:12:58.730 } 00:12:58.730 Got JSON-RPC error response 00:12:58.730 response: 00:12:58.730 { 00:12:58.730 "code": -19, 00:12:58.730 "message": "No such device" 00:12:58.730 } 00:12:58.730 12:32:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:58.730 12:32:41 -- lvol/basic.sh@532 -- # rpc_cmd bdev_malloc_delete Malloc16 00:12:58.730 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.730 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.988 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.988 12:32:41 -- lvol/basic.sh@533 -- # check_leftover_devices 00:12:58.988 12:32:41 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:58.988 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.988 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:58.988 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.988 12:32:41 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:58.988 12:32:41 -- 
lvol/common.sh@26 -- # jq length 00:12:59.248 12:32:41 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:59.248 12:32:41 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:59.248 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.248 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.248 12:32:41 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:59.248 12:32:41 -- lvol/common.sh@28 -- # jq length 00:12:59.248 ************************************ 00:12:59.248 END TEST test_construct_nested_lvol 00:12:59.248 ************************************ 00:12:59.248 12:32:41 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:59.248 00:12:59.248 real 0m1.076s 00:12:59.248 user 0m0.411s 00:12:59.248 sys 0m0.074s 00:12:59.248 12:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.248 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 12:32:41 -- lvol/basic.sh@591 -- # run_test test_lvol_list test_lvol_list 00:12:59.248 12:32:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:59.248 12:32:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.248 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 ************************************ 00:12:59.248 START TEST test_lvol_list 00:12:59.248 ************************************ 00:12:59.248 12:32:41 -- common/autotest_common.sh@1104 -- # test_lvol_list 00:12:59.248 12:32:41 -- lvol/basic.sh@539 -- # rpc_cmd bdev_malloc_create 128 512 00:12:59.248 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.248 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.248 12:32:41 -- lvol/basic.sh@539 -- # malloc_name=Malloc17 00:12:59.248 12:32:41 -- lvol/basic.sh@540 -- # rpc_cmd bdev_lvol_create_lvstore Malloc17 lvs_test 00:12:59.248 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.248 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.248 12:32:41 -- lvol/basic.sh@540 -- # lvs_uuid=ca808f6f-ff23-491e-8d92-2630842c1bba 00:12:59.248 12:32:41 -- lvol/basic.sh@543 -- # rpc_cmd bdev_lvol_get_lvols 00:12:59.248 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.248 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.507 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.507 12:32:41 -- lvol/basic.sh@543 -- # lvols='[]' 00:12:59.507 12:32:41 -- lvol/basic.sh@544 -- # jq -r '. 
| length' 00:12:59.507 12:32:41 -- lvol/basic.sh@544 -- # '[' 0 == 0 ']' 00:12:59.507 12:32:41 -- lvol/basic.sh@547 -- # rpc_cmd bdev_lvol_create -u ca808f6f-ff23-491e-8d92-2630842c1bba lvol_test 124 00:12:59.507 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.507 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.507 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.508 12:32:41 -- lvol/basic.sh@547 -- # lvol_uuid=f579b780-103c-4857-8967-1df0673d2993 00:12:59.508 12:32:41 -- lvol/basic.sh@548 -- # rpc_cmd bdev_lvol_get_lvols 00:12:59.508 12:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.508 12:32:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.508 12:32:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.508 12:32:41 -- lvol/basic.sh@548 -- # lvols='[ 00:12:59.508 { 00:12:59.508 "alias": "lvs_test/lvol_test", 00:12:59.508 "uuid": "f579b780-103c-4857-8967-1df0673d2993", 00:12:59.508 "name": "lvol_test", 00:12:59.508 "is_thin_provisioned": false, 00:12:59.508 "is_snapshot": false, 00:12:59.508 "is_clone": false, 00:12:59.508 "is_esnap_clone": false, 00:12:59.508 "is_degraded": false, 00:12:59.508 "lvs": { 00:12:59.508 "name": "lvs_test", 00:12:59.508 "uuid": "ca808f6f-ff23-491e-8d92-2630842c1bba" 00:12:59.508 } 00:12:59.508 } 00:12:59.508 ]' 00:12:59.508 12:32:41 -- lvol/basic.sh@549 -- # jq -r '. | length' 00:12:59.508 12:32:41 -- lvol/basic.sh@549 -- # '[' 1 == 1 ']' 00:12:59.508 12:32:41 -- lvol/basic.sh@550 -- # jq -r '.[0].uuid' 00:12:59.508 12:32:41 -- lvol/basic.sh@550 -- # '[' f579b780-103c-4857-8967-1df0673d2993 == f579b780-103c-4857-8967-1df0673d2993 ']' 00:12:59.508 12:32:41 -- lvol/basic.sh@551 -- # jq -r '.[0].name' 00:12:59.508 12:32:42 -- lvol/basic.sh@551 -- # '[' lvol_test == lvol_test ']' 00:12:59.508 12:32:42 -- lvol/basic.sh@552 -- # jq -r '.[0].alias' 00:12:59.766 12:32:42 -- lvol/basic.sh@552 -- # '[' lvs_test/lvol_test == lvs_test/lvol_test ']' 00:12:59.766 12:32:42 -- lvol/basic.sh@553 -- # jq -r '.[0].lvs.name' 00:12:59.766 12:32:42 -- lvol/basic.sh@553 -- # '[' lvs_test == lvs_test ']' 00:12:59.766 12:32:42 -- lvol/basic.sh@554 -- # jq -r '.[0].lvs.uuid' 00:12:59.766 12:32:42 -- lvol/basic.sh@554 -- # '[' ca808f6f-ff23-491e-8d92-2630842c1bba == ca808f6f-ff23-491e-8d92-2630842c1bba ']' 00:12:59.766 12:32:42 -- lvol/basic.sh@556 -- # rpc_cmd bdev_lvol_delete_lvstore -u ca808f6f-ff23-491e-8d92-2630842c1bba 00:12:59.766 12:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.766 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:12:59.766 12:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.766 12:32:42 -- lvol/basic.sh@557 -- # rpc_cmd bdev_malloc_delete Malloc17 00:12:59.766 12:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.766 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.024 12:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.024 12:32:42 -- lvol/basic.sh@558 -- # check_leftover_devices 00:13:00.025 12:32:42 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:00.025 12:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.025 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.025 12:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.025 12:32:42 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:00.025 12:32:42 -- lvol/common.sh@26 -- # jq length 00:13:00.025 12:32:42 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:00.025 12:32:42 -- 
lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:00.025 12:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.025 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.025 12:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.025 12:32:42 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:00.284 12:32:42 -- lvol/common.sh@28 -- # jq length 00:13:00.284 ************************************ 00:13:00.284 END TEST test_lvol_list 00:13:00.284 ************************************ 00:13:00.284 12:32:42 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:00.284 00:13:00.284 real 0m0.967s 00:13:00.284 user 0m0.439s 00:13:00.284 sys 0m0.056s 00:13:00.284 12:32:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.284 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.284 12:32:42 -- lvol/basic.sh@592 -- # run_test test_sigterm test_sigterm 00:13:00.284 12:32:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:00.284 12:32:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:00.284 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.284 ************************************ 00:13:00.284 START TEST test_sigterm 00:13:00.284 ************************************ 00:13:00.284 12:32:42 -- common/autotest_common.sh@1104 -- # test_sigterm 00:13:00.284 12:32:42 -- lvol/basic.sh@564 -- # rpc_cmd bdev_malloc_create 128 512 00:13:00.284 12:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.284 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.284 12:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.284 12:32:42 -- lvol/basic.sh@564 -- # malloc_name=Malloc18 00:13:00.284 12:32:42 -- lvol/basic.sh@565 -- # rpc_cmd bdev_lvol_create_lvstore Malloc18 lvs_test 00:13:00.284 12:32:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.284 12:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.284 12:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.284 12:32:42 -- lvol/basic.sh@565 -- # lvs_uuid=4d8fa50f-1107-42f5-bdc9-1bdfc950f47e 00:13:00.284 12:32:42 -- lvol/basic.sh@568 -- # killprocess 58025 00:13:00.285 12:32:42 -- common/autotest_common.sh@926 -- # '[' -z 58025 ']' 00:13:00.285 12:32:42 -- common/autotest_common.sh@930 -- # kill -0 58025 00:13:00.285 12:32:42 -- common/autotest_common.sh@931 -- # uname 00:13:00.285 12:32:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:00.285 12:32:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58025 00:13:00.544 killing process with pid 58025 00:13:00.544 12:32:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:00.544 12:32:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:00.544 12:32:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58025' 00:13:00.544 12:32:42 -- common/autotest_common.sh@945 -- # kill 58025 00:13:00.544 12:32:42 -- common/autotest_common.sh@950 -- # wait 58025 00:13:03.080 ************************************ 00:13:03.080 END TEST test_sigterm 00:13:03.080 ************************************ 00:13:03.080 00:13:03.080 real 0m2.451s 00:13:03.080 user 0m33.158s 00:13:03.080 sys 0m4.257s 00:13:03.080 12:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.080 12:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 12:32:45 -- lvol/basic.sh@594 -- # trap - SIGINT SIGTERM EXIT 00:13:03.080 12:32:45 -- lvol/basic.sh@595 -- # ps -p 58025 00:13:03.080 PID TTY TIME 
CMD 00:13:03.080 00:13:03.080 real 0m39.447s 00:13:03.080 user 0m45.369s 00:13:03.080 sys 0m7.536s 00:13:03.080 ************************************ 00:13:03.080 END TEST lvol_basic 00:13:03.080 ************************************ 00:13:03.080 12:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.080 12:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 12:32:45 -- lvol/lvol.sh@15 -- # run_test lvol_resize /home/vagrant/spdk_repo/spdk/test/lvol/resize.sh 00:13:03.080 12:32:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:03.080 12:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.080 12:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 ************************************ 00:13:03.080 START TEST lvol_resize 00:13:03.080 ************************************ 00:13:03.080 12:32:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/resize.sh 00:13:03.080 * Looking for test storage... 00:13:03.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:13:03.080 12:32:45 -- lvol/resize.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:03.080 12:32:45 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:03.080 12:32:45 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:03.080 12:32:45 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:03.080 12:32:45 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:03.080 12:32:45 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:03.080 12:32:45 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:03.080 12:32:45 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:03.080 12:32:45 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:03.080 12:32:45 -- lvol/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:03.080 12:32:45 -- bdev/nbd_common.sh@6 -- # set -e 00:13:03.080 12:32:45 -- lvol/resize.sh@210 -- # modprobe nbd 00:13:03.081 12:32:45 -- lvol/resize.sh@212 -- # spdk_pid=59315 00:13:03.081 12:32:45 -- lvol/resize.sh@213 -- # trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:03.081 12:32:45 -- lvol/resize.sh@214 -- # waitforlisten 59315 00:13:03.081 12:32:45 -- lvol/resize.sh@211 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:03.081 12:32:45 -- common/autotest_common.sh@819 -- # '[' -z 59315 ']' 00:13:03.081 12:32:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.081 12:32:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:03.081 12:32:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.081 12:32:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:03.081 12:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:03.081 [2024-10-01 12:32:45.427706] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
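A minimal sketch of the setup the resize suite traces above, assuming an SPDK checkout at $SPDK_DIR and a target answering on /var/tmp/spdk.sock; the polling loop stands in for the suite's waitforlisten helper, and the sizes simply mirror lvol/common.sh (128 MiB malloc bdev, 512 B blocks, 4 MiB clusters), so treat it as an illustration rather than the test itself.

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}    # assumption: repo location
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

modprobe nbd                              # the traffic tests later expose lvols as /dev/nbdX
"$SPDK_DIR/build/bin/spdk_tgt" &          # same binary the suite launches (pid 59315 above)
spdk_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten

$RPC bdev_malloc_create 128 512           # base bdev, reported back as e.g. Malloc0
$RPC bdev_lvol_create_lvstore Malloc0 lvs_test
$RPC bdev_lvol_create -l lvs_test lvol_test 28        # 28 MiB = 57344 512-byte blocks
$RPC bdev_get_bdevs -b lvs_test/lvol_test # num_blocks and driver_specific.lvol as shown below

kill "$spdk_pid"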
00:13:03.081 [2024-10-01 12:32:45.427860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:13:03.081 [2024-10-01 12:32:45.600609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.352 [2024-10-01 12:32:45.820636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.352 [2024-10-01 12:32:45.821212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.744 12:32:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:04.744 12:32:47 -- common/autotest_common.sh@852 -- # return 0 00:13:04.744 12:32:47 -- lvol/resize.sh@216 -- # run_test test_resize_lvol test_resize_lvol 00:13:04.744 12:32:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:04.744 12:32:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:04.744 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:04.744 ************************************ 00:13:04.744 START TEST test_resize_lvol 00:13:04.744 ************************************ 00:13:04.744 12:32:47 -- common/autotest_common.sh@1104 -- # test_resize_lvol 00:13:04.744 12:32:47 -- lvol/resize.sh@15 -- # rpc_cmd bdev_malloc_create 128 512 00:13:04.744 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.744 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:04.744 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.744 12:32:47 -- lvol/resize.sh@15 -- # malloc_name=Malloc0 00:13:04.744 12:32:47 -- lvol/resize.sh@16 -- # rpc_cmd bdev_lvol_create_lvstore Malloc0 lvs_test 00:13:04.744 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.744 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:04.744 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.744 12:32:47 -- lvol/resize.sh@16 -- # lvs_uuid=e7e6277b-0335-4077-bc43-744f10d87b86 00:13:04.744 12:32:47 -- lvol/resize.sh@19 -- # round_down 31 00:13:04.744 12:32:47 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:04.744 12:32:47 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:04.744 12:32:47 -- lvol/common.sh@36 -- # echo 28 00:13:04.744 12:32:47 -- lvol/resize.sh@19 -- # lvol_size_mb=28 00:13:04.744 12:32:47 -- lvol/resize.sh@20 -- # lvol_size=29360128 00:13:04.744 12:32:47 -- lvol/resize.sh@23 -- # rpc_cmd bdev_lvol_create -u e7e6277b-0335-4077-bc43-744f10d87b86 lvol_test 28 00:13:04.744 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.744 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:04.744 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.744 12:32:47 -- lvol/resize.sh@23 -- # lvol_uuid=39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:04.744 12:32:47 -- lvol/resize.sh@24 -- # rpc_cmd bdev_get_bdevs -b 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:04.744 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.744 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.004 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.004 12:32:47 -- lvol/resize.sh@24 -- # lvol='[ 00:13:05.004 { 00:13:05.004 "name": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.004 "aliases": [ 00:13:05.004 "lvs_test/lvol_test" 00:13:05.004 ], 00:13:05.004 "product_name": "Logical Volume", 00:13:05.004 "block_size": 512, 00:13:05.004 "num_blocks": 57344, 
00:13:05.004 "uuid": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.004 "assigned_rate_limits": { 00:13:05.004 "rw_ios_per_sec": 0, 00:13:05.004 "rw_mbytes_per_sec": 0, 00:13:05.004 "r_mbytes_per_sec": 0, 00:13:05.004 "w_mbytes_per_sec": 0 00:13:05.004 }, 00:13:05.004 "claimed": false, 00:13:05.004 "zoned": false, 00:13:05.004 "supported_io_types": { 00:13:05.004 "read": true, 00:13:05.004 "write": true, 00:13:05.004 "unmap": true, 00:13:05.004 "write_zeroes": true, 00:13:05.004 "flush": false, 00:13:05.004 "reset": true, 00:13:05.004 "compare": false, 00:13:05.004 "compare_and_write": false, 00:13:05.004 "abort": false, 00:13:05.004 "nvme_admin": false, 00:13:05.004 "nvme_io": false 00:13:05.004 }, 00:13:05.004 "memory_domains": [ 00:13:05.004 { 00:13:05.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.004 "dma_device_type": 2 00:13:05.004 } 00:13:05.004 ], 00:13:05.004 "driver_specific": { 00:13:05.004 "lvol": { 00:13:05.004 "lvol_store_uuid": "e7e6277b-0335-4077-bc43-744f10d87b86", 00:13:05.004 "base_bdev": "Malloc0", 00:13:05.004 "thin_provision": false, 00:13:05.004 "snapshot": false, 00:13:05.004 "clone": false, 00:13:05.004 "esnap_clone": false 00:13:05.004 } 00:13:05.004 } 00:13:05.004 } 00:13:05.004 ]' 00:13:05.004 12:32:47 -- lvol/resize.sh@25 -- # jq -r '.[0].name' 00:13:05.004 12:32:47 -- lvol/resize.sh@25 -- # '[' 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 = 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 ']' 00:13:05.004 12:32:47 -- lvol/resize.sh@26 -- # jq -r '.[0].uuid' 00:13:05.004 12:32:47 -- lvol/resize.sh@26 -- # '[' 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 = 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 ']' 00:13:05.004 12:32:47 -- lvol/resize.sh@27 -- # jq -r '.[0].aliases[0]' 00:13:05.004 12:32:47 -- lvol/resize.sh@27 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:13:05.004 12:32:47 -- lvol/resize.sh@28 -- # jq -r '.[0].block_size' 00:13:05.004 12:32:47 -- lvol/resize.sh@28 -- # '[' 512 = 512 ']' 00:13:05.004 12:32:47 -- lvol/resize.sh@29 -- # jq -r '.[0].num_blocks' 00:13:05.264 12:32:47 -- lvol/resize.sh@29 -- # '[' 57344 = 57344 ']' 00:13:05.264 12:32:47 -- lvol/resize.sh@32 -- # lvol_size_mb=56 00:13:05.264 12:32:47 -- lvol/resize.sh@33 -- # lvol_size=58720256 00:13:05.264 12:32:47 -- lvol/resize.sh@34 -- # rpc_cmd bdev_lvol_resize 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 56 00:13:05.264 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.264 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.264 12:32:47 -- lvol/resize.sh@35 -- # rpc_cmd bdev_get_bdevs -b 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:05.264 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.264 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.264 12:32:47 -- lvol/resize.sh@35 -- # lvol='[ 00:13:05.264 { 00:13:05.264 "name": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.264 "aliases": [ 00:13:05.264 "lvs_test/lvol_test" 00:13:05.264 ], 00:13:05.264 "product_name": "Logical Volume", 00:13:05.264 "block_size": 512, 00:13:05.264 "num_blocks": 114688, 00:13:05.264 "uuid": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.264 "assigned_rate_limits": { 00:13:05.264 "rw_ios_per_sec": 0, 00:13:05.264 "rw_mbytes_per_sec": 0, 00:13:05.264 "r_mbytes_per_sec": 0, 00:13:05.264 "w_mbytes_per_sec": 0 00:13:05.264 }, 00:13:05.264 "claimed": false, 00:13:05.264 "zoned": false, 00:13:05.264 "supported_io_types": { 
00:13:05.264 "read": true, 00:13:05.264 "write": true, 00:13:05.264 "unmap": true, 00:13:05.264 "write_zeroes": true, 00:13:05.264 "flush": false, 00:13:05.264 "reset": true, 00:13:05.264 "compare": false, 00:13:05.264 "compare_and_write": false, 00:13:05.264 "abort": false, 00:13:05.264 "nvme_admin": false, 00:13:05.264 "nvme_io": false 00:13:05.264 }, 00:13:05.264 "memory_domains": [ 00:13:05.264 { 00:13:05.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.264 "dma_device_type": 2 00:13:05.264 } 00:13:05.264 ], 00:13:05.264 "driver_specific": { 00:13:05.264 "lvol": { 00:13:05.264 "lvol_store_uuid": "e7e6277b-0335-4077-bc43-744f10d87b86", 00:13:05.264 "base_bdev": "Malloc0", 00:13:05.264 "thin_provision": false, 00:13:05.264 "snapshot": false, 00:13:05.264 "clone": false, 00:13:05.264 "esnap_clone": false 00:13:05.264 } 00:13:05.264 } 00:13:05.264 } 00:13:05.264 ]' 00:13:05.264 12:32:47 -- lvol/resize.sh@36 -- # jq -r '.[0].num_blocks' 00:13:05.264 12:32:47 -- lvol/resize.sh@36 -- # '[' 114688 = 114688 ']' 00:13:05.264 12:32:47 -- lvol/resize.sh@39 -- # lvol_size_mb=112 00:13:05.264 12:32:47 -- lvol/resize.sh@40 -- # lvol_size=117440512 00:13:05.264 12:32:47 -- lvol/resize.sh@41 -- # rpc_cmd bdev_lvol_resize lvs_test/lvol_test 112 00:13:05.264 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.264 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.264 12:32:47 -- lvol/resize.sh@42 -- # rpc_cmd bdev_get_bdevs -b 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:05.264 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.264 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.264 12:32:47 -- lvol/resize.sh@42 -- # lvol='[ 00:13:05.264 { 00:13:05.264 "name": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.265 "aliases": [ 00:13:05.265 "lvs_test/lvol_test" 00:13:05.265 ], 00:13:05.265 "product_name": "Logical Volume", 00:13:05.265 "block_size": 512, 00:13:05.265 "num_blocks": 229376, 00:13:05.265 "uuid": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.265 "assigned_rate_limits": { 00:13:05.265 "rw_ios_per_sec": 0, 00:13:05.265 "rw_mbytes_per_sec": 0, 00:13:05.265 "r_mbytes_per_sec": 0, 00:13:05.265 "w_mbytes_per_sec": 0 00:13:05.265 }, 00:13:05.265 "claimed": false, 00:13:05.265 "zoned": false, 00:13:05.265 "supported_io_types": { 00:13:05.265 "read": true, 00:13:05.265 "write": true, 00:13:05.265 "unmap": true, 00:13:05.265 "write_zeroes": true, 00:13:05.265 "flush": false, 00:13:05.265 "reset": true, 00:13:05.265 "compare": false, 00:13:05.265 "compare_and_write": false, 00:13:05.265 "abort": false, 00:13:05.265 "nvme_admin": false, 00:13:05.265 "nvme_io": false 00:13:05.265 }, 00:13:05.265 "memory_domains": [ 00:13:05.265 { 00:13:05.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.265 "dma_device_type": 2 00:13:05.265 } 00:13:05.265 ], 00:13:05.265 "driver_specific": { 00:13:05.265 "lvol": { 00:13:05.265 "lvol_store_uuid": "e7e6277b-0335-4077-bc43-744f10d87b86", 00:13:05.265 "base_bdev": "Malloc0", 00:13:05.265 "thin_provision": false, 00:13:05.265 "snapshot": false, 00:13:05.265 "clone": false, 00:13:05.265 "esnap_clone": false 00:13:05.265 } 00:13:05.265 } 00:13:05.265 } 00:13:05.265 ]' 00:13:05.265 12:32:47 -- lvol/resize.sh@43 -- # jq -r '.[0].num_blocks' 00:13:05.265 12:32:47 -- lvol/resize.sh@43 -- # '[' 229376 = 229376 ']' 00:13:05.265 12:32:47 -- lvol/resize.sh@46 -- # 
lvol_size_mb=0 00:13:05.265 12:32:47 -- lvol/resize.sh@47 -- # lvol_size=0 00:13:05.265 12:32:47 -- lvol/resize.sh@48 -- # rpc_cmd bdev_lvol_resize lvs_test/lvol_test 0 00:13:05.265 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.265 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.265 [2024-10-01 12:32:47.712214] vbdev_lvol_rpc.c: 875:rpc_bdev_lvol_resize: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:13:05.265 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.265 12:32:47 -- lvol/resize.sh@49 -- # rpc_cmd bdev_get_bdevs -b 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:05.265 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.265 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.265 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.265 12:32:47 -- lvol/resize.sh@49 -- # lvol='[ 00:13:05.265 { 00:13:05.265 "name": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.265 "aliases": [ 00:13:05.265 "lvs_test/lvol_test" 00:13:05.265 ], 00:13:05.265 "product_name": "Logical Volume", 00:13:05.265 "block_size": 512, 00:13:05.265 "num_blocks": 0, 00:13:05.265 "uuid": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.265 "assigned_rate_limits": { 00:13:05.265 "rw_ios_per_sec": 0, 00:13:05.265 "rw_mbytes_per_sec": 0, 00:13:05.265 "r_mbytes_per_sec": 0, 00:13:05.265 "w_mbytes_per_sec": 0 00:13:05.265 }, 00:13:05.265 "claimed": false, 00:13:05.265 "zoned": false, 00:13:05.265 "supported_io_types": { 00:13:05.265 "read": true, 00:13:05.265 "write": true, 00:13:05.265 "unmap": true, 00:13:05.265 "write_zeroes": true, 00:13:05.265 "flush": false, 00:13:05.265 "reset": true, 00:13:05.265 "compare": false, 00:13:05.265 "compare_and_write": false, 00:13:05.265 "abort": false, 00:13:05.265 "nvme_admin": false, 00:13:05.265 "nvme_io": false 00:13:05.265 }, 00:13:05.265 "memory_domains": [ 00:13:05.265 { 00:13:05.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.265 "dma_device_type": 2 00:13:05.265 } 00:13:05.265 ], 00:13:05.265 "driver_specific": { 00:13:05.265 "lvol": { 00:13:05.265 "lvol_store_uuid": "e7e6277b-0335-4077-bc43-744f10d87b86", 00:13:05.265 "base_bdev": "Malloc0", 00:13:05.265 "thin_provision": false, 00:13:05.265 "snapshot": false, 00:13:05.265 "clone": false, 00:13:05.265 "esnap_clone": false 00:13:05.265 } 00:13:05.265 } 00:13:05.265 } 00:13:05.265 ]' 00:13:05.265 12:32:47 -- lvol/resize.sh@50 -- # jq -r '.[0].num_blocks' 00:13:05.524 12:32:47 -- lvol/resize.sh@50 -- # '[' 0 = 0 ']' 00:13:05.524 12:32:47 -- lvol/resize.sh@53 -- # rpc_cmd bdev_lvol_delete 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:05.524 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.524 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.524 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.524 12:32:47 -- lvol/resize.sh@54 -- # rpc_cmd bdev_get_bdevs -b 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:05.524 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.524 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.524 [2024-10-01 12:32:47.812944] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 39bf8e4b-eaaf-4555-b442-ae86c0c6a959 00:13:05.524 request: 00:13:05.524 { 00:13:05.524 "name": "39bf8e4b-eaaf-4555-b442-ae86c0c6a959", 00:13:05.524 "method": "bdev_get_bdevs", 00:13:05.524 "req_id": 1 00:13:05.524 } 00:13:05.524 Got JSON-RPC error response 
00:13:05.524 response: 00:13:05.524 { 00:13:05.524 "code": -19, 00:13:05.524 "message": "No such device" 00:13:05.524 } 00:13:05.524 12:32:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:05.524 12:32:47 -- lvol/resize.sh@55 -- # rpc_cmd bdev_lvol_delete_lvstore -u e7e6277b-0335-4077-bc43-744f10d87b86 00:13:05.524 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.524 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.524 12:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.524 12:32:47 -- lvol/resize.sh@56 -- # rpc_cmd bdev_lvol_get_lvstores -u e7e6277b-0335-4077-bc43-744f10d87b86 00:13:05.524 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.524 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.524 request: 00:13:05.524 { 00:13:05.524 "uuid": "e7e6277b-0335-4077-bc43-744f10d87b86", 00:13:05.524 "method": "bdev_lvol_get_lvstores", 00:13:05.524 "req_id": 1 00:13:05.524 } 00:13:05.524 Got JSON-RPC error response 00:13:05.524 response: 00:13:05.524 { 00:13:05.524 "code": -19, 00:13:05.524 "message": "No such device" 00:13:05.524 } 00:13:05.524 12:32:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:05.524 12:32:47 -- lvol/resize.sh@57 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:05.524 12:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.524 12:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 ************************************ 00:13:05.784 END TEST test_resize_lvol 00:13:05.784 ************************************ 00:13:05.784 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.784 00:13:05.784 real 0m0.994s 00:13:05.784 user 0m0.423s 00:13:05.784 sys 0m0.060s 00:13:05.784 12:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.784 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 12:32:48 -- lvol/resize.sh@217 -- # run_test test_resize_lvol_negative test_resize_lvol_negative 00:13:05.784 12:32:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:05.784 12:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.784 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 ************************************ 00:13:05.784 START TEST test_resize_lvol_negative 00:13:05.784 ************************************ 00:13:05.784 12:32:48 -- common/autotest_common.sh@1104 -- # test_resize_lvol_negative 00:13:05.784 12:32:48 -- lvol/resize.sh@65 -- # rpc_cmd bdev_malloc_create 128 512 00:13:05.784 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.784 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.784 12:32:48 -- lvol/resize.sh@65 -- # malloc_name=Malloc1 00:13:05.784 12:32:48 -- lvol/resize.sh@66 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test 00:13:05.784 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.784 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.784 12:32:48 -- lvol/resize.sh@66 -- # lvs_uuid=79891217-2e36-4721-b1e6-7aa4865e51ac 00:13:05.784 12:32:48 -- lvol/resize.sh@69 -- # rpc_cmd bdev_lvol_create -u 79891217-2e36-4721-b1e6-7aa4865e51ac lvol_test 124 00:13:05.784 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.784 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:13:05.784 12:32:48 -- lvol/resize.sh@69 -- # lvol_uuid=8123bf20-0133-43fa-8d46-757088833ed7 00:13:05.784 12:32:48 -- lvol/resize.sh@72 -- # dummy_uuid=00000000-0000-0000-0000-000000000000 00:13:05.784 12:32:48 -- lvol/resize.sh@73 -- # rpc_cmd bdev_lvol_resize 00000000-0000-0000-0000-000000000000 0 00:13:05.784 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.784 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.044 [2024-10-01 12:32:48.312114] vbdev_lvol_rpc.c: 875:rpc_bdev_lvol_resize: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:13:06.044 [2024-10-01 12:32:48.312182] vbdev_lvol_rpc.c: 881:rpc_bdev_lvol_resize: *ERROR*: no bdev for provided name 00000000-0000-0000-0000-000000000000 00:13:06.044 request: 00:13:06.044 { 00:13:06.044 "name": "00000000-0000-0000-0000-000000000000", 00:13:06.044 "size_in_mib": 0, 00:13:06.044 "method": "bdev_lvol_resize", 00:13:06.044 "req_id": 1 00:13:06.044 } 00:13:06.044 Got JSON-RPC error response 00:13:06.044 response: 00:13:06.044 { 00:13:06.044 "code": -19, 00:13:06.044 "message": "No such device" 00:13:06.044 } 00:13:06.044 12:32:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:06.044 12:32:48 -- lvol/resize.sh@75 -- # rpc_cmd bdev_get_bdevs -b 8123bf20-0133-43fa-8d46-757088833ed7 00:13:06.044 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.044 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.044 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.044 12:32:48 -- lvol/resize.sh@75 -- # lvol='[ 00:13:06.044 { 00:13:06.044 "name": "8123bf20-0133-43fa-8d46-757088833ed7", 00:13:06.044 "aliases": [ 00:13:06.044 "lvs_test/lvol_test" 00:13:06.044 ], 00:13:06.044 "product_name": "Logical Volume", 00:13:06.044 "block_size": 512, 00:13:06.044 "num_blocks": 253952, 00:13:06.044 "uuid": "8123bf20-0133-43fa-8d46-757088833ed7", 00:13:06.044 "assigned_rate_limits": { 00:13:06.044 "rw_ios_per_sec": 0, 00:13:06.044 "rw_mbytes_per_sec": 0, 00:13:06.044 "r_mbytes_per_sec": 0, 00:13:06.044 "w_mbytes_per_sec": 0 00:13:06.044 }, 00:13:06.044 "claimed": false, 00:13:06.044 "zoned": false, 00:13:06.044 "supported_io_types": { 00:13:06.044 "read": true, 00:13:06.044 "write": true, 00:13:06.044 "unmap": true, 00:13:06.044 "write_zeroes": true, 00:13:06.044 "flush": false, 00:13:06.044 "reset": true, 00:13:06.044 "compare": false, 00:13:06.044 "compare_and_write": false, 00:13:06.044 "abort": false, 00:13:06.044 "nvme_admin": false, 00:13:06.044 "nvme_io": false 00:13:06.044 }, 00:13:06.044 "memory_domains": [ 00:13:06.044 { 00:13:06.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.044 "dma_device_type": 2 00:13:06.044 } 00:13:06.044 ], 00:13:06.044 "driver_specific": { 00:13:06.044 "lvol": { 00:13:06.044 "lvol_store_uuid": "79891217-2e36-4721-b1e6-7aa4865e51ac", 00:13:06.044 "base_bdev": "Malloc1", 00:13:06.044 "thin_provision": false, 00:13:06.044 "snapshot": false, 00:13:06.044 "clone": false, 00:13:06.044 "esnap_clone": false 00:13:06.044 } 00:13:06.044 } 00:13:06.044 } 00:13:06.044 ]' 00:13:06.044 12:32:48 -- lvol/resize.sh@76 -- # jq -r '.[0].num_blocks' 00:13:06.044 12:32:48 -- lvol/resize.sh@76 -- # '[' 253952 = 253952 ']' 00:13:06.044 12:32:48 -- lvol/resize.sh@79 -- # rpc_cmd bdev_lvol_resize 8123bf20-0133-43fa-8d46-757088833ed7 128 00:13:06.044 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.044 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.045 
[2024-10-01 12:32:48.396202] blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:13:06.045 [2024-10-01 12:32:48.396262] vbdev_lvol.c:1366:_vbdev_lvol_resize_cb: *ERROR*: CB function for bdev lvol lvol_test receive error no: -28. 00:13:06.045 request: 00:13:06.045 { 00:13:06.045 "name": "8123bf20-0133-43fa-8d46-757088833ed7", 00:13:06.045 "size_in_mib": 128, 00:13:06.045 "method": "bdev_lvol_resize", 00:13:06.045 "req_id": 1 00:13:06.045 } 00:13:06.045 Got JSON-RPC error response 00:13:06.045 response: 00:13:06.045 { 00:13:06.045 "code": -32602, 00:13:06.045 "message": "No space left on device" 00:13:06.045 } 00:13:06.045 12:32:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:06.045 12:32:48 -- lvol/resize.sh@81 -- # rpc_cmd bdev_get_bdevs -b 8123bf20-0133-43fa-8d46-757088833ed7 00:13:06.045 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.045 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.045 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.045 12:32:48 -- lvol/resize.sh@81 -- # lvol='[ 00:13:06.045 { 00:13:06.045 "name": "8123bf20-0133-43fa-8d46-757088833ed7", 00:13:06.045 "aliases": [ 00:13:06.045 "lvs_test/lvol_test" 00:13:06.045 ], 00:13:06.045 "product_name": "Logical Volume", 00:13:06.045 "block_size": 512, 00:13:06.045 "num_blocks": 253952, 00:13:06.045 "uuid": "8123bf20-0133-43fa-8d46-757088833ed7", 00:13:06.045 "assigned_rate_limits": { 00:13:06.045 "rw_ios_per_sec": 0, 00:13:06.045 "rw_mbytes_per_sec": 0, 00:13:06.045 "r_mbytes_per_sec": 0, 00:13:06.045 "w_mbytes_per_sec": 0 00:13:06.045 }, 00:13:06.045 "claimed": false, 00:13:06.045 "zoned": false, 00:13:06.045 "supported_io_types": { 00:13:06.045 "read": true, 00:13:06.045 "write": true, 00:13:06.045 "unmap": true, 00:13:06.045 "write_zeroes": true, 00:13:06.045 "flush": false, 00:13:06.045 "reset": true, 00:13:06.045 "compare": false, 00:13:06.045 "compare_and_write": false, 00:13:06.045 "abort": false, 00:13:06.045 "nvme_admin": false, 00:13:06.045 "nvme_io": false 00:13:06.045 }, 00:13:06.045 "memory_domains": [ 00:13:06.045 { 00:13:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.045 "dma_device_type": 2 00:13:06.045 } 00:13:06.045 ], 00:13:06.045 "driver_specific": { 00:13:06.045 "lvol": { 00:13:06.045 "lvol_store_uuid": "79891217-2e36-4721-b1e6-7aa4865e51ac", 00:13:06.045 "base_bdev": "Malloc1", 00:13:06.045 "thin_provision": false, 00:13:06.045 "snapshot": false, 00:13:06.045 "clone": false, 00:13:06.045 "esnap_clone": false 00:13:06.045 } 00:13:06.045 } 00:13:06.045 } 00:13:06.045 ]' 00:13:06.045 12:32:48 -- lvol/resize.sh@82 -- # jq -r '.[0].num_blocks' 00:13:06.045 12:32:48 -- lvol/resize.sh@82 -- # '[' 253952 = 253952 ']' 00:13:06.045 12:32:48 -- lvol/resize.sh@85 -- # rpc_cmd bdev_lvol_delete 8123bf20-0133-43fa-8d46-757088833ed7 00:13:06.045 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.045 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.045 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.045 12:32:48 -- lvol/resize.sh@86 -- # rpc_cmd bdev_get_bdevs -b 8123bf20-0133-43fa-8d46-757088833ed7 00:13:06.045 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.045 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.045 [2024-10-01 12:32:48.502366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 8123bf20-0133-43fa-8d46-757088833ed7 00:13:06.045 request: 00:13:06.045 { 00:13:06.045 "name": 
"8123bf20-0133-43fa-8d46-757088833ed7", 00:13:06.045 "method": "bdev_get_bdevs", 00:13:06.045 "req_id": 1 00:13:06.045 } 00:13:06.045 Got JSON-RPC error response 00:13:06.045 response: 00:13:06.045 { 00:13:06.045 "code": -19, 00:13:06.045 "message": "No such device" 00:13:06.045 } 00:13:06.045 12:32:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:06.045 12:32:48 -- lvol/resize.sh@87 -- # rpc_cmd bdev_lvol_delete_lvstore -u 79891217-2e36-4721-b1e6-7aa4865e51ac 00:13:06.045 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.045 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.045 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.045 12:32:48 -- lvol/resize.sh@88 -- # rpc_cmd bdev_lvol_get_lvstores -u 79891217-2e36-4721-b1e6-7aa4865e51ac 00:13:06.045 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.045 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.045 request: 00:13:06.045 { 00:13:06.045 "uuid": "79891217-2e36-4721-b1e6-7aa4865e51ac", 00:13:06.045 "method": "bdev_lvol_get_lvstores", 00:13:06.045 "req_id": 1 00:13:06.045 } 00:13:06.045 Got JSON-RPC error response 00:13:06.045 response: 00:13:06.045 { 00:13:06.045 "code": -19, 00:13:06.045 "message": "No such device" 00:13:06.045 } 00:13:06.045 12:32:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:06.045 12:32:48 -- lvol/resize.sh@89 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:06.045 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.045 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.305 ************************************ 00:13:06.305 END TEST test_resize_lvol_negative 00:13:06.305 ************************************ 00:13:06.305 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.305 00:13:06.305 real 0m0.634s 00:13:06.305 user 0m0.130s 00:13:06.305 sys 0m0.029s 00:13:06.305 12:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.305 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 12:32:48 -- lvol/resize.sh@218 -- # run_test test_resize_lvol_with_io_traffic test_resize_lvol_with_io_traffic 00:13:06.567 12:32:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:06.567 12:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.567 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 ************************************ 00:13:06.567 START TEST test_resize_lvol_with_io_traffic 00:13:06.567 ************************************ 00:13:06.567 12:32:48 -- common/autotest_common.sh@1104 -- # test_resize_lvol_with_io_traffic 00:13:06.567 12:32:48 -- lvol/resize.sh@95 -- # rpc_cmd bdev_malloc_create 128 512 00:13:06.567 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.567 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.567 12:32:48 -- lvol/resize.sh@95 -- # malloc_name=Malloc2 00:13:06.567 12:32:48 -- lvol/resize.sh@96 -- # rpc_cmd bdev_lvol_create_lvstore Malloc2 lvs_test 00:13:06.567 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.567 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 12:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.567 12:32:48 -- lvol/resize.sh@96 -- # lvs_uuid=ecfe2ad3-2612-45ae-8a20-306056d2c4de 00:13:06.567 12:32:48 -- lvol/resize.sh@99 -- # round_down 62 00:13:06.567 12:32:48 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 
00:13:06.567 12:32:48 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:06.567 12:32:48 -- lvol/common.sh@36 -- # echo 60 00:13:06.567 12:32:48 -- lvol/resize.sh@99 -- # lvol_size_mb=60 00:13:06.567 12:32:48 -- lvol/resize.sh@100 -- # lvol_size=62914560 00:13:06.567 12:32:48 -- lvol/resize.sh@103 -- # rpc_cmd bdev_lvol_create -u ecfe2ad3-2612-45ae-8a20-306056d2c4de lvol_test 60 00:13:06.567 12:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.567 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 12:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.567 12:32:49 -- lvol/resize.sh@103 -- # lvol_uuid=536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:06.567 12:32:49 -- lvol/resize.sh@104 -- # rpc_cmd bdev_get_bdevs -b 536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:06.567 12:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.567 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 12:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.568 12:32:49 -- lvol/resize.sh@104 -- # lvol='[ 00:13:06.568 { 00:13:06.568 "name": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:06.568 "aliases": [ 00:13:06.568 "lvs_test/lvol_test" 00:13:06.568 ], 00:13:06.568 "product_name": "Logical Volume", 00:13:06.568 "block_size": 512, 00:13:06.568 "num_blocks": 122880, 00:13:06.568 "uuid": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:06.568 "assigned_rate_limits": { 00:13:06.568 "rw_ios_per_sec": 0, 00:13:06.568 "rw_mbytes_per_sec": 0, 00:13:06.568 "r_mbytes_per_sec": 0, 00:13:06.568 "w_mbytes_per_sec": 0 00:13:06.568 }, 00:13:06.568 "claimed": false, 00:13:06.568 "zoned": false, 00:13:06.568 "supported_io_types": { 00:13:06.568 "read": true, 00:13:06.568 "write": true, 00:13:06.568 "unmap": true, 00:13:06.568 "write_zeroes": true, 00:13:06.568 "flush": false, 00:13:06.568 "reset": true, 00:13:06.568 "compare": false, 00:13:06.568 "compare_and_write": false, 00:13:06.568 "abort": false, 00:13:06.568 "nvme_admin": false, 00:13:06.568 "nvme_io": false 00:13:06.568 }, 00:13:06.568 "memory_domains": [ 00:13:06.568 { 00:13:06.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.568 "dma_device_type": 2 00:13:06.568 } 00:13:06.568 ], 00:13:06.568 "driver_specific": { 00:13:06.568 "lvol": { 00:13:06.568 "lvol_store_uuid": "ecfe2ad3-2612-45ae-8a20-306056d2c4de", 00:13:06.568 "base_bdev": "Malloc2", 00:13:06.568 "thin_provision": false, 00:13:06.568 "snapshot": false, 00:13:06.568 "clone": false, 00:13:06.568 "esnap_clone": false 00:13:06.568 } 00:13:06.568 } 00:13:06.568 } 00:13:06.568 ]' 00:13:06.568 12:32:49 -- lvol/resize.sh@105 -- # jq -r '.[0].name' 00:13:06.568 12:32:49 -- lvol/resize.sh@105 -- # '[' 536f77a8-1308-4538-9bb9-a06e1ddefe3d = 536f77a8-1308-4538-9bb9-a06e1ddefe3d ']' 00:13:06.568 12:32:49 -- lvol/resize.sh@106 -- # jq -r '.[0].uuid' 00:13:06.826 12:32:49 -- lvol/resize.sh@106 -- # '[' 536f77a8-1308-4538-9bb9-a06e1ddefe3d = 536f77a8-1308-4538-9bb9-a06e1ddefe3d ']' 00:13:06.826 12:32:49 -- lvol/resize.sh@107 -- # jq -r '.[0].aliases[0]' 00:13:06.826 12:32:49 -- lvol/resize.sh@107 -- # '[' lvs_test/lvol_test = lvs_test/lvol_test ']' 00:13:06.826 12:32:49 -- lvol/resize.sh@108 -- # jq -r '.[0].block_size' 00:13:06.826 12:32:49 -- lvol/resize.sh@108 -- # '[' 512 = 512 ']' 00:13:06.826 12:32:49 -- lvol/resize.sh@109 -- # jq -r '.[0].num_blocks' 00:13:06.826 12:32:49 -- lvol/resize.sh@109 -- # '[' 122880 = 122880 ']' 00:13:06.826 12:32:49 -- lvol/resize.sh@112 -- # trap 'nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0; exit 1' SIGINT SIGTERM 
EXIT 00:13:06.826 12:32:49 -- lvol/resize.sh@113 -- # nbd_start_disks /var/tmp/spdk.sock 536f77a8-1308-4538-9bb9-a06e1ddefe3d /dev/nbd0 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('536f77a8-1308-4538-9bb9-a06e1ddefe3d') 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@12 -- # local i 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.826 12:32:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 536f77a8-1308-4538-9bb9-a06e1ddefe3d /dev/nbd0 00:13:07.084 /dev/nbd0 00:13:07.084 12:32:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:07.084 12:32:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.084 12:32:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:07.084 12:32:49 -- common/autotest_common.sh@857 -- # local i 00:13:07.084 12:32:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:07.084 12:32:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:07.084 12:32:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:07.343 12:32:49 -- common/autotest_common.sh@861 -- # break 00:13:07.343 12:32:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:07.343 12:32:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:07.343 12:32:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:07.343 1+0 records in 00:13:07.343 1+0 records out 00:13:07.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600086 s, 6.8 MB/s 00:13:07.343 12:32:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:07.343 12:32:49 -- common/autotest_common.sh@874 -- # size=4096 00:13:07.343 12:32:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:07.343 12:32:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:07.343 12:32:49 -- common/autotest_common.sh@877 -- # return 0 00:13:07.343 12:32:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.343 12:32:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.343 12:32:49 -- lvol/resize.sh@116 -- # count=15 00:13:07.343 12:32:49 -- lvol/resize.sh@117 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 count=15 00:13:07.602 15+0 records in 00:13:07.602 15+0 records out 00:13:07.602 62914560 bytes (63 MB, 60 MiB) copied, 0.370027 s, 170 MB/s 00:13:07.602 12:32:49 -- lvol/resize.sh@120 -- # offset=16 00:13:07.602 12:32:49 -- lvol/resize.sh@121 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 seek=16 count=1 00:13:07.602 dd: /dev/nbd0: cannot seek: Invalid argument 00:13:07.602 0+0 records in 00:13:07.602 0+0 records out 00:13:07.602 0 bytes copied, 0.00160897 s, 0.0 kB/s 00:13:07.602 12:32:50 -- lvol/resize.sh@124 -- # lvol_size_mb=120 00:13:07.602 12:32:50 -- lvol/resize.sh@125 -- # lvol_size=125829120 00:13:07.602 12:32:50 -- lvol/resize.sh@126 -- # rpc_cmd bdev_lvol_resize 536f77a8-1308-4538-9bb9-a06e1ddefe3d 120 00:13:07.602 12:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.602 12:32:50 -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.602 [2024-10-01 12:32:50.014553] nbd.c: 877:nbd_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:07.602 12:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.602 12:32:50 -- lvol/resize.sh@127 -- # rpc_cmd bdev_get_bdevs -b 536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:07.602 12:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.602 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:07.602 12:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.602 12:32:50 -- lvol/resize.sh@127 -- # lvol='[ 00:13:07.602 { 00:13:07.602 "name": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:07.602 "aliases": [ 00:13:07.602 "lvs_test/lvol_test" 00:13:07.602 ], 00:13:07.602 "product_name": "Logical Volume", 00:13:07.602 "block_size": 512, 00:13:07.602 "num_blocks": 245760, 00:13:07.602 "uuid": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:07.602 "assigned_rate_limits": { 00:13:07.602 "rw_ios_per_sec": 0, 00:13:07.602 "rw_mbytes_per_sec": 0, 00:13:07.602 "r_mbytes_per_sec": 0, 00:13:07.602 "w_mbytes_per_sec": 0 00:13:07.602 }, 00:13:07.602 "claimed": false, 00:13:07.602 "zoned": false, 00:13:07.602 "supported_io_types": { 00:13:07.602 "read": true, 00:13:07.602 "write": true, 00:13:07.602 "unmap": true, 00:13:07.602 "write_zeroes": true, 00:13:07.602 "flush": false, 00:13:07.602 "reset": true, 00:13:07.602 "compare": false, 00:13:07.602 "compare_and_write": false, 00:13:07.602 "abort": false, 00:13:07.602 "nvme_admin": false, 00:13:07.602 "nvme_io": false 00:13:07.602 }, 00:13:07.602 "memory_domains": [ 00:13:07.602 { 00:13:07.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.602 "dma_device_type": 2 00:13:07.602 } 00:13:07.602 ], 00:13:07.602 "driver_specific": { 00:13:07.602 "lvol": { 00:13:07.602 "lvol_store_uuid": "ecfe2ad3-2612-45ae-8a20-306056d2c4de", 00:13:07.602 "base_bdev": "Malloc2", 00:13:07.602 "thin_provision": false, 00:13:07.602 "snapshot": false, 00:13:07.602 "clone": false, 00:13:07.602 "esnap_clone": false 00:13:07.602 } 00:13:07.602 } 00:13:07.602 } 00:13:07.602 ]' 00:13:07.602 12:32:50 -- lvol/resize.sh@128 -- # jq -r '.[0].num_blocks' 00:13:07.602 12:32:50 -- lvol/resize.sh@128 -- # '[' 245760 = 245760 ']' 00:13:07.602 12:32:50 -- lvol/resize.sh@132 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:07.602 12:32:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.602 12:32:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:07.602 12:32:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.602 12:32:50 -- bdev/nbd_common.sh@51 -- # local i 00:13:07.602 12:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.602 12:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@41 -- # break 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.861 12:32:50 -- lvol/resize.sh@133 -- # nbd_start_disks /var/tmp/spdk.sock 536f77a8-1308-4538-9bb9-a06e1ddefe3d /dev/nbd0 
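Sketched below is the I/O-traffic flow this test traces, assuming lvs_test/lvol_test starts at 60 MiB on a running target; the device node and dd parameters follow the trace, but the error handling is pared down and the lvol alias stands in for the UUID used above.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
LVOL=lvs_test/lvol_test

$RPC nbd_start_disk "$LVOL" /dev/nbd0                                 # expose the lvol as a block device
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 count=15      # fills the 60 MiB volume
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 seek=16 count=1 || true
#   offset 64 MiB is past the end of the device, so this write fails

$RPC bdev_lvol_resize "$LVOL" 120     # grow to 120 MiB while still attached; nbd logs the
                                      # "Unsupported bdev event" notice seen above
$RPC nbd_stop_disk /dev/nbd0          # re-attach so the kernel picks up the new size
$RPC nbd_start_disk "$LVOL" /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 seek=16 count=1   # now succeeds
$RPC nbd_stop_disk /dev/nbd0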
00:13:07.861 12:32:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('536f77a8-1308-4538-9bb9-a06e1ddefe3d') 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@12 -- # local i 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.861 12:32:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 536f77a8-1308-4538-9bb9-a06e1ddefe3d /dev/nbd0 00:13:08.119 /dev/nbd0 00:13:08.119 12:32:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:08.119 12:32:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:08.119 12:32:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:08.119 12:32:50 -- common/autotest_common.sh@857 -- # local i 00:13:08.119 12:32:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:08.119 12:32:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:08.119 12:32:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:08.119 12:32:50 -- common/autotest_common.sh@861 -- # break 00:13:08.119 12:32:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:08.119 12:32:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:08.119 12:32:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:08.119 1+0 records in 00:13:08.119 1+0 records out 00:13:08.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618169 s, 6.6 MB/s 00:13:08.119 12:32:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:08.376 12:32:50 -- common/autotest_common.sh@874 -- # size=4096 00:13:08.376 12:32:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:08.376 12:32:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:08.376 12:32:50 -- common/autotest_common.sh@877 -- # return 0 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.376 12:32:50 -- lvol/resize.sh@134 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 seek=16 count=1 00:13:08.376 1+0 records in 00:13:08.376 1+0 records out 00:13:08.376 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0329885 s, 127 MB/s 00:13:08.376 12:32:50 -- lvol/resize.sh@137 -- # trap - SIGINT SIGTERM EXIT 00:13:08.376 12:32:50 -- lvol/resize.sh@138 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@51 -- # local i 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.376 12:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.634 
12:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@41 -- # break 00:13:08.634 12:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.634 12:32:50 -- lvol/resize.sh@141 -- # rpc_cmd bdev_lvol_resize 536f77a8-1308-4538-9bb9-a06e1ddefe3d 4 00:13:08.634 12:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.634 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:08.634 12:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.634 12:32:50 -- lvol/resize.sh@142 -- # rpc_cmd bdev_get_bdevs -b 536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:08.634 12:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.634 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:08.634 12:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.634 12:32:50 -- lvol/resize.sh@142 -- # lvol='[ 00:13:08.634 { 00:13:08.634 "name": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:08.634 "aliases": [ 00:13:08.634 "lvs_test/lvol_test" 00:13:08.634 ], 00:13:08.634 "product_name": "Logical Volume", 00:13:08.634 "block_size": 512, 00:13:08.634 "num_blocks": 8192, 00:13:08.634 "uuid": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:08.634 "assigned_rate_limits": { 00:13:08.634 "rw_ios_per_sec": 0, 00:13:08.634 "rw_mbytes_per_sec": 0, 00:13:08.634 "r_mbytes_per_sec": 0, 00:13:08.634 "w_mbytes_per_sec": 0 00:13:08.634 }, 00:13:08.634 "claimed": false, 00:13:08.634 "zoned": false, 00:13:08.634 "supported_io_types": { 00:13:08.634 "read": true, 00:13:08.634 "write": true, 00:13:08.634 "unmap": true, 00:13:08.634 "write_zeroes": true, 00:13:08.634 "flush": false, 00:13:08.634 "reset": true, 00:13:08.634 "compare": false, 00:13:08.634 "compare_and_write": false, 00:13:08.634 "abort": false, 00:13:08.634 "nvme_admin": false, 00:13:08.634 "nvme_io": false 00:13:08.634 }, 00:13:08.634 "memory_domains": [ 00:13:08.634 { 00:13:08.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.634 "dma_device_type": 2 00:13:08.634 } 00:13:08.634 ], 00:13:08.634 "driver_specific": { 00:13:08.634 "lvol": { 00:13:08.634 "lvol_store_uuid": "ecfe2ad3-2612-45ae-8a20-306056d2c4de", 00:13:08.634 "base_bdev": "Malloc2", 00:13:08.634 "thin_provision": false, 00:13:08.634 "snapshot": false, 00:13:08.634 "clone": false, 00:13:08.634 "esnap_clone": false 00:13:08.634 } 00:13:08.634 } 00:13:08.634 } 00:13:08.634 ]' 00:13:08.634 12:32:50 -- lvol/resize.sh@143 -- # jq -r '.[0].num_blocks' 00:13:08.634 12:32:51 -- lvol/resize.sh@143 -- # '[' 8192 = 8192 ']' 00:13:08.634 12:32:51 -- lvol/resize.sh@146 -- # trap 'nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0; exit 1' SIGINT SIGTERM EXIT 00:13:08.634 12:32:51 -- lvol/resize.sh@147 -- # nbd_start_disks /var/tmp/spdk.sock 536f77a8-1308-4538-9bb9-a06e1ddefe3d /dev/nbd0 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('536f77a8-1308-4538-9bb9-a06e1ddefe3d') 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@12 -- # local i 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.634 12:32:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.634 12:32:51 -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 536f77a8-1308-4538-9bb9-a06e1ddefe3d /dev/nbd0 00:13:08.892 /dev/nbd0 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:08.892 12:32:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:08.892 12:32:51 -- common/autotest_common.sh@857 -- # local i 00:13:08.892 12:32:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:08.892 12:32:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:08.892 12:32:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:08.892 12:32:51 -- common/autotest_common.sh@861 -- # break 00:13:08.892 12:32:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:08.892 12:32:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:08.892 12:32:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:08.892 1+0 records in 00:13:08.892 1+0 records out 00:13:08.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329658 s, 12.4 MB/s 00:13:08.892 12:32:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:08.892 12:32:51 -- common/autotest_common.sh@874 -- # size=4096 00:13:08.892 12:32:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:08.892 12:32:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:08.892 12:32:51 -- common/autotest_common.sh@877 -- # return 0 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.892 12:32:51 -- lvol/resize.sh@148 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 seek=1 count=1 00:13:08.892 dd: error writing '/dev/nbd0': No space left on device 00:13:08.892 1+0 records in 00:13:08.892 0+0 records out 00:13:08.892 0 bytes copied, 0.0240719 s, 0.0 kB/s 00:13:08.892 12:32:51 -- lvol/resize.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:13:08.892 12:32:51 -- lvol/resize.sh@152 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@51 -- # local i 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.892 12:32:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@41 -- # break 00:13:09.149 12:32:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.149 12:32:51 -- lvol/resize.sh@153 -- # rpc_cmd bdev_lvol_delete 536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:09.149 12:32:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.149 12:32:51 -- common/autotest_common.sh@10 -- # set 
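The shrink and overflow check just traced can be reproduced directly: resize the lvol down to 4 MiB, confirm the bdev now reports 8192 blocks of 512 bytes, then show that a 4 MiB write starting at offset 4 MiB is refused. A minimal sketch under the same $RPC, $SOCK and $LVOL_UUID assumptions as above, with jq available on the host.

  # Shrink to 4 MiB (one cluster) and verify the new size.
  $RPC -s "$SOCK" bdev_lvol_resize "$LVOL_UUID" 4
  blocks=$($RPC -s "$SOCK" bdev_get_bdevs -b "$LVOL_UUID" | jq -r '.[0].num_blocks')
  [ "$blocks" -eq 8192 ]                            # 8192 * 512 B = 4 MiB

  # Writing one 4 MiB block at seek=1 (offset 4 MiB) now lands past the end,
  # so dd fails with "No space left on device".
  $RPC -s "$SOCK" nbd_start_disk "$LVOL_UUID" /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 seek=1 count=1 \
    && echo "unexpected: write past end succeeded" \
    || echo "write rejected as expected"
  $RPC -s "$SOCK" nbd_stop_disk /dev/nbd0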
+x 00:13:09.149 12:32:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.149 12:32:51 -- lvol/resize.sh@154 -- # rpc_cmd bdev_get_bdevs -b 536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:09.149 12:32:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.149 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.149 [2024-10-01 12:32:51.593908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 536f77a8-1308-4538-9bb9-a06e1ddefe3d 00:13:09.149 request: 00:13:09.149 { 00:13:09.149 "name": "536f77a8-1308-4538-9bb9-a06e1ddefe3d", 00:13:09.149 "method": "bdev_get_bdevs", 00:13:09.149 "req_id": 1 00:13:09.149 } 00:13:09.149 Got JSON-RPC error response 00:13:09.149 response: 00:13:09.149 { 00:13:09.149 "code": -19, 00:13:09.149 "message": "No such device" 00:13:09.149 } 00:13:09.149 12:32:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:09.149 12:32:51 -- lvol/resize.sh@155 -- # rpc_cmd bdev_lvol_delete_lvstore -u ecfe2ad3-2612-45ae-8a20-306056d2c4de 00:13:09.149 12:32:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.149 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.149 12:32:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.149 12:32:51 -- lvol/resize.sh@156 -- # rpc_cmd bdev_lvol_get_lvstores -u ecfe2ad3-2612-45ae-8a20-306056d2c4de 00:13:09.149 12:32:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.149 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.149 request: 00:13:09.149 { 00:13:09.149 "uuid": "ecfe2ad3-2612-45ae-8a20-306056d2c4de", 00:13:09.149 "method": "bdev_lvol_get_lvstores", 00:13:09.149 "req_id": 1 00:13:09.149 } 00:13:09.149 Got JSON-RPC error response 00:13:09.149 response: 00:13:09.149 { 00:13:09.149 "code": -19, 00:13:09.149 "message": "No such device" 00:13:09.149 } 00:13:09.149 12:32:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:09.149 12:32:51 -- lvol/resize.sh@157 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:09.149 12:32:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.149 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.405 12:32:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.405 00:13:09.405 real 0m3.039s 00:13:09.405 user 0m1.755s 00:13:09.405 sys 0m0.594s 00:13:09.405 12:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.405 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.405 ************************************ 00:13:09.405 END TEST test_resize_lvol_with_io_traffic 00:13:09.405 ************************************ 00:13:09.663 12:32:51 -- lvol/resize.sh@219 -- # run_test test_destroy_after_bdev_lvol_resize_positive test_destroy_after_bdev_lvol_resize_positive 00:13:09.664 12:32:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:09.664 12:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.664 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.664 ************************************ 00:13:09.664 START TEST test_destroy_after_bdev_lvol_resize_positive 00:13:09.664 ************************************ 00:13:09.664 12:32:51 -- common/autotest_common.sh@1104 -- # test_destroy_after_bdev_lvol_resize_positive 00:13:09.664 12:32:51 -- lvol/resize.sh@163 -- # local malloc_dev 00:13:09.664 12:32:51 -- lvol/resize.sh@164 -- # local lvstore_name=lvs_test lvstore_uuid 00:13:09.664 12:32:51 -- lvol/resize.sh@165 -- # local lbd_name=lbd_test bdev_uuid bdev_size 00:13:09.664 12:32:51 -- lvol/resize.sh@167 -- # rpc_cmd 
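The teardown at the end of test_resize_lvol_with_io_traffic follows the usual delete-then-prove-it pattern: remove the lvol and expect the bdev lookup to fail, remove the lvol store and expect the store lookup to fail, then drop the backing malloc bdev. A minimal sketch with the same conventions as above; $LVS_UUID stands for the store UUID reported earlier in this run (ecfe2ad3-2612-45ae-8a20-306056d2c4de).

  $RPC -s "$SOCK" bdev_lvol_delete "$LVOL_UUID"
  # Both lookups must now fail with JSON-RPC code -19, "No such device".
  ! $RPC -s "$SOCK" bdev_get_bdevs -b "$LVOL_UUID"
  $RPC -s "$SOCK" bdev_lvol_delete_lvstore -u "$LVS_UUID"
  ! $RPC -s "$SOCK" bdev_lvol_get_lvstores -u "$LVS_UUID"
  $RPC -s "$SOCK" bdev_malloc_delete Malloc2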
bdev_malloc_create 256 512 00:13:09.664 12:32:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.664 12:32:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.664 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.664 12:32:52 -- lvol/resize.sh@167 -- # malloc_dev=Malloc3 00:13:09.664 12:32:52 -- lvol/resize.sh@168 -- # rpc_cmd bdev_lvol_create_lvstore Malloc3 lvs_test 00:13:09.664 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.664 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.923 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.923 12:32:52 -- lvol/resize.sh@168 -- # lvstore_uuid=43231b62-bb87-4167-8165-a4d87873dbd0 00:13:09.923 12:32:52 -- lvol/resize.sh@170 -- # get_lvs_jq bdev_lvol_get_lvstores -u 43231b62-bb87-4167-8165-a4d87873dbd0 00:13:09.923 12:32:52 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u 43231b62-bb87-4167-8165-a4d87873dbd0 00:13:09.923 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:13:09.923 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:09.923 12:32:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:09.923 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:09.923 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:09.923 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:09.923 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:09.923 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:09.923 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," 
",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:13:09.923 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:09.923 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u 43231b62-bb87-4167-8165-a4d87873dbd0 00:13:09.923 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.923 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:13:09.923 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.923 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=43231b62-bb87-4167-8165-a4d87873dbd0 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=Malloc3 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:13:09.923 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.923 12:32:52 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:13:09.923 12:32:52 -- lvol/resize.sh@171 -- # [[ 43231b62-bb87-4167-8165-a4d87873dbd0 == \4\3\2\3\1\b\6\2\-\b\b\8\7\-\4\1\6\7\-\8\1\6\5\-\a\4\d\8\7\8\7\3\d\b\d\0 ]] 00:13:09.923 12:32:52 -- lvol/resize.sh@172 -- # [[ lvs_test == \l\v\s\_\t\e\s\t ]] 00:13:09.923 12:32:52 -- lvol/resize.sh@174 -- # round_down 31 00:13:09.923 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:09.923 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:09.923 12:32:52 -- lvol/common.sh@36 -- # echo 28 00:13:09.923 12:32:52 -- lvol/resize.sh@174 -- # bdev_size=28 00:13:09.923 12:32:52 -- lvol/resize.sh@175 -- # rpc_cmd bdev_lvol_create -u 43231b62-bb87-4167-8165-a4d87873dbd0 lbd_test 28 00:13:09.923 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.923 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.923 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.923 12:32:52 -- lvol/resize.sh@175 -- # 
bdev_uuid=d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.923 12:32:52 -- lvol/resize.sh@183 -- # local resize 00:13:09.923 12:32:52 -- lvol/resize.sh@184 -- # for resize in "$bdev_size" $((bdev_size + 4)) $((bdev_size * 2)) $((bdev_size * 3)) $((bdev_size * 4 - 4)) 0 00:13:09.923 12:32:52 -- lvol/resize.sh@191 -- # round_down 7 00:13:09.923 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:09.923 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:09.923 12:32:52 -- lvol/common.sh@36 -- # echo 4 00:13:09.923 12:32:52 -- lvol/resize.sh@191 -- # resize=4 00:13:09.923 12:32:52 -- lvol/resize.sh@192 -- # rpc_cmd bdev_lvol_resize d5074980-ded7-47f8-af3d-a113003370c5 4 00:13:09.923 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.923 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.923 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.923 12:32:52 -- lvol/resize.sh@194 -- # get_bdev_jq bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.923 12:32:52 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.923 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:09.923 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:09.923 12:32:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:09.923 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:09.923 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:09.923 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:09.923 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:09.923 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:09.923 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.923 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:09.923 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," 
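The store geometry reported above explains the numbers that follow: a 256 MiB malloc bdev with 512-byte blocks, carved into the 4 MiB clusters shown by cluster_size, gives 64 clusters, of which 63 are reported as usable data clusters (the remainder holds blobstore metadata), and round_down trims the requested 31 MiB to 28 MiB, the nearest cluster multiple. A minimal sketch of the same setup, assuming the malloc bdev comes back named Malloc3 as in this run and that $LVS_UUID holds the new store UUID printed by the create call (43231b62-bb87-4167-8165-a4d87873dbd0 here).

  # 256 MiB backing device with 512 B blocks, then a store on top of it.
  $RPC -s "$SOCK" bdev_malloc_create 256 512
  $RPC -s "$SOCK" bdev_lvol_create_lvstore Malloc3 lvs_test
  # 256 MiB / 4 MiB = 64 clusters; the store reports 63 data clusters, all free.
  $RPC -s "$SOCK" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq '.[0].total_data_clusters'
  # 31 MiB rounded down to a 4 MiB multiple is 28, so lbd_test is created at 28 MiB.
  $RPC -s "$SOCK" bdev_lvol_create -u "$LVS_UUID" lbd_test 28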
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:09.924 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:09.924 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.924 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:09.924 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.924 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.924 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/lbd_test 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=8192 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:09.924 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:09.924 12:32:52 -- 
common/autotest_common.sh@621 -- # read -r elem val 00:13:09.924 12:32:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:09.924 12:32:52 -- lvol/resize.sh@195 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:09.924 12:32:52 -- lvol/resize.sh@196 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:09.924 12:32:52 -- lvol/resize.sh@197 -- # (( jq_out[block_size] == MALLOC_BS )) 00:13:09.924 12:32:52 -- lvol/resize.sh@198 -- # (( jq_out[num_blocks] * jq_out[block_size] == resize * 1024 ** 2 )) 00:13:09.924 12:32:52 -- lvol/resize.sh@184 -- # for resize in "$bdev_size" $((bdev_size + 4)) $((bdev_size * 2)) $((bdev_size * 3)) $((bdev_size * 4 - 4)) 0 00:13:09.924 12:32:52 -- lvol/resize.sh@191 -- # round_down 8 00:13:09.924 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:09.924 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:09.924 12:32:52 -- lvol/common.sh@36 -- # echo 8 00:13:09.924 12:32:52 -- lvol/resize.sh@191 -- # resize=8 00:13:09.924 12:32:52 -- lvol/resize.sh@192 -- # rpc_cmd bdev_lvol_resize d5074980-ded7-47f8-af3d-a113003370c5 8 00:13:09.924 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.924 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.924 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.924 12:32:52 -- lvol/resize.sh@194 -- # get_bdev_jq bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.924 12:32:52 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.924 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:09.924 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:09.924 12:32:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:09.924 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:09.924 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:09.924 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:09.924 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:09.924 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:09.924 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," 
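The long jq programs being assembled in this part of the trace (rpc_cmd_simple_data_json) just flatten selected fields of the bdev JSON into shell variables so the test can compare them. For the size check itself, two fields are enough; a compact equivalent, assuming $BDEV_UUID holds the lbd_test UUID (d5074980-ded7-47f8-af3d-a113003370c5 in this run) and the same $RPC and $SOCK as before.

  # Pull num_blocks and block_size in one call and assert 4 MiB after the first resize.
  resize=4
  read -r blocks bs < <($RPC -s "$SOCK" bdev_get_bdevs -b "$BDEV_UUID" \
      | jq -r '"\(.[0].num_blocks) \(.[0].block_size)"')
  (( blocks * bs == resize * 1024 * 1024 )) && echo OK    # 8192 * 512 == 4194304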
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.924 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:09.924 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.925 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:09.925 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.925 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," 
",.[0].driver_specific.lvol.esnap_clone' 00:13:09.925 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:09.925 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:09.925 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:09.925 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:09.925 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.925 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.925 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.925 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/lbd_test 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=16384 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- 
common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:09.925 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:09.925 12:32:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:09.925 12:32:52 -- lvol/resize.sh@195 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:09.925 12:32:52 -- lvol/resize.sh@196 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:09.925 12:32:52 -- lvol/resize.sh@197 -- # (( jq_out[block_size] == MALLOC_BS )) 00:13:09.925 12:32:52 -- lvol/resize.sh@198 -- # (( jq_out[num_blocks] * jq_out[block_size] == resize * 1024 ** 2 )) 00:13:09.925 12:32:52 -- lvol/resize.sh@184 -- # for resize in "$bdev_size" $((bdev_size + 4)) $((bdev_size * 2)) $((bdev_size * 3)) $((bdev_size * 4 - 4)) 0 00:13:09.925 12:32:52 -- lvol/resize.sh@191 -- # round_down 14 00:13:09.925 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:09.925 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:09.925 12:32:52 -- lvol/common.sh@36 -- # echo 12 00:13:09.925 12:32:52 -- lvol/resize.sh@191 -- # resize=12 00:13:09.925 12:32:52 -- lvol/resize.sh@192 -- # rpc_cmd bdev_lvol_resize d5074980-ded7-47f8-af3d-a113003370c5 12 00:13:09.925 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.925 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.925 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.925 12:32:52 -- lvol/resize.sh@194 -- # get_bdev_jq bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.925 12:32:52 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:09.925 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:09.925 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:09.925 12:32:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:09.925 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:09.925 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:09.925 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:09.925 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:09.925 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:09.925 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 
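round_down keeps every requested size aligned to the store's 4 MiB cluster, which is why 31 became 28 earlier and 14 becomes 12 here. Below is a minimal stand-in for the helper the trace sources from lvol/common.sh; the real one appears to allow the cluster size to be overridden (the '[' -n '' ']' check above), while this version hard-codes the 4 MiB used in this run.

  CLUSTER_SIZE_MB=4
  round_down() {
      # Round a size in MiB down to the nearest multiple of the cluster size.
      local mb=$1
      echo $(( mb / CLUSTER_SIZE_MB * CLUSTER_SIZE_MB ))
  }
  round_down 14    # prints 12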
12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:10.185 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.185 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:10.185 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:10.185 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:10.185 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.185 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.185 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.185 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/lbd_test 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=24576 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 
00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.185 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.185 12:32:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:10.185 12:32:52 -- lvol/resize.sh@195 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.185 12:32:52 -- lvol/resize.sh@196 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.185 12:32:52 -- lvol/resize.sh@197 -- # (( jq_out[block_size] == MALLOC_BS )) 00:13:10.185 12:32:52 -- lvol/resize.sh@198 -- # (( jq_out[num_blocks] * jq_out[block_size] == resize * 1024 ** 2 )) 00:13:10.185 12:32:52 -- lvol/resize.sh@184 -- # for resize in "$bdev_size" $((bdev_size + 4)) $((bdev_size * 2)) $((bdev_size * 3)) $((bdev_size * 4 - 4)) 0 00:13:10.185 12:32:52 -- lvol/resize.sh@191 -- # round_down 21 00:13:10.185 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:10.185 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:10.185 12:32:52 -- lvol/common.sh@36 -- # echo 20 00:13:10.185 12:32:52 -- lvol/resize.sh@191 -- # resize=20 00:13:10.185 12:32:52 -- lvol/resize.sh@192 -- # rpc_cmd bdev_lvol_resize d5074980-ded7-47f8-af3d-a113003370c5 20 00:13:10.185 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.185 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.185 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.185 12:32:52 -- lvol/resize.sh@194 -- # get_bdev_jq bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.185 12:32:52 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.185 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:10.185 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:10.185 12:32:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:10.185 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:10.185 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:10.185 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:10.185 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 
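Stepping back, the repeated bdev_get_bdevs dumps in this stretch of the trace all come from one loop: round each candidate size down to a cluster multiple, resize lbd_test, and assert the reported capacity. In this run the rounded sizes applied were 4, 8, 12, 20 and 24 MiB, with a final resize to 0 at the end of the loop header. A condensed sketch of that flow under the same assumptions as the earlier snippets; the real test also compares every other field of the bdev JSON, which is what the long jq filters are building.

  # Apply each rounded size and check the reported capacity (block_size is 512 here).
  for resize in 4 8 12 20 24; do
      $RPC -s "$SOCK" bdev_lvol_resize "$BDEV_UUID" "$resize"
      blocks=$($RPC -s "$SOCK" bdev_get_bdevs -b "$BDEV_UUID" | jq -r '.[0].num_blocks')
      (( blocks * 512 == resize * 1024 * 1024 )) || { echo "size mismatch at ${resize}M" >&2; exit 1; }
  done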
'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:10.186 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:10.186 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," 
",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:10.186 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:10.186 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.186 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:10.186 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.186 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.186 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/lbd_test 00:13:10.186 12:32:52 -- 
common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=40960 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.186 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.186 12:32:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:10.186 12:32:52 -- lvol/resize.sh@195 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.186 12:32:52 -- lvol/resize.sh@196 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.186 12:32:52 -- lvol/resize.sh@197 -- # (( jq_out[block_size] == MALLOC_BS )) 00:13:10.186 12:32:52 -- lvol/resize.sh@198 -- # (( jq_out[num_blocks] * jq_out[block_size] == resize * 1024 ** 2 )) 00:13:10.186 12:32:52 -- lvol/resize.sh@184 -- # for resize in "$bdev_size" $((bdev_size + 4)) $((bdev_size * 2)) $((bdev_size * 3)) $((bdev_size * 4 - 4)) 0 00:13:10.186 12:32:52 -- lvol/resize.sh@191 -- # round_down 27 00:13:10.186 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:10.186 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:10.186 12:32:52 -- lvol/common.sh@36 -- # echo 24 00:13:10.186 12:32:52 -- lvol/resize.sh@191 -- # resize=24 00:13:10.186 12:32:52 -- lvol/resize.sh@192 -- # rpc_cmd bdev_lvol_resize d5074980-ded7-47f8-af3d-a113003370c5 24 00:13:10.186 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.186 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.186 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.186 12:32:52 -- lvol/resize.sh@194 -- # get_bdev_jq bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.186 12:32:52 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.186 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:10.186 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:10.186 12:32:52 -- common/autotest_common.sh@585 -- # 
local -gA jq_out 00:13:10.186 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:10.186 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:10.186 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:10.186 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:10.186 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:10.186 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:10.186 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.186 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," 
",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:10.187 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:10.187 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:10.187 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 
d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.187 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.187 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.187 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/lbd_test 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=49152 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.187 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.187 12:32:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:10.187 12:32:52 -- lvol/resize.sh@195 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.187 12:32:52 -- lvol/resize.sh@196 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.187 12:32:52 -- lvol/resize.sh@197 -- # (( jq_out[block_size] == MALLOC_BS )) 00:13:10.187 12:32:52 -- lvol/resize.sh@198 -- # (( jq_out[num_blocks] * jq_out[block_size] == resize * 1024 ** 2 )) 00:13:10.187 12:32:52 -- lvol/resize.sh@184 -- # for resize in "$bdev_size" $((bdev_size + 4)) $((bdev_size * 2)) $((bdev_size * 3)) $((bdev_size * 4 - 4)) 0 00:13:10.187 12:32:52 -- lvol/resize.sh@191 -- # round_down 0 00:13:10.187 12:32:52 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:10.187 12:32:52 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:10.187 12:32:52 -- lvol/common.sh@36 -- # echo 0 00:13:10.187 12:32:52 -- lvol/resize.sh@191 -- # resize=0 00:13:10.187 12:32:52 -- lvol/resize.sh@192 -- # rpc_cmd bdev_lvol_resize d5074980-ded7-47f8-af3d-a113003370c5 0 00:13:10.187 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.187 12:32:52 -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.187 [2024-10-01 12:32:52.668325] vbdev_lvol_rpc.c: 875:rpc_bdev_lvol_resize: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:13:10.187 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.187 12:32:52 -- lvol/resize.sh@194 -- # get_bdev_jq bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.187 12:32:52 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.187 12:32:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:10.187 12:32:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:10.187 12:32:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:10.187 12:32:52 -- common/autotest_common.sh@586 -- # local jq val 00:13:10.187 12:32:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:10.187 12:32:52 -- common/autotest_common.sh@596 -- # local lvs 00:13:10.187 12:32:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:10.187 12:32:52 -- common/autotest_common.sh@611 -- # local bdev 00:13:10.187 12:32:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:10.187 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.187 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," 
",.[0].supported_io_types.read' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:10.188 12:32:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:10.188 12:32:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:10.188 12:32:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:10.188 12:32:52 -- common/autotest_common.sh@620 -- # shift 00:13:10.188 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.188 12:32:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 
d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.188 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.188 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.188 12:32:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:10.188 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/lbd_test 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=0 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:10.447 12:32:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:10.447 12:32:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:10.447 12:32:52 -- lvol/resize.sh@195 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.447 12:32:52 -- lvol/resize.sh@196 -- # [[ d5074980-ded7-47f8-af3d-a113003370c5 == \d\5\0\7\4\9\8\0\-\d\e\d\7\-\4\7\f\8\-\a\f\3\d\-\a\1\1\3\0\0\3\3\7\0\c\5 ]] 00:13:10.447 12:32:52 -- lvol/resize.sh@197 -- # (( jq_out[block_size] == MALLOC_BS )) 00:13:10.447 12:32:52 -- lvol/resize.sh@198 -- # (( jq_out[num_blocks] * 
jq_out[block_size] == resize * 1024 ** 2 )) 00:13:10.447 12:32:52 -- lvol/resize.sh@202 -- # rpc_cmd bdev_lvol_delete d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.447 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.447 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.447 12:32:52 -- lvol/resize.sh@203 -- # rpc_cmd bdev_lvol_delete_lvstore -u 43231b62-bb87-4167-8165-a4d87873dbd0 00:13:10.447 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.447 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 12:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.447 12:32:52 -- lvol/resize.sh@204 -- # rpc_cmd bdev_get_bdevs -b d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.447 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.447 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 [2024-10-01 12:32:52.756445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: d5074980-ded7-47f8-af3d-a113003370c5 00:13:10.447 request: 00:13:10.447 { 00:13:10.447 "name": "d5074980-ded7-47f8-af3d-a113003370c5", 00:13:10.447 "method": "bdev_get_bdevs", 00:13:10.447 "req_id": 1 00:13:10.447 } 00:13:10.447 Got JSON-RPC error response 00:13:10.447 response: 00:13:10.447 { 00:13:10.447 "code": -19, 00:13:10.447 "message": "No such device" 00:13:10.447 } 00:13:10.447 12:32:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:10.447 12:32:52 -- lvol/resize.sh@205 -- # rpc_cmd bdev_lvol_get_lvstores -u 43231b62-bb87-4167-8165-a4d87873dbd0 00:13:10.447 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.447 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 request: 00:13:10.447 { 00:13:10.447 "uuid": "43231b62-bb87-4167-8165-a4d87873dbd0", 00:13:10.447 "method": "bdev_lvol_get_lvstores", 00:13:10.447 "req_id": 1 00:13:10.447 } 00:13:10.447 Got JSON-RPC error response 00:13:10.447 response: 00:13:10.447 { 00:13:10.447 "code": -19, 00:13:10.447 "message": "No such device" 00:13:10.447 } 00:13:10.447 12:32:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:10.447 12:32:52 -- lvol/resize.sh@206 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:10.447 12:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.447 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:11.015 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.015 12:32:53 -- lvol/resize.sh@207 -- # check_leftover_devices 00:13:11.015 12:32:53 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:11.015 12:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.015 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:11.015 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.016 12:32:53 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:11.016 12:32:53 -- lvol/common.sh@26 -- # jq length 00:13:11.016 12:32:53 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:11.016 12:32:53 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:11.016 12:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.016 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:11.016 12:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.016 12:32:53 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:11.016 12:32:53 -- lvol/common.sh@28 -- # jq length 00:13:11.016 12:32:53 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:11.016 
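The final loop iteration and teardown just traced resize the lvol to 0 MiB, delete it, delete its lvol store, confirm that both lookups now fail with "No such device", and drop the backing Malloc3 bdev. A minimal by-hand sketch of the same sequence against a running spdk_tgt, assuming the standard scripts/rpc.py helper and with placeholder UUIDs:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    LVOL_UUID=<lvol uuid>          # placeholder for the lvol created by the test
    LVS_UUID=<lvstore uuid>        # placeholder for its lvol store
    $RPC bdev_lvol_resize "$LVOL_UUID" 0           # shrink the lvol to 0 MiB
    $RPC bdev_lvol_delete "$LVOL_UUID"             # remove the lvol
    $RPC bdev_lvol_delete_lvstore -u "$LVS_UUID"   # remove the lvol store
    $RPC bdev_get_bdevs -b "$LVOL_UUID"            # expected to fail: "No such device"
    $RPC bdev_lvol_get_lvstores -u "$LVS_UUID"     # expected to fail: "No such device"
    $RPC bdev_malloc_delete Malloc3                # drop the backing malloc bdev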
00:13:11.016 real 0m1.544s 00:13:11.016 user 0m0.620s 00:13:11.016 sys 0m0.101s 00:13:11.016 12:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.016 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:11.016 ************************************ 00:13:11.016 END TEST test_destroy_after_bdev_lvol_resize_positive 00:13:11.016 ************************************ 00:13:11.016 12:32:53 -- lvol/resize.sh@221 -- # trap - SIGINT SIGTERM EXIT 00:13:11.016 12:32:53 -- lvol/resize.sh@222 -- # killprocess 59315 00:13:11.016 12:32:53 -- common/autotest_common.sh@926 -- # '[' -z 59315 ']' 00:13:11.016 12:32:53 -- common/autotest_common.sh@930 -- # kill -0 59315 00:13:11.016 12:32:53 -- common/autotest_common.sh@931 -- # uname 00:13:11.277 12:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:11.277 12:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59315 00:13:11.277 killing process with pid 59315 00:13:11.277 12:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:11.277 12:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:11.277 12:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59315' 00:13:11.277 12:32:53 -- common/autotest_common.sh@945 -- # kill 59315 00:13:11.277 [2024-10-01 12:32:53.568038] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 3 times 00:13:11.277 12:32:53 -- common/autotest_common.sh@950 -- # wait 59315 00:13:13.191 00:13:13.191 real 0m10.187s 00:13:13.191 user 0m12.864s 00:13:13.191 sys 0m1.825s 00:13:13.191 12:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.191 ************************************ 00:13:13.191 END TEST lvol_resize 00:13:13.191 ************************************ 00:13:13.191 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:13.191 12:32:55 -- lvol/lvol.sh@16 -- # run_test lvol_hotremove /home/vagrant/spdk_repo/spdk/test/lvol/hotremove.sh 00:13:13.191 12:32:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:13.191 12:32:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.191 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:13.191 ************************************ 00:13:13.191 START TEST lvol_hotremove 00:13:13.191 ************************************ 00:13:13.191 12:32:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/hotremove.sh 00:13:13.191 * Looking for test storage... 00:13:13.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:13:13.191 12:32:55 -- lvol/hotremove.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:13.191 12:32:55 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:13.191 12:32:55 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:13.191 12:32:55 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:13.191 12:32:55 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:13.191 12:32:55 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:13.191 12:32:55 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:13.191 12:32:55 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:13.191 12:32:55 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
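At this point hotremove.sh has launched its own spdk_tgt and is blocked in waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock starts answering. A rough, simplified equivalent of that startup handshake (the real helper in common/autotest_common.sh does more bookkeeping; the polling method and interval here are assumptions):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$SPDK_BIN" &                           # start the target in the background
    spdk_pid=$!
    until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$spdk_pid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5                                   # poll until the RPC socket answers
    done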
00:13:13.191 12:32:55 -- lvol/hotremove.sh@207 -- # spdk_pid=59687 00:13:13.191 12:32:55 -- lvol/hotremove.sh@208 -- # trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:13.191 12:32:55 -- lvol/hotremove.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:13.191 12:32:55 -- lvol/hotremove.sh@209 -- # waitforlisten 59687 00:13:13.191 12:32:55 -- common/autotest_common.sh@819 -- # '[' -z 59687 ']' 00:13:13.191 12:32:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.191 12:32:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.191 12:32:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.191 12:32:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.191 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:13.191 [2024-10-01 12:32:55.636571] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:13.191 [2024-10-01 12:32:55.636755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59687 ] 00:13:13.449 [2024-10-01 12:32:55.792574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.449 [2024-10-01 12:32:55.964911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.449 [2024-10-01 12:32:55.965152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.353 12:32:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.353 12:32:57 -- common/autotest_common.sh@852 -- # return 0 00:13:15.353 12:32:57 -- lvol/hotremove.sh@211 -- # run_test test_hotremove_lvol_store test_hotremove_lvol_store 00:13:15.353 12:32:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:15.353 12:32:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 ************************************ 00:13:15.353 START TEST test_hotremove_lvol_store 00:13:15.353 ************************************ 00:13:15.353 12:32:57 -- common/autotest_common.sh@1104 -- # test_hotremove_lvol_store 00:13:15.353 12:32:57 -- lvol/hotremove.sh@14 -- # rpc_cmd bdev_malloc_create 128 512 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@14 -- # malloc_name=Malloc0 00:13:15.353 12:32:57 -- lvol/hotremove.sh@15 -- # rpc_cmd bdev_lvol_create_lvstore Malloc0 lvs_test 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@15 -- # lvs_uuid=32f2df4b-545d-4a00-b233-cc1b453bc607 00:13:15.353 12:32:57 -- lvol/hotremove.sh@16 -- # rpc_cmd bdev_lvol_create -u 32f2df4b-545d-4a00-b233-cc1b453bc607 lvol_test 124 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@16 -- # 
lvol_uuid=9c9bf333-f98f-42a1-b9e0-61dd957193f9 00:13:15.353 12:32:57 -- lvol/hotremove.sh@19 -- # rpc_cmd bdev_lvol_delete_lvstore -u 32f2df4b-545d-4a00-b233-cc1b453bc607 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@20 -- # rpc_cmd bdev_lvol_get_lvstores -u 32f2df4b-545d-4a00-b233-cc1b453bc607 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 request: 00:13:15.353 { 00:13:15.353 "uuid": "32f2df4b-545d-4a00-b233-cc1b453bc607", 00:13:15.353 "method": "bdev_lvol_get_lvstores", 00:13:15.353 "req_id": 1 00:13:15.353 } 00:13:15.353 Got JSON-RPC error response 00:13:15.353 response: 00:13:15.353 { 00:13:15.353 "code": -19, 00:13:15.353 "message": "No such device" 00:13:15.353 } 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@21 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@21 -- # lvolstores='[]' 00:13:15.353 12:32:57 -- lvol/hotremove.sh@22 -- # jq length 00:13:15.353 12:32:57 -- lvol/hotremove.sh@22 -- # '[' 0 == 0 ']' 00:13:15.353 12:32:57 -- lvol/hotremove.sh@25 -- # rpc_cmd bdev_lvol_delete_lvstore -u 32f2df4b-545d-4a00-b233-cc1b453bc607 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 request: 00:13:15.353 { 00:13:15.353 "uuid": "32f2df4b-545d-4a00-b233-cc1b453bc607", 00:13:15.353 "method": "bdev_lvol_delete_lvstore", 00:13:15.353 "req_id": 1 00:13:15.353 } 00:13:15.353 Got JSON-RPC error response 00:13:15.353 response: 00:13:15.353 { 00:13:15.353 "code": -19, 00:13:15.353 "message": "No such device" 00:13:15.353 } 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@28 -- # rpc_cmd bdev_get_bdevs -b 9c9bf333-f98f-42a1-b9e0-61dd957193f9 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 [2024-10-01 12:32:57.677782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9c9bf333-f98f-42a1-b9e0-61dd957193f9 00:13:15.353 request: 00:13:15.353 { 00:13:15.353 "name": "9c9bf333-f98f-42a1-b9e0-61dd957193f9", 00:13:15.353 "method": "bdev_get_bdevs", 00:13:15.353 "req_id": 1 00:13:15.353 } 00:13:15.353 Got JSON-RPC error response 00:13:15.353 response: 00:13:15.353 { 00:13:15.353 "code": -19, 00:13:15.353 "message": "No such device" 00:13:15.353 } 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@29 -- # rpc_cmd bdev_get_bdevs 00:13:15.353 12:32:57 -- lvol/hotremove.sh@29 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.353 12:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.353 12:32:57 -- lvol/hotremove.sh@29 -- # lvols='[]' 00:13:15.353 12:32:57 
-- lvol/hotremove.sh@30 -- # jq length 00:13:15.353 12:32:57 -- lvol/hotremove.sh@30 -- # '[' 0 == 0 ']' 00:13:15.353 12:32:57 -- lvol/hotremove.sh@33 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:15.353 12:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.353 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.613 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.613 12:32:58 -- lvol/hotremove.sh@34 -- # check_leftover_devices 00:13:15.613 12:32:58 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:15.613 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.613 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.613 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.613 12:32:58 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:15.613 12:32:58 -- lvol/common.sh@26 -- # jq length 00:13:15.613 12:32:58 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:15.871 12:32:58 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:15.871 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.871 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.871 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.871 12:32:58 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:15.871 12:32:58 -- lvol/common.sh@28 -- # jq length 00:13:15.871 ************************************ 00:13:15.871 END TEST test_hotremove_lvol_store 00:13:15.871 ************************************ 00:13:15.871 12:32:58 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:15.871 00:13:15.871 real 0m0.796s 00:13:15.871 user 0m0.262s 00:13:15.871 sys 0m0.037s 00:13:15.871 12:32:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.871 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.871 12:32:58 -- lvol/hotremove.sh@212 -- # run_test test_hotremove_lvol_store_multiple_lvols test_hotremove_lvol_store_multiple_lvols 00:13:15.871 12:32:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:15.871 12:32:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.871 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.871 ************************************ 00:13:15.871 START TEST test_hotremove_lvol_store_multiple_lvols 00:13:15.871 ************************************ 00:13:15.871 12:32:58 -- common/autotest_common.sh@1104 -- # test_hotremove_lvol_store_multiple_lvols 00:13:15.871 12:32:58 -- lvol/hotremove.sh@40 -- # rpc_cmd bdev_malloc_create 128 512 00:13:15.871 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.871 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.871 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.871 12:32:58 -- lvol/hotremove.sh@40 -- # malloc_name=Malloc1 00:13:15.871 12:32:58 -- lvol/hotremove.sh@41 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test 00:13:15.871 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.871 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.871 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.871 12:32:58 -- lvol/hotremove.sh@41 -- # lvs_uuid=0af68c99-2757-4353-87b3-49e5b7c6b635 00:13:15.871 12:32:58 -- lvol/hotremove.sh@44 -- # round_down 31 00:13:15.871 12:32:58 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:15.871 12:32:58 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:15.871 12:32:58 -- lvol/common.sh@36 -- # echo 28 00:13:15.871 12:32:58 -- lvol/hotremove.sh@44 -- # lvol_size_mb=28 
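The round_down helper traced here turns the requested 31 MiB into lvol_size_mb=28 (and earlier turned 27 into 24 and 0 into 0): it rounds a megabyte count down to a whole multiple of the 4 MiB cluster size so every lvol stays cluster-aligned. A minimal re-implementation consistent with those values, offered only as a sketch of what lvol/common.sh is doing:

    round_down() {
        local size_mb=$1
        local cluster_mb=${2:-4}                        # the tests default to 4 MiB clusters
        echo $(( size_mb / cluster_mb * cluster_mb ))   # integer-divide, then scale back up
    }
    round_down 31   # -> 28
    round_down 27   # -> 24
    round_down 0    # -> 0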
00:13:15.871 12:32:58 -- lvol/hotremove.sh@47 -- # seq 1 4 00:13:16.131 12:32:58 -- lvol/hotremove.sh@47 -- # for i in $(seq 1 4) 00:13:16.131 12:32:58 -- lvol/hotremove.sh@48 -- # rpc_cmd bdev_lvol_create -u 0af68c99-2757-4353-87b3-49e5b7c6b635 lvol_test1 28 00:13:16.131 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.131 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.131 d2ee3c2c-b90c-4b9d-b128-624dcdbd5330 00:13:16.131 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.131 12:32:58 -- lvol/hotremove.sh@47 -- # for i in $(seq 1 4) 00:13:16.131 12:32:58 -- lvol/hotremove.sh@48 -- # rpc_cmd bdev_lvol_create -u 0af68c99-2757-4353-87b3-49e5b7c6b635 lvol_test2 28 00:13:16.131 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.131 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.131 df0d9997-bed6-4382-8ed8-6b92f810d267 00:13:16.131 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.131 12:32:58 -- lvol/hotremove.sh@47 -- # for i in $(seq 1 4) 00:13:16.131 12:32:58 -- lvol/hotremove.sh@48 -- # rpc_cmd bdev_lvol_create -u 0af68c99-2757-4353-87b3-49e5b7c6b635 lvol_test3 28 00:13:16.131 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.131 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.131 01b5827e-1266-48cd-ab52-6f503192661c 00:13:16.131 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.131 12:32:58 -- lvol/hotremove.sh@47 -- # for i in $(seq 1 4) 00:13:16.131 12:32:58 -- lvol/hotremove.sh@48 -- # rpc_cmd bdev_lvol_create -u 0af68c99-2757-4353-87b3-49e5b7c6b635 lvol_test4 28 00:13:16.131 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.131 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.131 0f8f91be-6291-4a9f-9cc3-56dbd580e092 00:13:16.131 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.131 12:32:58 -- lvol/hotremove.sh@51 -- # rpc_cmd bdev_get_bdevs 00:13:16.131 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.131 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.131 12:32:58 -- lvol/hotremove.sh@51 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:13:16.131 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.131 12:32:58 -- lvol/hotremove.sh@51 -- # lvols='[ 00:13:16.131 { 00:13:16.131 "name": "d2ee3c2c-b90c-4b9d-b128-624dcdbd5330", 00:13:16.131 "aliases": [ 00:13:16.131 "lvs_test/lvol_test1" 00:13:16.131 ], 00:13:16.131 "product_name": "Logical Volume", 00:13:16.131 "block_size": 512, 00:13:16.131 "num_blocks": 57344, 00:13:16.131 "uuid": "d2ee3c2c-b90c-4b9d-b128-624dcdbd5330", 00:13:16.131 "assigned_rate_limits": { 00:13:16.131 "rw_ios_per_sec": 0, 00:13:16.131 "rw_mbytes_per_sec": 0, 00:13:16.131 "r_mbytes_per_sec": 0, 00:13:16.131 "w_mbytes_per_sec": 0 00:13:16.131 }, 00:13:16.131 "claimed": false, 00:13:16.131 "zoned": false, 00:13:16.131 "supported_io_types": { 00:13:16.131 "read": true, 00:13:16.131 "write": true, 00:13:16.131 "unmap": true, 00:13:16.131 "write_zeroes": true, 00:13:16.131 "flush": false, 00:13:16.131 "reset": true, 00:13:16.131 "compare": false, 00:13:16.131 "compare_and_write": false, 00:13:16.131 "abort": false, 00:13:16.131 "nvme_admin": false, 00:13:16.131 "nvme_io": false 00:13:16.131 }, 00:13:16.131 "memory_domains": [ 00:13:16.131 { 00:13:16.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.131 "dma_device_type": 2 00:13:16.131 } 00:13:16.131 ], 00:13:16.131 "driver_specific": 
{ 00:13:16.131 "lvol": { 00:13:16.131 "lvol_store_uuid": "0af68c99-2757-4353-87b3-49e5b7c6b635", 00:13:16.131 "base_bdev": "Malloc1", 00:13:16.131 "thin_provision": false, 00:13:16.131 "snapshot": false, 00:13:16.131 "clone": false, 00:13:16.131 "esnap_clone": false 00:13:16.131 } 00:13:16.131 } 00:13:16.131 }, 00:13:16.131 { 00:13:16.131 "name": "df0d9997-bed6-4382-8ed8-6b92f810d267", 00:13:16.131 "aliases": [ 00:13:16.131 "lvs_test/lvol_test2" 00:13:16.131 ], 00:13:16.131 "product_name": "Logical Volume", 00:13:16.131 "block_size": 512, 00:13:16.131 "num_blocks": 57344, 00:13:16.131 "uuid": "df0d9997-bed6-4382-8ed8-6b92f810d267", 00:13:16.131 "assigned_rate_limits": { 00:13:16.131 "rw_ios_per_sec": 0, 00:13:16.131 "rw_mbytes_per_sec": 0, 00:13:16.131 "r_mbytes_per_sec": 0, 00:13:16.131 "w_mbytes_per_sec": 0 00:13:16.131 }, 00:13:16.131 "claimed": false, 00:13:16.131 "zoned": false, 00:13:16.131 "supported_io_types": { 00:13:16.131 "read": true, 00:13:16.131 "write": true, 00:13:16.131 "unmap": true, 00:13:16.131 "write_zeroes": true, 00:13:16.131 "flush": false, 00:13:16.131 "reset": true, 00:13:16.131 "compare": false, 00:13:16.131 "compare_and_write": false, 00:13:16.131 "abort": false, 00:13:16.131 "nvme_admin": false, 00:13:16.131 "nvme_io": false 00:13:16.131 }, 00:13:16.131 "memory_domains": [ 00:13:16.131 { 00:13:16.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.131 "dma_device_type": 2 00:13:16.131 } 00:13:16.131 ], 00:13:16.131 "driver_specific": { 00:13:16.131 "lvol": { 00:13:16.131 "lvol_store_uuid": "0af68c99-2757-4353-87b3-49e5b7c6b635", 00:13:16.131 "base_bdev": "Malloc1", 00:13:16.131 "thin_provision": false, 00:13:16.131 "snapshot": false, 00:13:16.131 "clone": false, 00:13:16.131 "esnap_clone": false 00:13:16.131 } 00:13:16.131 } 00:13:16.131 }, 00:13:16.131 { 00:13:16.131 "name": "01b5827e-1266-48cd-ab52-6f503192661c", 00:13:16.131 "aliases": [ 00:13:16.131 "lvs_test/lvol_test3" 00:13:16.131 ], 00:13:16.131 "product_name": "Logical Volume", 00:13:16.131 "block_size": 512, 00:13:16.131 "num_blocks": 57344, 00:13:16.131 "uuid": "01b5827e-1266-48cd-ab52-6f503192661c", 00:13:16.131 "assigned_rate_limits": { 00:13:16.131 "rw_ios_per_sec": 0, 00:13:16.131 "rw_mbytes_per_sec": 0, 00:13:16.131 "r_mbytes_per_sec": 0, 00:13:16.131 "w_mbytes_per_sec": 0 00:13:16.131 }, 00:13:16.131 "claimed": false, 00:13:16.131 "zoned": false, 00:13:16.131 "supported_io_types": { 00:13:16.131 "read": true, 00:13:16.131 "write": true, 00:13:16.131 "unmap": true, 00:13:16.131 "write_zeroes": true, 00:13:16.131 "flush": false, 00:13:16.131 "reset": true, 00:13:16.131 "compare": false, 00:13:16.131 "compare_and_write": false, 00:13:16.131 "abort": false, 00:13:16.131 "nvme_admin": false, 00:13:16.131 "nvme_io": false 00:13:16.131 }, 00:13:16.131 "memory_domains": [ 00:13:16.131 { 00:13:16.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.131 "dma_device_type": 2 00:13:16.131 } 00:13:16.131 ], 00:13:16.131 "driver_specific": { 00:13:16.131 "lvol": { 00:13:16.131 "lvol_store_uuid": "0af68c99-2757-4353-87b3-49e5b7c6b635", 00:13:16.131 "base_bdev": "Malloc1", 00:13:16.131 "thin_provision": false, 00:13:16.131 "snapshot": false, 00:13:16.131 "clone": false, 00:13:16.131 "esnap_clone": false 00:13:16.131 } 00:13:16.131 } 00:13:16.131 }, 00:13:16.131 { 00:13:16.131 "name": "0f8f91be-6291-4a9f-9cc3-56dbd580e092", 00:13:16.131 "aliases": [ 00:13:16.131 "lvs_test/lvol_test4" 00:13:16.131 ], 00:13:16.131 "product_name": "Logical Volume", 00:13:16.131 "block_size": 512, 00:13:16.131 "num_blocks": 
57344, 00:13:16.131 "uuid": "0f8f91be-6291-4a9f-9cc3-56dbd580e092", 00:13:16.131 "assigned_rate_limits": { 00:13:16.131 "rw_ios_per_sec": 0, 00:13:16.131 "rw_mbytes_per_sec": 0, 00:13:16.131 "r_mbytes_per_sec": 0, 00:13:16.131 "w_mbytes_per_sec": 0 00:13:16.131 }, 00:13:16.131 "claimed": false, 00:13:16.131 "zoned": false, 00:13:16.131 "supported_io_types": { 00:13:16.131 "read": true, 00:13:16.132 "write": true, 00:13:16.132 "unmap": true, 00:13:16.132 "write_zeroes": true, 00:13:16.132 "flush": false, 00:13:16.132 "reset": true, 00:13:16.132 "compare": false, 00:13:16.132 "compare_and_write": false, 00:13:16.132 "abort": false, 00:13:16.132 "nvme_admin": false, 00:13:16.132 "nvme_io": false 00:13:16.132 }, 00:13:16.132 "memory_domains": [ 00:13:16.132 { 00:13:16.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.132 "dma_device_type": 2 00:13:16.132 } 00:13:16.132 ], 00:13:16.132 "driver_specific": { 00:13:16.132 "lvol": { 00:13:16.132 "lvol_store_uuid": "0af68c99-2757-4353-87b3-49e5b7c6b635", 00:13:16.132 "base_bdev": "Malloc1", 00:13:16.132 "thin_provision": false, 00:13:16.132 "snapshot": false, 00:13:16.132 "clone": false, 00:13:16.132 "esnap_clone": false 00:13:16.132 } 00:13:16.132 } 00:13:16.132 } 00:13:16.132 ]' 00:13:16.132 12:32:58 -- lvol/hotremove.sh@52 -- # jq length 00:13:16.132 12:32:58 -- lvol/hotremove.sh@52 -- # '[' 4 == 4 ']' 00:13:16.132 12:32:58 -- lvol/hotremove.sh@55 -- # rpc_cmd bdev_lvol_delete_lvstore -u 0af68c99-2757-4353-87b3-49e5b7c6b635 00:13:16.132 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.132 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.132 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.132 12:32:58 -- lvol/hotremove.sh@56 -- # rpc_cmd bdev_lvol_get_lvstores -u 0af68c99-2757-4353-87b3-49e5b7c6b635 00:13:16.132 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.132 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.132 request: 00:13:16.132 { 00:13:16.132 "uuid": "0af68c99-2757-4353-87b3-49e5b7c6b635", 00:13:16.132 "method": "bdev_lvol_get_lvstores", 00:13:16.132 "req_id": 1 00:13:16.132 } 00:13:16.132 Got JSON-RPC error response 00:13:16.132 response: 00:13:16.132 { 00:13:16.132 "code": -19, 00:13:16.132 "message": "No such device" 00:13:16.132 } 00:13:16.132 12:32:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:16.132 12:32:58 -- lvol/hotremove.sh@59 -- # rpc_cmd bdev_get_bdevs 00:13:16.132 12:32:58 -- lvol/hotremove.sh@59 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:13:16.132 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.132 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.132 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.132 12:32:58 -- lvol/hotremove.sh@59 -- # lvols='[]' 00:13:16.132 12:32:58 -- lvol/hotremove.sh@60 -- # jq length 00:13:16.391 12:32:58 -- lvol/hotremove.sh@60 -- # '[' 0 == 0 ']' 00:13:16.391 12:32:58 -- lvol/hotremove.sh@63 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:16.391 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.391 12:32:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.650 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.650 12:32:58 -- lvol/hotremove.sh@64 -- # check_leftover_devices 00:13:16.650 12:32:58 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:16.650 12:32:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.650 12:32:58 -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.650 12:32:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.650 12:32:58 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:16.650 12:32:58 -- lvol/common.sh@26 -- # jq length 00:13:16.650 12:32:59 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:16.650 12:32:59 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:16.650 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.650 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.650 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.650 12:32:59 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:16.650 12:32:59 -- lvol/common.sh@28 -- # jq length 00:13:16.650 ************************************ 00:13:16.650 END TEST test_hotremove_lvol_store_multiple_lvols 00:13:16.650 ************************************ 00:13:16.650 12:32:59 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:16.650 00:13:16.650 real 0m0.865s 00:13:16.650 user 0m0.356s 00:13:16.650 sys 0m0.049s 00:13:16.650 12:32:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.651 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.651 12:32:59 -- lvol/hotremove.sh@213 -- # run_test test_hotremove_lvol_store_base test_hotremove_lvol_store_base 00:13:16.651 12:32:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:16.651 12:32:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:16.651 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.651 ************************************ 00:13:16.651 START TEST test_hotremove_lvol_store_base 00:13:16.651 ************************************ 00:13:16.651 12:32:59 -- common/autotest_common.sh@1104 -- # test_hotremove_lvol_store_base 00:13:16.651 12:32:59 -- lvol/hotremove.sh@70 -- # rpc_cmd bdev_malloc_create 128 512 00:13:16.651 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.651 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.911 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.911 12:32:59 -- lvol/hotremove.sh@70 -- # malloc_name=Malloc2 00:13:16.911 12:32:59 -- lvol/hotremove.sh@71 -- # rpc_cmd bdev_lvol_create_lvstore Malloc2 lvs_test 00:13:16.911 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.911 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.911 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.911 12:32:59 -- lvol/hotremove.sh@71 -- # lvs_uuid=6888b2f1-46a6-431a-88de-9ddf4e48100d 00:13:16.911 12:32:59 -- lvol/hotremove.sh@74 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:16.911 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.911 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.911 [2024-10-01 12:32:59.299527] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Malloc2 being removed: closing lvstore lvs_test 00:13:17.170 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.170 12:32:59 -- lvol/hotremove.sh@76 -- # rpc_cmd bdev_lvol_get_lvstores -u 6888b2f1-46a6-431a-88de-9ddf4e48100d 00:13:17.170 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.170 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.170 request: 00:13:17.170 { 00:13:17.170 "uuid": "6888b2f1-46a6-431a-88de-9ddf4e48100d", 00:13:17.170 "method": "bdev_lvol_get_lvstores", 00:13:17.170 "req_id": 1 00:13:17.170 } 00:13:17.171 Got JSON-RPC error response 00:13:17.171 response: 00:13:17.171 { 00:13:17.171 "code": -19, 
00:13:17.171 "message": "No such device" 00:13:17.171 } 00:13:17.171 12:32:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:17.171 12:32:59 -- lvol/hotremove.sh@78 -- # rpc_cmd bdev_lvol_delete_lvstore -u 6888b2f1-46a6-431a-88de-9ddf4e48100d 00:13:17.171 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.171 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 request: 00:13:17.171 { 00:13:17.171 "uuid": "6888b2f1-46a6-431a-88de-9ddf4e48100d", 00:13:17.171 "method": "bdev_lvol_delete_lvstore", 00:13:17.171 "req_id": 1 00:13:17.171 } 00:13:17.171 Got JSON-RPC error response 00:13:17.171 response: 00:13:17.171 { 00:13:17.171 "code": -19, 00:13:17.171 "message": "No such device" 00:13:17.171 } 00:13:17.171 12:32:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:17.171 12:32:59 -- lvol/hotremove.sh@79 -- # check_leftover_devices 00:13:17.171 12:32:59 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:17.171 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.171 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.171 12:32:59 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:17.171 12:32:59 -- lvol/common.sh@26 -- # jq length 00:13:17.171 12:32:59 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:17.171 12:32:59 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:17.171 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.171 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.171 12:32:59 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:17.171 12:32:59 -- lvol/common.sh@28 -- # jq length 00:13:17.430 ************************************ 00:13:17.430 END TEST test_hotremove_lvol_store_base 00:13:17.430 ************************************ 00:13:17.430 12:32:59 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:17.430 00:13:17.430 real 0m0.578s 00:13:17.430 user 0m0.108s 00:13:17.430 sys 0m0.018s 00:13:17.430 12:32:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.430 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.430 12:32:59 -- lvol/hotremove.sh@214 -- # run_test test_hotremove_lvol_store_base_with_lvols test_hotremove_lvol_store_base_with_lvols 00:13:17.430 12:32:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:17.430 12:32:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:17.430 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.430 ************************************ 00:13:17.430 START TEST test_hotremove_lvol_store_base_with_lvols 00:13:17.430 ************************************ 00:13:17.430 12:32:59 -- common/autotest_common.sh@1104 -- # test_hotremove_lvol_store_base_with_lvols 00:13:17.430 12:32:59 -- lvol/hotremove.sh@85 -- # rpc_cmd bdev_malloc_create 128 512 00:13:17.430 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.430 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.430 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.430 12:32:59 -- lvol/hotremove.sh@85 -- # malloc_name=Malloc3 00:13:17.430 12:32:59 -- lvol/hotremove.sh@86 -- # rpc_cmd bdev_lvol_create_lvstore Malloc3 lvs_test 00:13:17.430 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.430 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.430 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:13:17.430 12:32:59 -- lvol/hotremove.sh@86 -- # lvs_uuid=a3589d4a-1455-4f6e-808b-0fba1f8769d0 00:13:17.430 12:32:59 -- lvol/hotremove.sh@87 -- # rpc_cmd bdev_lvol_create -u a3589d4a-1455-4f6e-808b-0fba1f8769d0 lvol_test 124 00:13:17.430 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.430 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.430 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.430 12:32:59 -- lvol/hotremove.sh@87 -- # lvol_uuid=3647b16b-e973-47f4-9371-1cf1999fa754 00:13:17.430 12:32:59 -- lvol/hotremove.sh@89 -- # rpc_cmd bdev_get_bdevs -b 3647b16b-e973-47f4-9371-1cf1999fa754 00:13:17.430 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.430 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.430 [ 00:13:17.430 { 00:13:17.430 "name": "3647b16b-e973-47f4-9371-1cf1999fa754", 00:13:17.430 "aliases": [ 00:13:17.430 "lvs_test/lvol_test" 00:13:17.430 ], 00:13:17.430 "product_name": "Logical Volume", 00:13:17.430 "block_size": 512, 00:13:17.430 "num_blocks": 253952, 00:13:17.430 "uuid": "3647b16b-e973-47f4-9371-1cf1999fa754", 00:13:17.430 "assigned_rate_limits": { 00:13:17.430 "rw_ios_per_sec": 0, 00:13:17.430 "rw_mbytes_per_sec": 0, 00:13:17.430 "r_mbytes_per_sec": 0, 00:13:17.430 "w_mbytes_per_sec": 0 00:13:17.430 }, 00:13:17.689 "claimed": false, 00:13:17.689 "zoned": false, 00:13:17.689 "supported_io_types": { 00:13:17.689 "read": true, 00:13:17.689 "write": true, 00:13:17.689 "unmap": true, 00:13:17.689 "write_zeroes": true, 00:13:17.689 "flush": false, 00:13:17.689 "reset": true, 00:13:17.689 "compare": false, 00:13:17.689 "compare_and_write": false, 00:13:17.689 "abort": false, 00:13:17.689 "nvme_admin": false, 00:13:17.689 "nvme_io": false 00:13:17.689 }, 00:13:17.689 "memory_domains": [ 00:13:17.689 { 00:13:17.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.689 "dma_device_type": 2 00:13:17.689 } 00:13:17.689 ], 00:13:17.689 "driver_specific": { 00:13:17.689 "lvol": { 00:13:17.689 "lvol_store_uuid": "a3589d4a-1455-4f6e-808b-0fba1f8769d0", 00:13:17.689 "base_bdev": "Malloc3", 00:13:17.689 "thin_provision": false, 00:13:17.689 "snapshot": false, 00:13:17.689 "clone": false, 00:13:17.689 "esnap_clone": false 00:13:17.689 } 00:13:17.689 } 00:13:17.689 } 00:13:17.689 ] 00:13:17.689 12:32:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.689 12:32:59 -- lvol/hotremove.sh@92 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:17.689 12:32:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.689 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:17.689 [2024-10-01 12:32:59.965806] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Malloc3 being removed: closing lvstore lvs_test 00:13:17.949 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.949 12:33:00 -- lvol/hotremove.sh@94 -- # rpc_cmd bdev_get_bdevs -b 3647b16b-e973-47f4-9371-1cf1999fa754 00:13:17.949 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.949 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.949 [2024-10-01 12:33:00.261126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 3647b16b-e973-47f4-9371-1cf1999fa754 00:13:17.949 request: 00:13:17.949 { 00:13:17.949 "name": "3647b16b-e973-47f4-9371-1cf1999fa754", 00:13:17.949 "method": "bdev_get_bdevs", 00:13:17.949 "req_id": 1 00:13:17.949 } 00:13:17.949 Got JSON-RPC error response 00:13:17.949 response: 00:13:17.949 { 00:13:17.949 "code": -19, 00:13:17.949 "message": 
"No such device" 00:13:17.949 } 00:13:17.949 12:33:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:17.949 12:33:00 -- lvol/hotremove.sh@96 -- # rpc_cmd bdev_lvol_get_lvstores -u a3589d4a-1455-4f6e-808b-0fba1f8769d0 00:13:17.949 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.949 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.949 request: 00:13:17.949 { 00:13:17.949 "uuid": "a3589d4a-1455-4f6e-808b-0fba1f8769d0", 00:13:17.949 "method": "bdev_lvol_get_lvstores", 00:13:17.949 "req_id": 1 00:13:17.949 } 00:13:17.949 Got JSON-RPC error response 00:13:17.949 response: 00:13:17.949 { 00:13:17.949 "code": -19, 00:13:17.949 "message": "No such device" 00:13:17.949 } 00:13:17.949 12:33:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:17.949 12:33:00 -- lvol/hotremove.sh@99 -- # rpc_cmd bdev_lvol_delete_lvstore -u a3589d4a-1455-4f6e-808b-0fba1f8769d0 00:13:17.949 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.949 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.949 request: 00:13:17.949 { 00:13:17.949 "uuid": "a3589d4a-1455-4f6e-808b-0fba1f8769d0", 00:13:17.949 "method": "bdev_lvol_delete_lvstore", 00:13:17.949 "req_id": 1 00:13:17.949 } 00:13:17.949 Got JSON-RPC error response 00:13:17.949 response: 00:13:17.949 { 00:13:17.949 "code": -19, 00:13:17.949 "message": "No such device" 00:13:17.949 } 00:13:17.949 12:33:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:17.949 12:33:00 -- lvol/hotremove.sh@100 -- # check_leftover_devices 00:13:17.949 12:33:00 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:17.949 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.949 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.949 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.949 12:33:00 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:17.949 12:33:00 -- lvol/common.sh@26 -- # jq length 00:13:17.949 12:33:00 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:17.949 12:33:00 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:17.949 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.949 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.950 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.950 12:33:00 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:17.950 12:33:00 -- lvol/common.sh@28 -- # jq length 00:13:17.950 ************************************ 00:13:17.950 END TEST test_hotremove_lvol_store_base_with_lvols 00:13:17.950 ************************************ 00:13:17.950 12:33:00 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:17.950 00:13:17.950 real 0m0.627s 00:13:17.950 user 0m0.112s 00:13:17.950 sys 0m0.031s 00:13:17.950 12:33:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.950 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.950 12:33:00 -- lvol/hotremove.sh@215 -- # run_test test_bdev_lvol_delete_lvstore_with_clones test_bdev_lvol_delete_lvstore_with_clones 00:13:17.950 12:33:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:17.950 12:33:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:17.950 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.950 ************************************ 00:13:17.950 START TEST test_bdev_lvol_delete_lvstore_with_clones 00:13:17.950 ************************************ 00:13:17.950 12:33:00 -- common/autotest_common.sh@1104 -- # test_bdev_lvol_delete_lvstore_with_clones 00:13:17.950 
12:33:00 -- lvol/hotremove.sh@104 -- # local snapshot_name1=snapshot1 snapshot_uuid1 00:13:17.950 12:33:00 -- lvol/hotremove.sh@105 -- # local snapshot_name2=snapshot2 snapshot_uuid2 00:13:17.950 12:33:00 -- lvol/hotremove.sh@106 -- # local clone_name=clone clone_uuid 00:13:17.950 12:33:00 -- lvol/hotremove.sh@107 -- # local lbd_name=lbd_test 00:13:17.950 12:33:00 -- lvol/hotremove.sh@109 -- # local bdev_uuid 00:13:17.950 12:33:00 -- lvol/hotremove.sh@110 -- # local lvstore_name=lvs_name lvstore_uuid 00:13:17.950 12:33:00 -- lvol/hotremove.sh@111 -- # local malloc_dev 00:13:17.950 12:33:00 -- lvol/hotremove.sh@113 -- # rpc_cmd bdev_malloc_create 256 512 00:13:17.950 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.950 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.209 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.209 12:33:00 -- lvol/hotremove.sh@113 -- # malloc_dev=Malloc4 00:13:18.209 12:33:00 -- lvol/hotremove.sh@114 -- # rpc_cmd bdev_lvol_create_lvstore Malloc4 lvs_name 00:13:18.209 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.209 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.469 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.469 12:33:00 -- lvol/hotremove.sh@114 -- # lvstore_uuid=7d97f5b8-9a53-418f-9e31-71909c74cd56 00:13:18.469 12:33:00 -- lvol/hotremove.sh@116 -- # get_lvs_jq bdev_lvol_get_lvstores -u 7d97f5b8-9a53-418f-9e31-71909c74cd56 00:13:18.469 12:33:00 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u 7d97f5b8-9a53-418f-9e31-71909c74cd56 00:13:18.470 12:33:00 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:13:18.470 12:33:00 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:18.470 12:33:00 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:18.470 12:33:00 -- common/autotest_common.sh@586 -- # local jq val 00:13:18.470 12:33:00 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:18.470 12:33:00 -- common/autotest_common.sh@596 -- # local lvs 00:13:18.470 12:33:00 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:18.470 12:33:00 -- common/autotest_common.sh@611 -- # local bdev 00:13:18.470 12:33:00 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:13:18.470 12:33:00 
-- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:13:18.470 12:33:00 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:18.470 12:33:00 -- common/autotest_common.sh@620 -- # shift 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u 7d97f5b8-9a53-418f-9e31-71909c74cd56 00:13:18.470 12:33:00 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:13:18.470 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.470 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.470 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=7d97f5b8-9a53-418f-9e31-71909c74cd56 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=Malloc4 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:13:18.470 12:33:00 -- lvol/hotremove.sh@117 -- # [[ 7d97f5b8-9a53-418f-9e31-71909c74cd56 == \7\d\9\7\f\5\b\8\-\9\a\5\3\-\4\1\8\f\-\9\e\3\1\-\7\1\9\0\9\c\7\4\c\d\5\6 ]] 00:13:18.470 12:33:00 -- lvol/hotremove.sh@118 -- # [[ lvs_name == \l\v\s\_\n\a\m\e ]] 00:13:18.470 12:33:00 -- lvol/hotremove.sh@119 -- # [[ Malloc4 == \M\a\l\l\o\c\4 ]] 
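The block above walks the bdev_lvol_get_lvstores output field by field (uuid, name, base_bdev, total_data_clusters, free_clusters, block_size, cluster_size) before sizing lbd_test. The same fields can be read directly with jq instead of the generic helper (a sketch using the field names shown in the trace):

scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" |
  jq -r '.[0] | "\(.name) on \(.base_bdev): \(.free_clusters)/\(.total_data_clusters) clusters free, cluster_size \(.cluster_size), block_size \(.block_size)"'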
00:13:18.470 12:33:00 -- lvol/hotremove.sh@121 -- # size=63 00:13:18.470 12:33:00 -- lvol/hotremove.sh@123 -- # rpc_cmd bdev_lvol_create -u 7d97f5b8-9a53-418f-9e31-71909c74cd56 lbd_test 63 00:13:18.470 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.470 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.470 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.470 12:33:00 -- lvol/hotremove.sh@123 -- # bdev_uuid=b5886cb4-3952-484d-9e52-8dff9e185257 00:13:18.470 12:33:00 -- lvol/hotremove.sh@125 -- # get_bdev_jq bdev_get_bdevs -b b5886cb4-3952-484d-9e52-8dff9e185257 00:13:18.470 12:33:00 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b b5886cb4-3952-484d-9e52-8dff9e185257 00:13:18.470 12:33:00 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:18.470 12:33:00 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:18.470 12:33:00 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:18.470 12:33:00 -- common/autotest_common.sh@586 -- # local jq val 00:13:18.470 12:33:00 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:18.470 12:33:00 -- common/autotest_common.sh@596 -- # local lvs 00:13:18.470 12:33:00 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:18.470 12:33:00 -- common/autotest_common.sh@611 -- # local bdev 00:13:18.470 12:33:00 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," 
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:18.470 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.470 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:18.470 12:33:00 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:18.470 12:33:00 -- 
common/autotest_common.sh@620 -- # shift 00:13:18.470 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.470 12:33:00 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b b5886cb4-3952-484d-9e52-8dff9e185257 00:13:18.471 12:33:00 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:18.471 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.471 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.471 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=b5886cb4-3952-484d-9e52-8dff9e185257 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/lbd_test 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=b5886cb4-3952-484d-9e52-8dff9e185257 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:18.471 12:33:00 -- lvol/hotremove.sh@127 -- # rpc_cmd bdev_lvol_snapshot b5886cb4-3952-484d-9e52-8dff9e185257 snapshot1 00:13:18.471 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.471 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.471 12:33:00 
-- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.471 12:33:00 -- lvol/hotremove.sh@127 -- # snapshot_uuid1=7f923760-c8e7-43e0-be66-39980e51ea25 00:13:18.471 12:33:00 -- lvol/hotremove.sh@129 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/snapshot1 00:13:18.471 12:33:00 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/snapshot1 00:13:18.471 12:33:00 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:18.471 12:33:00 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:18.471 12:33:00 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:18.471 12:33:00 -- common/autotest_common.sh@586 -- # local jq val 00:13:18.471 12:33:00 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:18.471 12:33:00 -- common/autotest_common.sh@596 -- # local lvs 00:13:18.471 12:33:00 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:18.471 12:33:00 -- common/autotest_common.sh@611 -- # local bdev 00:13:18.471 12:33:00 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:18.471 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.471 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:18.471 12:33:00 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:18.471 12:33:00 -- common/autotest_common.sh@620 -- # shift 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/snapshot1 00:13:18.471 12:33:00 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," 
",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:18.471 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.471 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.471 12:33:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=7f923760-c8e7-43e0-be66-39980e51ea25 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/snapshot1 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=7f923760-c8e7-43e0-be66-39980e51ea25 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:18.471 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.471 12:33:00 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:18.471 12:33:00 -- lvol/hotremove.sh@130 -- # [[ 7f923760-c8e7-43e0-be66-39980e51ea25 == \7\f\9\2\3\7\6\0\-\c\8\e\7\-\4\3\e\0\-\b\e\6\6\-\3\9\9\8\0\e\5\1\e\a\2\5 ]] 00:13:18.471 12:33:00 -- lvol/hotremove.sh@131 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:13:18.471 12:33:00 -- lvol/hotremove.sh@132 -- # [[ lvs_name/snapshot1 == \l\v\s\_\n\a\m\e\/\s\n\a\p\s\h\o\t\1 ]] 00:13:18.472 12:33:00 -- lvol/hotremove.sh@134 -- # rpc_cmd bdev_lvol_clone lvs_name/snapshot1 clone 00:13:18.472 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.472 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.472 12:33:00 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.472 12:33:00 -- lvol/hotremove.sh@134 -- # clone_uuid=567d690f-7cb8-4f02-8892-316f1c6710c7 00:13:18.472 12:33:00 -- lvol/hotremove.sh@136 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/clone 00:13:18.472 12:33:00 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/clone 00:13:18.472 12:33:00 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:18.472 12:33:00 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:18.472 12:33:00 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:18.472 12:33:00 -- common/autotest_common.sh@586 -- # local jq val 00:13:18.472 12:33:00 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:18.472 12:33:00 -- common/autotest_common.sh@596 -- # local lvs 00:13:18.472 12:33:00 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:18.472 12:33:00 -- common/autotest_common.sh@611 -- # local bdev 00:13:18.472 12:33:00 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:18.472 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.472 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," 
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:18.731 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.731 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:18.731 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.731 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:18.731 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:18.732 12:33:00 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:00 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:18.732 12:33:00 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:18.732 12:33:00 -- common/autotest_common.sh@620 -- # shift 00:13:18.732 12:33:00 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:00 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/clone 00:13:18.732 12:33:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.732 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.732 12:33:00 -- common/autotest_common.sh@582 -- # jq -jr 
'"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:18.732 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=567d690f-7cb8-4f02-8892-316f1c6710c7 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/clone 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=567d690f-7cb8-4f02-8892-316f1c6710c7 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=snapshot1 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:18.732 12:33:01 -- lvol/hotremove.sh@137 -- # [[ 567d690f-7cb8-4f02-8892-316f1c6710c7 == \5\6\7\d\6\9\0\f\-\7\c\b\8\-\4\f\0\2\-\8\8\9\2\-\3\1\6\f\1\c\6\7\1\0\c\7 ]] 00:13:18.732 12:33:01 -- lvol/hotremove.sh@138 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:13:18.732 12:33:01 -- lvol/hotremove.sh@139 -- # [[ lvs_name/clone == \l\v\s\_\n\a\m\e\/\c\l\o\n\e ]] 00:13:18.732 12:33:01 -- lvol/hotremove.sh@141 -- # rpc_cmd bdev_lvol_snapshot 567d690f-7cb8-4f02-8892-316f1c6710c7 snapshot2 00:13:18.732 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.732 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.732 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.732 
12:33:01 -- lvol/hotremove.sh@141 -- # snapshot_uuid2=3926e167-b63f-48db-bb29-39ebf290e1f3 00:13:18.732 12:33:01 -- lvol/hotremove.sh@143 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/snapshot2 00:13:18.732 12:33:01 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/snapshot2 00:13:18.732 12:33:01 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:18.732 12:33:01 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:18.732 12:33:01 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:18.732 12:33:01 -- common/autotest_common.sh@586 -- # local jq val 00:13:18.732 12:33:01 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:18.732 12:33:01 -- common/autotest_common.sh@596 -- # local lvs 00:13:18.732 12:33:01 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:18.732 12:33:01 -- common/autotest_common.sh@611 -- # local bdev 00:13:18.732 12:33:01 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," 
",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:18.732 12:33:01 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:18.732 12:33:01 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:18.732 12:33:01 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:18.732 12:33:01 -- common/autotest_common.sh@620 -- # shift 00:13:18.732 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.732 12:33:01 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/snapshot2 00:13:18.732 12:33:01 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," 
",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:18.732 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.733 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.733 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3926e167-b63f-48db-bb29-39ebf290e1f3 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/snapshot2 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3926e167-b63f-48db-bb29-39ebf290e1f3 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=snapshot1 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:18.733 12:33:01 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:18.733 12:33:01 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:18.733 12:33:01 -- lvol/hotremove.sh@144 -- # [[ 3926e167-b63f-48db-bb29-39ebf290e1f3 == \3\9\2\6\e\1\6\7\-\b\6\3\f\-\4\8\d\b\-\b\b\2\9\-\3\9\e\b\f\2\9\0\e\1\f\3 ]] 00:13:18.733 12:33:01 -- lvol/hotremove.sh@145 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:13:18.733 12:33:01 -- lvol/hotremove.sh@146 -- # [[ lvs_name/snapshot2 == \l\v\s\_\n\a\m\e\/\s\n\a\p\s\h\o\t\2 ]] 00:13:18.733 12:33:01 -- lvol/hotremove.sh@148 -- # rpc_cmd bdev_lvol_delete 7f923760-c8e7-43e0-be66-39980e51ea25 00:13:18.733 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.733 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.733 [2024-10-01 12:33:01.136375] vbdev_lvol.c: 640:_vbdev_lvol_destroy: *ERROR*: 
Cannot delete lvol 00:13:18.733 request: 00:13:18.733 { 00:13:18.733 "name": "7f923760-c8e7-43e0-be66-39980e51ea25", 00:13:18.733 "method": "bdev_lvol_delete", 00:13:18.733 "req_id": 1 00:13:18.733 } 00:13:18.733 Got JSON-RPC error response 00:13:18.733 response: 00:13:18.733 { 00:13:18.733 "code": -32603, 00:13:18.733 "message": "Operation not permitted" 00:13:18.733 } 00:13:18.733 12:33:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:18.733 12:33:01 -- lvol/hotremove.sh@149 -- # rpc_cmd bdev_lvol_delete_lvstore -u 7d97f5b8-9a53-418f-9e31-71909c74cd56 00:13:18.733 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.733 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.733 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.733 12:33:01 -- lvol/hotremove.sh@150 -- # rpc_cmd bdev_malloc_delete Malloc4 00:13:18.733 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.733 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.300 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.300 12:33:01 -- lvol/hotremove.sh@152 -- # check_leftover_devices 00:13:19.300 12:33:01 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:19.300 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.300 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.300 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.300 12:33:01 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:19.300 12:33:01 -- lvol/common.sh@26 -- # jq length 00:13:19.300 12:33:01 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:19.300 12:33:01 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:19.300 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.300 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.559 12:33:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.559 12:33:01 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:19.559 12:33:01 -- lvol/common.sh@28 -- # jq length 00:13:19.559 ************************************ 00:13:19.559 END TEST test_bdev_lvol_delete_lvstore_with_clones 00:13:19.559 ************************************ 00:13:19.559 12:33:01 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:19.559 00:13:19.559 real 0m1.405s 00:13:19.559 user 0m0.408s 00:13:19.559 sys 0m0.081s 00:13:19.559 12:33:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.559 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.559 12:33:01 -- lvol/hotremove.sh@216 -- # run_test test_unregister_lvol_bdev test_unregister_lvol_bdev 00:13:19.559 12:33:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:19.559 12:33:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.559 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.559 ************************************ 00:13:19.559 START TEST test_unregister_lvol_bdev 00:13:19.559 ************************************ 00:13:19.559 12:33:01 -- common/autotest_common.sh@1104 -- # test_unregister_lvol_bdev 00:13:19.559 12:33:01 -- lvol/hotremove.sh@158 -- # local snapshot_name1=snapshot1 snapshot_uuid1 00:13:19.559 12:33:01 -- lvol/hotremove.sh@159 -- # local snapshot_name2=snapshot2 snapshot_uuid2 00:13:19.559 12:33:01 -- lvol/hotremove.sh@160 -- # local clone_name=clone clone_uuid 00:13:19.559 12:33:01 -- lvol/hotremove.sh@161 -- # local lbd_name=lbd_test 00:13:19.559 12:33:01 -- lvol/hotremove.sh@163 -- # local bdev_uuid 00:13:19.559 12:33:01 -- 
lvol/hotremove.sh@164 -- # local lvstore_name=lvs_name lvstore_uuid 00:13:19.559 12:33:01 -- lvol/hotremove.sh@165 -- # local malloc_dev 00:13:19.559 12:33:01 -- lvol/hotremove.sh@167 -- # rpc_cmd bdev_malloc_create 256 512 00:13:19.559 12:33:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.559 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.818 12:33:02 -- lvol/hotremove.sh@167 -- # malloc_dev=Malloc5 00:13:19.818 12:33:02 -- lvol/hotremove.sh@168 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs_name 00:13:19.818 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.818 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:19.818 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.818 12:33:02 -- lvol/hotremove.sh@168 -- # lvstore_uuid=0fe71bc2-1358-4d42-b2b1-4386e680556a 00:13:19.818 12:33:02 -- lvol/hotremove.sh@170 -- # get_lvs_jq bdev_lvol_get_lvstores -u 0fe71bc2-1358-4d42-b2b1-4386e680556a 00:13:19.818 12:33:02 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u 0fe71bc2-1358-4d42-b2b1-4386e680556a 00:13:19.818 12:33:02 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:13:19.818 12:33:02 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:19.818 12:33:02 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:19.818 12:33:02 -- common/autotest_common.sh@586 -- # local jq val 00:13:19.818 12:33:02 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:19.818 12:33:02 -- common/autotest_common.sh@596 -- # local lvs 00:13:19.819 12:33:02 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:19.819 12:33:02 -- common/autotest_common.sh@611 -- # local bdev 00:13:19.819 12:33:02 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- 
common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:13:19.819 12:33:02 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:19.819 12:33:02 -- common/autotest_common.sh@620 -- # shift 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u 0fe71bc2-1358-4d42-b2b1-4386e680556a 00:13:19.819 12:33:02 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:13:19.819 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.819 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=0fe71bc2-1358-4d42-b2b1-4386e680556a 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=Malloc5 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:13:19.819 12:33:02 -- lvol/hotremove.sh@171 -- # [[ 0fe71bc2-1358-4d42-b2b1-4386e680556a == \0\f\e\7\1\b\c\2\-\1\3\5\8\-\4\d\4\2\-\b\2\b\1\-\4\3\8\6\e\6\8\0\5\5\6\a ]] 00:13:19.819 12:33:02 -- lvol/hotremove.sh@172 -- # [[ lvs_name == \l\v\s\_\n\a\m\e ]] 00:13:19.819 12:33:02 -- lvol/hotremove.sh@173 -- # [[ Malloc5 == \M\a\l\l\o\c\5 ]] 00:13:19.819 12:33:02 -- lvol/hotremove.sh@175 -- # size=63 00:13:19.819 12:33:02 -- lvol/hotremove.sh@177 -- # rpc_cmd bdev_lvol_create -u 0fe71bc2-1358-4d42-b2b1-4386e680556a lbd_test 63 00:13:19.819 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.819 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.819 12:33:02 -- 
lvol/hotremove.sh@177 -- # bdev_uuid=f76461ac-29b9-432a-8ea9-10461e58db5b 00:13:19.819 12:33:02 -- lvol/hotremove.sh@179 -- # get_bdev_jq bdev_get_bdevs -b f76461ac-29b9-432a-8ea9-10461e58db5b 00:13:19.819 12:33:02 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b f76461ac-29b9-432a-8ea9-10461e58db5b 00:13:19.819 12:33:02 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:19.819 12:33:02 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:19.819 12:33:02 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:19.819 12:33:02 -- common/autotest_common.sh@586 -- # local jq val 00:13:19.819 12:33:02 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:19.819 12:33:02 -- common/autotest_common.sh@596 -- # local lvs 00:13:19.819 12:33:02 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:19.819 12:33:02 -- common/autotest_common.sh@611 -- # local bdev 00:13:19.819 12:33:02 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," 
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:19.819 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:19.819 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:19.819 12:33:02 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:19.819 12:33:02 -- common/autotest_common.sh@620 -- # shift 00:13:19.819 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:19.819 12:33:02 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b f76461ac-29b9-432a-8ea9-10461e58db5b 00:13:19.819 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.819 12:33:02 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," 
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:19.819 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f76461ac-29b9-432a-8ea9-10461e58db5b 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/lbd_test 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f76461ac-29b9-432a-8ea9-10461e58db5b 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.080 12:33:02 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:20.080 12:33:02 -- lvol/hotremove.sh@181 -- # rpc_cmd bdev_lvol_snapshot f76461ac-29b9-432a-8ea9-10461e58db5b snapshot1 00:13:20.080 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.080 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.080 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.080 12:33:02 -- lvol/hotremove.sh@181 -- # snapshot_uuid1=d776acd1-66b6-429b-8052-f2ed66aa40d9 00:13:20.080 12:33:02 -- lvol/hotremove.sh@183 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/snapshot1 00:13:20.080 12:33:02 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/snapshot1 00:13:20.080 12:33:02 -- common/autotest_common.sh@584 
-- # local 'elems=bdev[@]' elem 00:13:20.080 12:33:02 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:20.080 12:33:02 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:20.080 12:33:02 -- common/autotest_common.sh@586 -- # local jq val 00:13:20.080 12:33:02 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:20.080 12:33:02 -- common/autotest_common.sh@596 -- # local lvs 00:13:20.080 12:33:02 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:20.080 12:33:02 -- common/autotest_common.sh@611 -- # local bdev 00:13:20.080 12:33:02 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- 
common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:20.080 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.080 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:20.080 12:33:02 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:20.080 12:33:02 -- common/autotest_common.sh@620 -- # shift 00:13:20.080 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/snapshot1 00:13:20.081 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.081 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.081 12:33:02 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:20.081 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d776acd1-66b6-429b-8052-f2ed66aa40d9 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/snapshot1 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=d776acd1-66b6-429b-8052-f2ed66aa40d9 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:20.081 12:33:02 -- lvol/hotremove.sh@184 -- # [[ d776acd1-66b6-429b-8052-f2ed66aa40d9 == \d\7\7\6\a\c\d\1\-\6\6\b\6\-\4\2\9\b\-\8\0\5\2\-\f\2\e\d\6\6\a\a\4\0\d\9 ]] 00:13:20.081 12:33:02 -- lvol/hotremove.sh@185 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:13:20.081 12:33:02 -- lvol/hotremove.sh@186 -- # [[ lvs_name/snapshot1 == \l\v\s\_\n\a\m\e\/\s\n\a\p\s\h\o\t\1 ]] 00:13:20.081 12:33:02 -- lvol/hotremove.sh@188 -- # rpc_cmd bdev_lvol_clone lvs_name/snapshot1 clone 00:13:20.081 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.081 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.081 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.081 12:33:02 -- lvol/hotremove.sh@188 -- # clone_uuid=90347f87-4988-476d-80fd-2d6b32ec5f79 00:13:20.081 12:33:02 -- lvol/hotremove.sh@190 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/clone 00:13:20.081 12:33:02 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/clone 00:13:20.081 12:33:02 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 
00:13:20.081 12:33:02 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:20.081 12:33:02 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:20.081 12:33:02 -- common/autotest_common.sh@586 -- # local jq val 00:13:20.081 12:33:02 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:20.081 12:33:02 -- common/autotest_common.sh@596 -- # local lvs 00:13:20.081 12:33:02 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:20.081 12:33:02 -- common/autotest_common.sh@611 -- # local bdev 00:13:20.081 12:33:02 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # 
jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:20.081 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.081 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:20.081 12:33:02 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:20.081 12:33:02 -- common/autotest_common.sh@620 -- # shift 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/clone 00:13:20.081 12:33:02 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," 
",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:20.081 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.081 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.081 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=90347f87-4988-476d-80fd-2d6b32ec5f79 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/clone 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=90347f87-4988-476d-80fd-2d6b32ec5f79 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.081 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=snapshot1 00:13:20.081 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:20.082 12:33:02 -- lvol/hotremove.sh@191 -- # [[ 90347f87-4988-476d-80fd-2d6b32ec5f79 == \9\0\3\4\7\f\8\7\-\4\9\8\8\-\4\7\6\d\-\8\0\f\d\-\2\d\6\b\3\2\e\c\5\f\7\9 ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@192 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@193 -- # [[ lvs_name/clone == \l\v\s\_\n\a\m\e\/\c\l\o\n\e ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@195 -- # rpc_cmd bdev_lvol_snapshot 90347f87-4988-476d-80fd-2d6b32ec5f79 snapshot2 00:13:20.082 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.082 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.082 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@195 -- # snapshot_uuid2=7c3de4b1-792a-4dd0-bdf8-1a1a7dc264b6 00:13:20.082 12:33:02 -- lvol/hotremove.sh@197 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/snapshot2 00:13:20.082 12:33:02 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/snapshot2 00:13:20.082 12:33:02 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 
00:13:20.082 12:33:02 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:20.082 12:33:02 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:20.082 12:33:02 -- common/autotest_common.sh@586 -- # local jq val 00:13:20.082 12:33:02 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:20.082 12:33:02 -- common/autotest_common.sh@596 -- # local lvs 00:13:20.082 12:33:02 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:20.082 12:33:02 -- common/autotest_common.sh@611 -- # local bdev 00:13:20.082 12:33:02 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # 
jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:20.082 12:33:02 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:20.082 12:33:02 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:20.082 12:33:02 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:20.082 12:33:02 -- common/autotest_common.sh@620 -- # shift 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/snapshot2 00:13:20.082 12:33:02 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," 
",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:20.082 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.082 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.082 12:33:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=7c3de4b1-792a-4dd0-bdf8-1a1a7dc264b6 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/snapshot2 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=7c3de4b1-792a-4dd0-bdf8-1a1a7dc264b6 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=snapshot1 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:20.082 12:33:02 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:20.082 12:33:02 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:20.082 12:33:02 -- lvol/hotremove.sh@198 -- # [[ 7c3de4b1-792a-4dd0-bdf8-1a1a7dc264b6 == \7\c\3\d\e\4\b\1\-\7\9\2\a\-\4\d\d\0\-\b\d\f\8\-\1\a\1\a\7\d\c\2\6\4\b\6 ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@199 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@200 -- # [[ lvs_name/snapshot2 == \l\v\s\_\n\a\m\e\/\s\n\a\p\s\h\o\t\2 ]] 00:13:20.082 12:33:02 -- lvol/hotremove.sh@202 -- # rpc_cmd bdev_malloc_delete Malloc5 00:13:20.342 12:33:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.342 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.342 [2024-10-01 12:33:02.605302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Malloc5 being removed: closing lvstore lvs_name 00:13:20.910 12:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.910 12:33:03 -- lvol/hotremove.sh@203 -- # check_leftover_devices 00:13:20.910 12:33:03 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:20.910 12:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.910 12:33:03 -- common/autotest_common.sh@10 -- # 
set +x 00:13:20.910 12:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.910 12:33:03 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:20.910 12:33:03 -- lvol/common.sh@26 -- # jq length 00:13:20.910 12:33:03 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:20.910 12:33:03 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:20.910 12:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.910 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:13:20.910 12:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.910 12:33:03 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:20.910 12:33:03 -- lvol/common.sh@28 -- # jq length 00:13:20.910 ************************************ 00:13:20.910 END TEST test_unregister_lvol_bdev 00:13:20.910 ************************************ 00:13:20.910 12:33:03 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:20.910 00:13:20.910 real 0m1.417s 00:13:20.910 user 0m0.448s 00:13:20.910 sys 0m0.071s 00:13:20.910 12:33:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.910 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:13:20.910 12:33:03 -- lvol/hotremove.sh@218 -- # trap - SIGINT SIGTERM EXIT 00:13:20.910 12:33:03 -- lvol/hotremove.sh@219 -- # killprocess 59687 00:13:20.910 12:33:03 -- common/autotest_common.sh@926 -- # '[' -z 59687 ']' 00:13:20.910 12:33:03 -- common/autotest_common.sh@930 -- # kill -0 59687 00:13:20.911 12:33:03 -- common/autotest_common.sh@931 -- # uname 00:13:20.911 12:33:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:20.911 12:33:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59687 00:13:20.911 killing process with pid 59687 00:13:20.911 12:33:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:20.911 12:33:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:20.911 12:33:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59687' 00:13:20.911 12:33:03 -- common/autotest_common.sh@945 -- # kill 59687 00:13:20.911 12:33:03 -- common/autotest_common.sh@950 -- # wait 59687 00:13:23.445 00:13:23.445 real 0m9.908s 00:13:23.445 user 0m11.729s 00:13:23.445 sys 0m1.127s 00:13:23.445 12:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.445 12:33:05 -- common/autotest_common.sh@10 -- # set +x 00:13:23.445 ************************************ 00:13:23.445 END TEST lvol_hotremove 00:13:23.445 ************************************ 00:13:23.445 12:33:05 -- lvol/lvol.sh@17 -- # run_test lvol_tasting /home/vagrant/spdk_repo/spdk/test/lvol/tasting.sh 00:13:23.445 12:33:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:23.445 12:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:23.445 12:33:05 -- common/autotest_common.sh@10 -- # set +x 00:13:23.445 ************************************ 00:13:23.445 START TEST lvol_tasting 00:13:23.445 ************************************ 00:13:23.445 12:33:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/tasting.sh 00:13:23.445 * Looking for test storage... 
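The tasting suite that starts here relies on lvstore metadata living inside the backing bdev: when a bdev carrying an lvstore is deleted and later re-created over the same file, the lvol library examines ("tastes") it and reloads the lvstore, whereas an lvstore that was explicitly deleted must stay gone. A minimal sketch of that round trip with scripts/rpc.py, assuming a running spdk_tgt; the 400M file size and 4096-byte block size mirror this test, the /tmp path is illustrative:
$ truncate -s 400M /tmp/aio_bdev_0
$ scripts/rpc.py bdev_aio_create /tmp/aio_bdev_0 aio_bdev0 4096
$ scripts/rpc.py bdev_lvol_create_lvstore aio_bdev0 lvs_test1 -c 1048576
$ scripts/rpc.py bdev_aio_delete aio_bdev0                     # lvstore metadata stays in the file
$ scripts/rpc.py bdev_aio_create /tmp/aio_bdev_0 aio_bdev0 4096
$ scripts/rpc.py bdev_lvol_get_lvstores -l lvs_test1           # reloaded by tasting on re-create
The first check below does the opposite: it deletes the lvstore before dropping aio_bdev0, so once the bdev is re-created the lookup by the old UUID is expected to fail with -19 (No such device).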
00:13:23.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:13:23.445 12:33:05 -- lvol/tasting.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:23.445 12:33:05 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:23.445 12:33:05 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:23.445 12:33:05 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:23.445 12:33:05 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:23.445 12:33:05 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:23.445 12:33:05 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:23.445 12:33:05 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:23.445 12:33:05 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:23.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.445 12:33:05 -- lvol/tasting.sh@164 -- # spdk_pid=60050 00:13:23.445 12:33:05 -- lvol/tasting.sh@163 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:23.445 12:33:05 -- lvol/tasting.sh@165 -- # trap 'killprocess "$spdk_pid"; rm -f $testdir/aio_bdev_0 $testdir/aio_bdev_1; exit 1' SIGINT SIGTERM EXIT 00:13:23.445 12:33:05 -- lvol/tasting.sh@166 -- # waitforlisten 60050 00:13:23.445 12:33:05 -- common/autotest_common.sh@819 -- # '[' -z 60050 ']' 00:13:23.445 12:33:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.445 12:33:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:23.445 12:33:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.445 12:33:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:23.445 12:33:05 -- common/autotest_common.sh@10 -- # set +x 00:13:23.445 [2024-10-01 12:33:05.611589] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:23.445 [2024-10-01 12:33:05.612023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:13:23.445 [2024-10-01 12:33:05.783171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.445 [2024-10-01 12:33:05.961912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:23.445 [2024-10-01 12:33:05.962401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.820 12:33:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:24.820 12:33:07 -- common/autotest_common.sh@852 -- # return 0 00:13:24.820 12:33:07 -- lvol/tasting.sh@167 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_1 00:13:24.820 12:33:07 -- lvol/tasting.sh@169 -- # run_test test_tasting test_tasting 00:13:24.820 12:33:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:24.820 12:33:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:24.820 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:24.820 ************************************ 00:13:24.820 START TEST test_tasting 00:13:24.820 ************************************ 00:13:24.820 12:33:07 -- common/autotest_common.sh@1104 -- # test_tasting 00:13:24.820 12:33:07 -- lvol/tasting.sh@14 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 aio_bdev0 4096 00:13:24.820 12:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.820 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.080 aio_bdev0 00:13:25.080 12:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.080 12:33:07 -- lvol/tasting.sh@15 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_1 aio_bdev1 4096 00:13:25.080 12:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.080 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.080 aio_bdev1 00:13:25.080 12:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.080 12:33:07 -- lvol/tasting.sh@17 -- # rpc_cmd bdev_lvol_create_lvstore aio_bdev0 lvs_test 00:13:25.080 12:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.080 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.080 12:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.080 12:33:07 -- lvol/tasting.sh@17 -- # lvs_uuid=7157a4b2-5a5c-407c-8198-2af628967cdf 00:13:25.080 12:33:07 -- lvol/tasting.sh@19 -- # rpc_cmd bdev_lvol_delete_lvstore -u 7157a4b2-5a5c-407c-8198-2af628967cdf 00:13:25.080 12:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.080 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.080 12:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.080 12:33:07 -- lvol/tasting.sh@22 -- # rpc_cmd bdev_aio_delete aio_bdev0 00:13:25.080 12:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.080 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.080 12:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.080 12:33:07 -- lvol/tasting.sh@24 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 aio_bdev0 4096 00:13:25.080 12:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.080 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.080 
aio_bdev0 00:13:25.080 12:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.080 12:33:07 -- lvol/tasting.sh@25 -- # sleep 1 00:13:26.018 12:33:08 -- lvol/tasting.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores -u 7157a4b2-5a5c-407c-8198-2af628967cdf 00:13:26.018 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.018 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 request: 00:13:26.018 { 00:13:26.018 "uuid": "7157a4b2-5a5c-407c-8198-2af628967cdf", 00:13:26.018 "method": "bdev_lvol_get_lvstores", 00:13:26.018 "req_id": 1 00:13:26.018 } 00:13:26.018 Got JSON-RPC error response 00:13:26.018 response: 00:13:26.018 { 00:13:26.018 "code": -19, 00:13:26.018 "message": "No such device" 00:13:26.018 } 00:13:26.018 12:33:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:26.018 12:33:08 -- lvol/tasting.sh@30 -- # lvs1_cluster_size=1048576 00:13:26.018 12:33:08 -- lvol/tasting.sh@31 -- # lvs2_cluster_size=33554432 00:13:26.018 12:33:08 -- lvol/tasting.sh@32 -- # rpc_cmd bdev_lvol_create_lvstore aio_bdev0 lvs_test1 -c 1048576 00:13:26.018 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.018 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 12:33:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.018 12:33:08 -- lvol/tasting.sh@32 -- # lvs_uuid1=8249d544-7e67-497e-8e64-848a0275e422 00:13:26.018 12:33:08 -- lvol/tasting.sh@33 -- # rpc_cmd bdev_lvol_create_lvstore aio_bdev1 lvs_test2 -c 33554432 00:13:26.018 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.018 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 12:33:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.018 12:33:08 -- lvol/tasting.sh@33 -- # lvs_uuid2=be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 00:13:26.018 12:33:08 -- lvol/tasting.sh@36 -- # round_down 12 00:13:26.018 12:33:08 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:26.018 12:33:08 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:26.018 12:33:08 -- lvol/common.sh@36 -- # echo 12 00:13:26.018 12:33:08 -- lvol/tasting.sh@36 -- # lvol_size_mb=12 00:13:26.018 12:33:08 -- lvol/tasting.sh@37 -- # lvol_size=12582912 00:13:26.018 12:33:08 -- lvol/tasting.sh@39 -- # seq 1 5 00:13:26.018 12:33:08 -- lvol/tasting.sh@39 -- # for i in $(seq 1 5) 00:13:26.018 12:33:08 -- lvol/tasting.sh@40 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test1 12 00:13:26.018 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.018 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 12:33:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.018 12:33:08 -- lvol/tasting.sh@40 -- # lvol_uuid=36405014-8b69-4916-a933-12d9631f8e3c 00:13:26.018 12:33:08 -- lvol/tasting.sh@41 -- # rpc_cmd bdev_get_bdevs -b 36405014-8b69-4916-a933-12d9631f8e3c 00:13:26.018 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.018 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 12:33:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.018 12:33:08 -- lvol/tasting.sh@41 -- # lvol='[ 00:13:26.018 { 00:13:26.018 "name": "36405014-8b69-4916-a933-12d9631f8e3c", 00:13:26.018 "aliases": [ 00:13:26.018 "lvs_test1/lvol_test1" 00:13:26.018 ], 00:13:26.018 "product_name": "Logical Volume", 00:13:26.018 "block_size": 4096, 00:13:26.018 "num_blocks": 3072, 00:13:26.018 "uuid": "36405014-8b69-4916-a933-12d9631f8e3c", 00:13:26.018 "assigned_rate_limits": { 00:13:26.018 "rw_ios_per_sec": 0, 
00:13:26.018 "rw_mbytes_per_sec": 0, 00:13:26.018 "r_mbytes_per_sec": 0, 00:13:26.018 "w_mbytes_per_sec": 0 00:13:26.018 }, 00:13:26.018 "claimed": false, 00:13:26.018 "zoned": false, 00:13:26.018 "supported_io_types": { 00:13:26.018 "read": true, 00:13:26.018 "write": true, 00:13:26.018 "unmap": true, 00:13:26.018 "write_zeroes": true, 00:13:26.018 "flush": false, 00:13:26.018 "reset": true, 00:13:26.018 "compare": false, 00:13:26.018 "compare_and_write": false, 00:13:26.018 "abort": false, 00:13:26.018 "nvme_admin": false, 00:13:26.018 "nvme_io": false 00:13:26.018 }, 00:13:26.018 "driver_specific": { 00:13:26.018 "lvol": { 00:13:26.018 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:26.018 "base_bdev": "aio_bdev0", 00:13:26.018 "thin_provision": false, 00:13:26.018 "snapshot": false, 00:13:26.018 "clone": false, 00:13:26.018 "esnap_clone": false 00:13:26.018 } 00:13:26.018 } 00:13:26.018 } 00:13:26.018 ]' 00:13:26.018 12:33:08 -- lvol/tasting.sh@43 -- # jq -r '.[0].name' 00:13:26.277 12:33:08 -- lvol/tasting.sh@43 -- # '[' 36405014-8b69-4916-a933-12d9631f8e3c = 36405014-8b69-4916-a933-12d9631f8e3c ']' 00:13:26.277 12:33:08 -- lvol/tasting.sh@44 -- # jq -r '.[0].uuid' 00:13:26.277 12:33:08 -- lvol/tasting.sh@44 -- # '[' 36405014-8b69-4916-a933-12d9631f8e3c = 36405014-8b69-4916-a933-12d9631f8e3c ']' 00:13:26.277 12:33:08 -- lvol/tasting.sh@45 -- # jq -r '.[0].aliases[0]' 00:13:26.277 12:33:08 -- lvol/tasting.sh@45 -- # '[' lvs_test1/lvol_test1 = lvs_test1/lvol_test1 ']' 00:13:26.277 12:33:08 -- lvol/tasting.sh@46 -- # jq -r '.[0].block_size' 00:13:26.277 12:33:08 -- lvol/tasting.sh@46 -- # '[' 4096 = 4096 ']' 00:13:26.277 12:33:08 -- lvol/tasting.sh@47 -- # jq -r '.[0].num_blocks' 00:13:26.277 12:33:08 -- lvol/tasting.sh@47 -- # '[' 3072 = 3072 ']' 00:13:26.277 12:33:08 -- lvol/tasting.sh@39 -- # for i in $(seq 1 5) 00:13:26.277 12:33:08 -- lvol/tasting.sh@40 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test2 12 00:13:26.277 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.277 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.277 12:33:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.277 12:33:08 -- lvol/tasting.sh@40 -- # lvol_uuid=b279cf7e-2e11-424f-9176-101334e10d03 00:13:26.277 12:33:08 -- lvol/tasting.sh@41 -- # rpc_cmd bdev_get_bdevs -b b279cf7e-2e11-424f-9176-101334e10d03 00:13:26.277 12:33:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.277 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.536 12:33:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.536 12:33:08 -- lvol/tasting.sh@41 -- # lvol='[ 00:13:26.536 { 00:13:26.536 "name": "b279cf7e-2e11-424f-9176-101334e10d03", 00:13:26.536 "aliases": [ 00:13:26.536 "lvs_test1/lvol_test2" 00:13:26.536 ], 00:13:26.536 "product_name": "Logical Volume", 00:13:26.536 "block_size": 4096, 00:13:26.536 "num_blocks": 3072, 00:13:26.536 "uuid": "b279cf7e-2e11-424f-9176-101334e10d03", 00:13:26.536 "assigned_rate_limits": { 00:13:26.536 "rw_ios_per_sec": 0, 00:13:26.536 "rw_mbytes_per_sec": 0, 00:13:26.536 "r_mbytes_per_sec": 0, 00:13:26.536 "w_mbytes_per_sec": 0 00:13:26.536 }, 00:13:26.536 "claimed": false, 00:13:26.536 "zoned": false, 00:13:26.536 "supported_io_types": { 00:13:26.536 "read": true, 00:13:26.536 "write": true, 00:13:26.536 "unmap": true, 00:13:26.536 "write_zeroes": true, 00:13:26.536 "flush": false, 00:13:26.536 "reset": true, 00:13:26.536 "compare": false, 00:13:26.536 "compare_and_write": 
false, 00:13:26.536 "abort": false, 00:13:26.536 "nvme_admin": false, 00:13:26.536 "nvme_io": false 00:13:26.536 }, 00:13:26.536 "driver_specific": { 00:13:26.536 "lvol": { 00:13:26.536 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:26.536 "base_bdev": "aio_bdev0", 00:13:26.536 "thin_provision": false, 00:13:26.536 "snapshot": false, 00:13:26.536 "clone": false, 00:13:26.536 "esnap_clone": false 00:13:26.536 } 00:13:26.536 } 00:13:26.536 } 00:13:26.536 ]' 00:13:26.536 12:33:08 -- lvol/tasting.sh@43 -- # jq -r '.[0].name' 00:13:26.536 12:33:08 -- lvol/tasting.sh@43 -- # '[' b279cf7e-2e11-424f-9176-101334e10d03 = b279cf7e-2e11-424f-9176-101334e10d03 ']' 00:13:26.536 12:33:08 -- lvol/tasting.sh@44 -- # jq -r '.[0].uuid' 00:13:26.536 12:33:08 -- lvol/tasting.sh@44 -- # '[' b279cf7e-2e11-424f-9176-101334e10d03 = b279cf7e-2e11-424f-9176-101334e10d03 ']' 00:13:26.536 12:33:08 -- lvol/tasting.sh@45 -- # jq -r '.[0].aliases[0]' 00:13:26.536 12:33:08 -- lvol/tasting.sh@45 -- # '[' lvs_test1/lvol_test2 = lvs_test1/lvol_test2 ']' 00:13:26.536 12:33:08 -- lvol/tasting.sh@46 -- # jq -r '.[0].block_size' 00:13:26.536 12:33:09 -- lvol/tasting.sh@46 -- # '[' 4096 = 4096 ']' 00:13:26.536 12:33:09 -- lvol/tasting.sh@47 -- # jq -r '.[0].num_blocks' 00:13:26.795 12:33:09 -- lvol/tasting.sh@47 -- # '[' 3072 = 3072 ']' 00:13:26.795 12:33:09 -- lvol/tasting.sh@39 -- # for i in $(seq 1 5) 00:13:26.795 12:33:09 -- lvol/tasting.sh@40 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test3 12 00:13:26.795 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.795 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:26.795 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.795 12:33:09 -- lvol/tasting.sh@40 -- # lvol_uuid=e01999c5-16d5-4e6b-b9dd-6f36600412cc 00:13:26.795 12:33:09 -- lvol/tasting.sh@41 -- # rpc_cmd bdev_get_bdevs -b e01999c5-16d5-4e6b-b9dd-6f36600412cc 00:13:26.795 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.795 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:26.795 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.795 12:33:09 -- lvol/tasting.sh@41 -- # lvol='[ 00:13:26.795 { 00:13:26.795 "name": "e01999c5-16d5-4e6b-b9dd-6f36600412cc", 00:13:26.795 "aliases": [ 00:13:26.795 "lvs_test1/lvol_test3" 00:13:26.795 ], 00:13:26.795 "product_name": "Logical Volume", 00:13:26.795 "block_size": 4096, 00:13:26.795 "num_blocks": 3072, 00:13:26.795 "uuid": "e01999c5-16d5-4e6b-b9dd-6f36600412cc", 00:13:26.795 "assigned_rate_limits": { 00:13:26.795 "rw_ios_per_sec": 0, 00:13:26.795 "rw_mbytes_per_sec": 0, 00:13:26.795 "r_mbytes_per_sec": 0, 00:13:26.795 "w_mbytes_per_sec": 0 00:13:26.795 }, 00:13:26.795 "claimed": false, 00:13:26.795 "zoned": false, 00:13:26.795 "supported_io_types": { 00:13:26.795 "read": true, 00:13:26.795 "write": true, 00:13:26.795 "unmap": true, 00:13:26.795 "write_zeroes": true, 00:13:26.795 "flush": false, 00:13:26.795 "reset": true, 00:13:26.795 "compare": false, 00:13:26.795 "compare_and_write": false, 00:13:26.795 "abort": false, 00:13:26.795 "nvme_admin": false, 00:13:26.795 "nvme_io": false 00:13:26.795 }, 00:13:26.795 "driver_specific": { 00:13:26.795 "lvol": { 00:13:26.795 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:26.795 "base_bdev": "aio_bdev0", 00:13:26.795 "thin_provision": false, 00:13:26.795 "snapshot": false, 00:13:26.795 "clone": false, 00:13:26.795 "esnap_clone": false 00:13:26.795 } 00:13:26.795 } 00:13:26.795 } 
00:13:26.795 ]' 00:13:26.795 12:33:09 -- lvol/tasting.sh@43 -- # jq -r '.[0].name' 00:13:26.795 12:33:09 -- lvol/tasting.sh@43 -- # '[' e01999c5-16d5-4e6b-b9dd-6f36600412cc = e01999c5-16d5-4e6b-b9dd-6f36600412cc ']' 00:13:26.795 12:33:09 -- lvol/tasting.sh@44 -- # jq -r '.[0].uuid' 00:13:26.795 12:33:09 -- lvol/tasting.sh@44 -- # '[' e01999c5-16d5-4e6b-b9dd-6f36600412cc = e01999c5-16d5-4e6b-b9dd-6f36600412cc ']' 00:13:26.795 12:33:09 -- lvol/tasting.sh@45 -- # jq -r '.[0].aliases[0]' 00:13:26.795 12:33:09 -- lvol/tasting.sh@45 -- # '[' lvs_test1/lvol_test3 = lvs_test1/lvol_test3 ']' 00:13:26.795 12:33:09 -- lvol/tasting.sh@46 -- # jq -r '.[0].block_size' 00:13:26.795 12:33:09 -- lvol/tasting.sh@46 -- # '[' 4096 = 4096 ']' 00:13:26.795 12:33:09 -- lvol/tasting.sh@47 -- # jq -r '.[0].num_blocks' 00:13:27.054 12:33:09 -- lvol/tasting.sh@47 -- # '[' 3072 = 3072 ']' 00:13:27.054 12:33:09 -- lvol/tasting.sh@39 -- # for i in $(seq 1 5) 00:13:27.054 12:33:09 -- lvol/tasting.sh@40 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test4 12 00:13:27.054 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.054 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.054 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.054 12:33:09 -- lvol/tasting.sh@40 -- # lvol_uuid=799ce17c-92a7-42d7-ab35-623e9ef03636 00:13:27.054 12:33:09 -- lvol/tasting.sh@41 -- # rpc_cmd bdev_get_bdevs -b 799ce17c-92a7-42d7-ab35-623e9ef03636 00:13:27.054 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.054 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.054 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.054 12:33:09 -- lvol/tasting.sh@41 -- # lvol='[ 00:13:27.054 { 00:13:27.054 "name": "799ce17c-92a7-42d7-ab35-623e9ef03636", 00:13:27.054 "aliases": [ 00:13:27.054 "lvs_test1/lvol_test4" 00:13:27.054 ], 00:13:27.054 "product_name": "Logical Volume", 00:13:27.054 "block_size": 4096, 00:13:27.054 "num_blocks": 3072, 00:13:27.054 "uuid": "799ce17c-92a7-42d7-ab35-623e9ef03636", 00:13:27.054 "assigned_rate_limits": { 00:13:27.054 "rw_ios_per_sec": 0, 00:13:27.054 "rw_mbytes_per_sec": 0, 00:13:27.054 "r_mbytes_per_sec": 0, 00:13:27.054 "w_mbytes_per_sec": 0 00:13:27.054 }, 00:13:27.054 "claimed": false, 00:13:27.054 "zoned": false, 00:13:27.054 "supported_io_types": { 00:13:27.054 "read": true, 00:13:27.054 "write": true, 00:13:27.054 "unmap": true, 00:13:27.054 "write_zeroes": true, 00:13:27.054 "flush": false, 00:13:27.054 "reset": true, 00:13:27.054 "compare": false, 00:13:27.054 "compare_and_write": false, 00:13:27.054 "abort": false, 00:13:27.054 "nvme_admin": false, 00:13:27.054 "nvme_io": false 00:13:27.054 }, 00:13:27.054 "driver_specific": { 00:13:27.054 "lvol": { 00:13:27.054 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:27.054 "base_bdev": "aio_bdev0", 00:13:27.054 "thin_provision": false, 00:13:27.054 "snapshot": false, 00:13:27.054 "clone": false, 00:13:27.054 "esnap_clone": false 00:13:27.054 } 00:13:27.054 } 00:13:27.054 } 00:13:27.054 ]' 00:13:27.054 12:33:09 -- lvol/tasting.sh@43 -- # jq -r '.[0].name' 00:13:27.054 12:33:09 -- lvol/tasting.sh@43 -- # '[' 799ce17c-92a7-42d7-ab35-623e9ef03636 = 799ce17c-92a7-42d7-ab35-623e9ef03636 ']' 00:13:27.054 12:33:09 -- lvol/tasting.sh@44 -- # jq -r '.[0].uuid' 00:13:27.054 12:33:09 -- lvol/tasting.sh@44 -- # '[' 799ce17c-92a7-42d7-ab35-623e9ef03636 = 799ce17c-92a7-42d7-ab35-623e9ef03636 ']' 00:13:27.054 12:33:09 -- lvol/tasting.sh@45 -- # 
jq -r '.[0].aliases[0]' 00:13:27.054 12:33:09 -- lvol/tasting.sh@45 -- # '[' lvs_test1/lvol_test4 = lvs_test1/lvol_test4 ']' 00:13:27.054 12:33:09 -- lvol/tasting.sh@46 -- # jq -r '.[0].block_size' 00:13:27.054 12:33:09 -- lvol/tasting.sh@46 -- # '[' 4096 = 4096 ']' 00:13:27.054 12:33:09 -- lvol/tasting.sh@47 -- # jq -r '.[0].num_blocks' 00:13:27.313 12:33:09 -- lvol/tasting.sh@47 -- # '[' 3072 = 3072 ']' 00:13:27.313 12:33:09 -- lvol/tasting.sh@39 -- # for i in $(seq 1 5) 00:13:27.313 12:33:09 -- lvol/tasting.sh@40 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test5 12 00:13:27.313 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.313 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.313 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.313 12:33:09 -- lvol/tasting.sh@40 -- # lvol_uuid=2bf5b04f-9adf-4242-b5aa-6d184243947e 00:13:27.313 12:33:09 -- lvol/tasting.sh@41 -- # rpc_cmd bdev_get_bdevs -b 2bf5b04f-9adf-4242-b5aa-6d184243947e 00:13:27.313 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.313 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.313 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.313 12:33:09 -- lvol/tasting.sh@41 -- # lvol='[ 00:13:27.313 { 00:13:27.313 "name": "2bf5b04f-9adf-4242-b5aa-6d184243947e", 00:13:27.313 "aliases": [ 00:13:27.313 "lvs_test1/lvol_test5" 00:13:27.313 ], 00:13:27.313 "product_name": "Logical Volume", 00:13:27.313 "block_size": 4096, 00:13:27.313 "num_blocks": 3072, 00:13:27.313 "uuid": "2bf5b04f-9adf-4242-b5aa-6d184243947e", 00:13:27.313 "assigned_rate_limits": { 00:13:27.313 "rw_ios_per_sec": 0, 00:13:27.313 "rw_mbytes_per_sec": 0, 00:13:27.313 "r_mbytes_per_sec": 0, 00:13:27.313 "w_mbytes_per_sec": 0 00:13:27.313 }, 00:13:27.313 "claimed": false, 00:13:27.313 "zoned": false, 00:13:27.313 "supported_io_types": { 00:13:27.313 "read": true, 00:13:27.313 "write": true, 00:13:27.313 "unmap": true, 00:13:27.313 "write_zeroes": true, 00:13:27.313 "flush": false, 00:13:27.313 "reset": true, 00:13:27.313 "compare": false, 00:13:27.313 "compare_and_write": false, 00:13:27.313 "abort": false, 00:13:27.313 "nvme_admin": false, 00:13:27.313 "nvme_io": false 00:13:27.313 }, 00:13:27.313 "driver_specific": { 00:13:27.313 "lvol": { 00:13:27.313 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:27.313 "base_bdev": "aio_bdev0", 00:13:27.313 "thin_provision": false, 00:13:27.313 "snapshot": false, 00:13:27.313 "clone": false, 00:13:27.313 "esnap_clone": false 00:13:27.313 } 00:13:27.313 } 00:13:27.313 } 00:13:27.313 ]' 00:13:27.313 12:33:09 -- lvol/tasting.sh@43 -- # jq -r '.[0].name' 00:13:27.313 12:33:09 -- lvol/tasting.sh@43 -- # '[' 2bf5b04f-9adf-4242-b5aa-6d184243947e = 2bf5b04f-9adf-4242-b5aa-6d184243947e ']' 00:13:27.313 12:33:09 -- lvol/tasting.sh@44 -- # jq -r '.[0].uuid' 00:13:27.313 12:33:09 -- lvol/tasting.sh@44 -- # '[' 2bf5b04f-9adf-4242-b5aa-6d184243947e = 2bf5b04f-9adf-4242-b5aa-6d184243947e ']' 00:13:27.313 12:33:09 -- lvol/tasting.sh@45 -- # jq -r '.[0].aliases[0]' 00:13:27.313 12:33:09 -- lvol/tasting.sh@45 -- # '[' lvs_test1/lvol_test5 = lvs_test1/lvol_test5 ']' 00:13:27.313 12:33:09 -- lvol/tasting.sh@46 -- # jq -r '.[0].block_size' 00:13:27.572 12:33:09 -- lvol/tasting.sh@46 -- # '[' 4096 = 4096 ']' 00:13:27.572 12:33:09 -- lvol/tasting.sh@47 -- # jq -r '.[0].num_blocks' 00:13:27.572 12:33:09 -- lvol/tasting.sh@47 -- # '[' 3072 = 3072 ']' 00:13:27.572 12:33:09 -- lvol/tasting.sh@51 -- # 
round_down 76 32 00:13:27.572 12:33:09 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:27.572 12:33:09 -- lvol/common.sh@33 -- # '[' -n 32 ']' 00:13:27.572 12:33:09 -- lvol/common.sh@34 -- # CLUSTER_SIZE_MB=32 00:13:27.572 12:33:09 -- lvol/common.sh@36 -- # echo 64 00:13:27.572 12:33:09 -- lvol/tasting.sh@51 -- # lvol2_size_mb=64 00:13:27.572 12:33:09 -- lvol/tasting.sh@52 -- # lvol2_size=67108864 00:13:27.572 12:33:09 -- lvol/tasting.sh@54 -- # seq 1 5 00:13:27.572 12:33:09 -- lvol/tasting.sh@54 -- # for i in $(seq 1 5) 00:13:27.572 12:33:09 -- lvol/tasting.sh@55 -- # rpc_cmd bdev_lvol_create -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 lvol_test1 64 00:13:27.572 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.572 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.572 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.572 12:33:09 -- lvol/tasting.sh@55 -- # lvol_uuid=61b580e3-0efb-445a-8d45-a1fb30b857dd 00:13:27.572 12:33:09 -- lvol/tasting.sh@56 -- # rpc_cmd bdev_get_bdevs -b 61b580e3-0efb-445a-8d45-a1fb30b857dd 00:13:27.572 12:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.572 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.572 12:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.572 12:33:09 -- lvol/tasting.sh@56 -- # lvol='[ 00:13:27.572 { 00:13:27.572 "name": "61b580e3-0efb-445a-8d45-a1fb30b857dd", 00:13:27.572 "aliases": [ 00:13:27.572 "lvs_test2/lvol_test1" 00:13:27.572 ], 00:13:27.572 "product_name": "Logical Volume", 00:13:27.572 "block_size": 4096, 00:13:27.572 "num_blocks": 16384, 00:13:27.572 "uuid": "61b580e3-0efb-445a-8d45-a1fb30b857dd", 00:13:27.572 "assigned_rate_limits": { 00:13:27.572 "rw_ios_per_sec": 0, 00:13:27.572 "rw_mbytes_per_sec": 0, 00:13:27.572 "r_mbytes_per_sec": 0, 00:13:27.572 "w_mbytes_per_sec": 0 00:13:27.572 }, 00:13:27.572 "claimed": false, 00:13:27.572 "zoned": false, 00:13:27.572 "supported_io_types": { 00:13:27.572 "read": true, 00:13:27.572 "write": true, 00:13:27.572 "unmap": true, 00:13:27.572 "write_zeroes": true, 00:13:27.572 "flush": false, 00:13:27.572 "reset": true, 00:13:27.572 "compare": false, 00:13:27.572 "compare_and_write": false, 00:13:27.572 "abort": false, 00:13:27.572 "nvme_admin": false, 00:13:27.572 "nvme_io": false 00:13:27.572 }, 00:13:27.572 "driver_specific": { 00:13:27.572 "lvol": { 00:13:27.572 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:27.572 "base_bdev": "aio_bdev1", 00:13:27.572 "thin_provision": false, 00:13:27.572 "snapshot": false, 00:13:27.572 "clone": false, 00:13:27.572 "esnap_clone": false 00:13:27.572 } 00:13:27.572 } 00:13:27.572 } 00:13:27.572 ]' 00:13:27.572 12:33:09 -- lvol/tasting.sh@58 -- # jq -r '.[0].name' 00:13:27.572 12:33:10 -- lvol/tasting.sh@58 -- # '[' 61b580e3-0efb-445a-8d45-a1fb30b857dd = 61b580e3-0efb-445a-8d45-a1fb30b857dd ']' 00:13:27.572 12:33:10 -- lvol/tasting.sh@59 -- # jq -r '.[0].uuid' 00:13:27.572 12:33:10 -- lvol/tasting.sh@59 -- # '[' 61b580e3-0efb-445a-8d45-a1fb30b857dd = 61b580e3-0efb-445a-8d45-a1fb30b857dd ']' 00:13:27.572 12:33:10 -- lvol/tasting.sh@60 -- # jq -r '.[0].aliases[0]' 00:13:27.831 12:33:10 -- lvol/tasting.sh@60 -- # '[' lvs_test2/lvol_test1 = lvs_test2/lvol_test1 ']' 00:13:27.831 12:33:10 -- lvol/tasting.sh@61 -- # jq -r '.[0].block_size' 00:13:27.831 12:33:10 -- lvol/tasting.sh@61 -- # '[' 4096 = 4096 ']' 00:13:27.831 12:33:10 -- lvol/tasting.sh@62 -- # jq -r '.[0].num_blocks' 00:13:27.831 12:33:10 -- lvol/tasting.sh@62 -- # '[' 16384 = 16384 
']' 00:13:27.831 12:33:10 -- lvol/tasting.sh@54 -- # for i in $(seq 1 5) 00:13:27.831 12:33:10 -- lvol/tasting.sh@55 -- # rpc_cmd bdev_lvol_create -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 lvol_test2 64 00:13:27.831 12:33:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.831 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:27.831 12:33:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.831 12:33:10 -- lvol/tasting.sh@55 -- # lvol_uuid=28fdba03-5935-49a1-9891-95a05a752708 00:13:27.831 12:33:10 -- lvol/tasting.sh@56 -- # rpc_cmd bdev_get_bdevs -b 28fdba03-5935-49a1-9891-95a05a752708 00:13:27.831 12:33:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.831 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:27.831 12:33:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.831 12:33:10 -- lvol/tasting.sh@56 -- # lvol='[ 00:13:27.831 { 00:13:27.831 "name": "28fdba03-5935-49a1-9891-95a05a752708", 00:13:27.831 "aliases": [ 00:13:27.831 "lvs_test2/lvol_test2" 00:13:27.831 ], 00:13:27.831 "product_name": "Logical Volume", 00:13:27.831 "block_size": 4096, 00:13:27.831 "num_blocks": 16384, 00:13:27.831 "uuid": "28fdba03-5935-49a1-9891-95a05a752708", 00:13:27.831 "assigned_rate_limits": { 00:13:27.831 "rw_ios_per_sec": 0, 00:13:27.831 "rw_mbytes_per_sec": 0, 00:13:27.831 "r_mbytes_per_sec": 0, 00:13:27.831 "w_mbytes_per_sec": 0 00:13:27.831 }, 00:13:27.831 "claimed": false, 00:13:27.831 "zoned": false, 00:13:27.831 "supported_io_types": { 00:13:27.831 "read": true, 00:13:27.831 "write": true, 00:13:27.831 "unmap": true, 00:13:27.831 "write_zeroes": true, 00:13:27.831 "flush": false, 00:13:27.831 "reset": true, 00:13:27.831 "compare": false, 00:13:27.831 "compare_and_write": false, 00:13:27.831 "abort": false, 00:13:27.831 "nvme_admin": false, 00:13:27.831 "nvme_io": false 00:13:27.831 }, 00:13:27.831 "driver_specific": { 00:13:27.831 "lvol": { 00:13:27.831 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:27.831 "base_bdev": "aio_bdev1", 00:13:27.831 "thin_provision": false, 00:13:27.831 "snapshot": false, 00:13:27.831 "clone": false, 00:13:27.831 "esnap_clone": false 00:13:27.831 } 00:13:27.831 } 00:13:27.831 } 00:13:27.831 ]' 00:13:27.831 12:33:10 -- lvol/tasting.sh@58 -- # jq -r '.[0].name' 00:13:27.831 12:33:10 -- lvol/tasting.sh@58 -- # '[' 28fdba03-5935-49a1-9891-95a05a752708 = 28fdba03-5935-49a1-9891-95a05a752708 ']' 00:13:27.831 12:33:10 -- lvol/tasting.sh@59 -- # jq -r '.[0].uuid' 00:13:27.831 12:33:10 -- lvol/tasting.sh@59 -- # '[' 28fdba03-5935-49a1-9891-95a05a752708 = 28fdba03-5935-49a1-9891-95a05a752708 ']' 00:13:27.831 12:33:10 -- lvol/tasting.sh@60 -- # jq -r '.[0].aliases[0]' 00:13:28.091 12:33:10 -- lvol/tasting.sh@60 -- # '[' lvs_test2/lvol_test2 = lvs_test2/lvol_test2 ']' 00:13:28.091 12:33:10 -- lvol/tasting.sh@61 -- # jq -r '.[0].block_size' 00:13:28.091 12:33:10 -- lvol/tasting.sh@61 -- # '[' 4096 = 4096 ']' 00:13:28.091 12:33:10 -- lvol/tasting.sh@62 -- # jq -r '.[0].num_blocks' 00:13:28.091 12:33:10 -- lvol/tasting.sh@62 -- # '[' 16384 = 16384 ']' 00:13:28.091 12:33:10 -- lvol/tasting.sh@54 -- # for i in $(seq 1 5) 00:13:28.091 12:33:10 -- lvol/tasting.sh@55 -- # rpc_cmd bdev_lvol_create -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 lvol_test3 64 00:13:28.091 12:33:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.091 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:28.091 12:33:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.091 12:33:10 -- lvol/tasting.sh@55 -- # 
lvol_uuid=7efacaf8-927e-4236-bf76-33ce1085f0d9 00:13:28.091 12:33:10 -- lvol/tasting.sh@56 -- # rpc_cmd bdev_get_bdevs -b 7efacaf8-927e-4236-bf76-33ce1085f0d9 00:13:28.091 12:33:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.091 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:28.091 12:33:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.091 12:33:10 -- lvol/tasting.sh@56 -- # lvol='[ 00:13:28.091 { 00:13:28.091 "name": "7efacaf8-927e-4236-bf76-33ce1085f0d9", 00:13:28.091 "aliases": [ 00:13:28.091 "lvs_test2/lvol_test3" 00:13:28.091 ], 00:13:28.091 "product_name": "Logical Volume", 00:13:28.091 "block_size": 4096, 00:13:28.091 "num_blocks": 16384, 00:13:28.091 "uuid": "7efacaf8-927e-4236-bf76-33ce1085f0d9", 00:13:28.091 "assigned_rate_limits": { 00:13:28.091 "rw_ios_per_sec": 0, 00:13:28.091 "rw_mbytes_per_sec": 0, 00:13:28.091 "r_mbytes_per_sec": 0, 00:13:28.091 "w_mbytes_per_sec": 0 00:13:28.091 }, 00:13:28.091 "claimed": false, 00:13:28.091 "zoned": false, 00:13:28.091 "supported_io_types": { 00:13:28.091 "read": true, 00:13:28.091 "write": true, 00:13:28.091 "unmap": true, 00:13:28.091 "write_zeroes": true, 00:13:28.091 "flush": false, 00:13:28.091 "reset": true, 00:13:28.091 "compare": false, 00:13:28.091 "compare_and_write": false, 00:13:28.091 "abort": false, 00:13:28.091 "nvme_admin": false, 00:13:28.091 "nvme_io": false 00:13:28.091 }, 00:13:28.091 "driver_specific": { 00:13:28.091 "lvol": { 00:13:28.091 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:28.091 "base_bdev": "aio_bdev1", 00:13:28.091 "thin_provision": false, 00:13:28.091 "snapshot": false, 00:13:28.091 "clone": false, 00:13:28.091 "esnap_clone": false 00:13:28.091 } 00:13:28.091 } 00:13:28.091 } 00:13:28.091 ]' 00:13:28.091 12:33:10 -- lvol/tasting.sh@58 -- # jq -r '.[0].name' 00:13:28.091 12:33:10 -- lvol/tasting.sh@58 -- # '[' 7efacaf8-927e-4236-bf76-33ce1085f0d9 = 7efacaf8-927e-4236-bf76-33ce1085f0d9 ']' 00:13:28.091 12:33:10 -- lvol/tasting.sh@59 -- # jq -r '.[0].uuid' 00:13:28.350 12:33:10 -- lvol/tasting.sh@59 -- # '[' 7efacaf8-927e-4236-bf76-33ce1085f0d9 = 7efacaf8-927e-4236-bf76-33ce1085f0d9 ']' 00:13:28.350 12:33:10 -- lvol/tasting.sh@60 -- # jq -r '.[0].aliases[0]' 00:13:28.350 12:33:10 -- lvol/tasting.sh@60 -- # '[' lvs_test2/lvol_test3 = lvs_test2/lvol_test3 ']' 00:13:28.350 12:33:10 -- lvol/tasting.sh@61 -- # jq -r '.[0].block_size' 00:13:28.350 12:33:10 -- lvol/tasting.sh@61 -- # '[' 4096 = 4096 ']' 00:13:28.350 12:33:10 -- lvol/tasting.sh@62 -- # jq -r '.[0].num_blocks' 00:13:28.350 12:33:10 -- lvol/tasting.sh@62 -- # '[' 16384 = 16384 ']' 00:13:28.350 12:33:10 -- lvol/tasting.sh@54 -- # for i in $(seq 1 5) 00:13:28.350 12:33:10 -- lvol/tasting.sh@55 -- # rpc_cmd bdev_lvol_create -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 lvol_test4 64 00:13:28.350 12:33:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.350 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:28.350 12:33:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.350 12:33:10 -- lvol/tasting.sh@55 -- # lvol_uuid=b787db7a-bd51-45e5-abc4-f6e426fb8a83 00:13:28.350 12:33:10 -- lvol/tasting.sh@56 -- # rpc_cmd bdev_get_bdevs -b b787db7a-bd51-45e5-abc4-f6e426fb8a83 00:13:28.351 12:33:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.351 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:28.351 12:33:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.351 12:33:10 -- lvol/tasting.sh@56 -- # lvol='[ 00:13:28.351 { 00:13:28.351 
"name": "b787db7a-bd51-45e5-abc4-f6e426fb8a83", 00:13:28.351 "aliases": [ 00:13:28.351 "lvs_test2/lvol_test4" 00:13:28.351 ], 00:13:28.351 "product_name": "Logical Volume", 00:13:28.351 "block_size": 4096, 00:13:28.351 "num_blocks": 16384, 00:13:28.351 "uuid": "b787db7a-bd51-45e5-abc4-f6e426fb8a83", 00:13:28.351 "assigned_rate_limits": { 00:13:28.351 "rw_ios_per_sec": 0, 00:13:28.351 "rw_mbytes_per_sec": 0, 00:13:28.351 "r_mbytes_per_sec": 0, 00:13:28.351 "w_mbytes_per_sec": 0 00:13:28.351 }, 00:13:28.351 "claimed": false, 00:13:28.351 "zoned": false, 00:13:28.351 "supported_io_types": { 00:13:28.351 "read": true, 00:13:28.351 "write": true, 00:13:28.351 "unmap": true, 00:13:28.351 "write_zeroes": true, 00:13:28.351 "flush": false, 00:13:28.351 "reset": true, 00:13:28.351 "compare": false, 00:13:28.351 "compare_and_write": false, 00:13:28.351 "abort": false, 00:13:28.351 "nvme_admin": false, 00:13:28.351 "nvme_io": false 00:13:28.351 }, 00:13:28.351 "driver_specific": { 00:13:28.351 "lvol": { 00:13:28.351 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:28.351 "base_bdev": "aio_bdev1", 00:13:28.351 "thin_provision": false, 00:13:28.351 "snapshot": false, 00:13:28.351 "clone": false, 00:13:28.351 "esnap_clone": false 00:13:28.351 } 00:13:28.351 } 00:13:28.351 } 00:13:28.351 ]' 00:13:28.351 12:33:10 -- lvol/tasting.sh@58 -- # jq -r '.[0].name' 00:13:28.351 12:33:10 -- lvol/tasting.sh@58 -- # '[' b787db7a-bd51-45e5-abc4-f6e426fb8a83 = b787db7a-bd51-45e5-abc4-f6e426fb8a83 ']' 00:13:28.351 12:33:10 -- lvol/tasting.sh@59 -- # jq -r '.[0].uuid' 00:13:28.609 12:33:10 -- lvol/tasting.sh@59 -- # '[' b787db7a-bd51-45e5-abc4-f6e426fb8a83 = b787db7a-bd51-45e5-abc4-f6e426fb8a83 ']' 00:13:28.609 12:33:10 -- lvol/tasting.sh@60 -- # jq -r '.[0].aliases[0]' 00:13:28.609 12:33:10 -- lvol/tasting.sh@60 -- # '[' lvs_test2/lvol_test4 = lvs_test2/lvol_test4 ']' 00:13:28.609 12:33:10 -- lvol/tasting.sh@61 -- # jq -r '.[0].block_size' 00:13:28.609 12:33:11 -- lvol/tasting.sh@61 -- # '[' 4096 = 4096 ']' 00:13:28.609 12:33:11 -- lvol/tasting.sh@62 -- # jq -r '.[0].num_blocks' 00:13:28.609 12:33:11 -- lvol/tasting.sh@62 -- # '[' 16384 = 16384 ']' 00:13:28.609 12:33:11 -- lvol/tasting.sh@54 -- # for i in $(seq 1 5) 00:13:28.610 12:33:11 -- lvol/tasting.sh@55 -- # rpc_cmd bdev_lvol_create -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 lvol_test5 64 00:13:28.610 12:33:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.610 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:13:28.610 12:33:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.610 12:33:11 -- lvol/tasting.sh@55 -- # lvol_uuid=6bf2e2fb-33ee-47f4-a62a-322d020625e7 00:13:28.610 12:33:11 -- lvol/tasting.sh@56 -- # rpc_cmd bdev_get_bdevs -b 6bf2e2fb-33ee-47f4-a62a-322d020625e7 00:13:28.610 12:33:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.610 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:13:28.610 12:33:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.610 12:33:11 -- lvol/tasting.sh@56 -- # lvol='[ 00:13:28.610 { 00:13:28.610 "name": "6bf2e2fb-33ee-47f4-a62a-322d020625e7", 00:13:28.610 "aliases": [ 00:13:28.610 "lvs_test2/lvol_test5" 00:13:28.610 ], 00:13:28.610 "product_name": "Logical Volume", 00:13:28.610 "block_size": 4096, 00:13:28.610 "num_blocks": 16384, 00:13:28.610 "uuid": "6bf2e2fb-33ee-47f4-a62a-322d020625e7", 00:13:28.610 "assigned_rate_limits": { 00:13:28.610 "rw_ios_per_sec": 0, 00:13:28.610 "rw_mbytes_per_sec": 0, 00:13:28.610 "r_mbytes_per_sec": 0, 00:13:28.610 
"w_mbytes_per_sec": 0 00:13:28.610 }, 00:13:28.610 "claimed": false, 00:13:28.610 "zoned": false, 00:13:28.610 "supported_io_types": { 00:13:28.610 "read": true, 00:13:28.610 "write": true, 00:13:28.610 "unmap": true, 00:13:28.610 "write_zeroes": true, 00:13:28.610 "flush": false, 00:13:28.610 "reset": true, 00:13:28.610 "compare": false, 00:13:28.610 "compare_and_write": false, 00:13:28.610 "abort": false, 00:13:28.610 "nvme_admin": false, 00:13:28.610 "nvme_io": false 00:13:28.610 }, 00:13:28.610 "driver_specific": { 00:13:28.610 "lvol": { 00:13:28.610 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:28.610 "base_bdev": "aio_bdev1", 00:13:28.610 "thin_provision": false, 00:13:28.610 "snapshot": false, 00:13:28.610 "clone": false, 00:13:28.610 "esnap_clone": false 00:13:28.610 } 00:13:28.610 } 00:13:28.610 } 00:13:28.610 ]' 00:13:28.610 12:33:11 -- lvol/tasting.sh@58 -- # jq -r '.[0].name' 00:13:28.868 12:33:11 -- lvol/tasting.sh@58 -- # '[' 6bf2e2fb-33ee-47f4-a62a-322d020625e7 = 6bf2e2fb-33ee-47f4-a62a-322d020625e7 ']' 00:13:28.868 12:33:11 -- lvol/tasting.sh@59 -- # jq -r '.[0].uuid' 00:13:28.868 12:33:11 -- lvol/tasting.sh@59 -- # '[' 6bf2e2fb-33ee-47f4-a62a-322d020625e7 = 6bf2e2fb-33ee-47f4-a62a-322d020625e7 ']' 00:13:28.868 12:33:11 -- lvol/tasting.sh@60 -- # jq -r '.[0].aliases[0]' 00:13:28.868 12:33:11 -- lvol/tasting.sh@60 -- # '[' lvs_test2/lvol_test5 = lvs_test2/lvol_test5 ']' 00:13:28.868 12:33:11 -- lvol/tasting.sh@61 -- # jq -r '.[0].block_size' 00:13:28.868 12:33:11 -- lvol/tasting.sh@61 -- # '[' 4096 = 4096 ']' 00:13:28.868 12:33:11 -- lvol/tasting.sh@62 -- # jq -r '.[0].num_blocks' 00:13:28.868 12:33:11 -- lvol/tasting.sh@62 -- # '[' 16384 = 16384 ']' 00:13:28.868 12:33:11 -- lvol/tasting.sh@65 -- # rpc_cmd bdev_get_bdevs 00:13:28.868 12:33:11 -- lvol/tasting.sh@65 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:13:28.868 12:33:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.868 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.127 12:33:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.127 12:33:11 -- lvol/tasting.sh@65 -- # old_lvols='[ 00:13:29.127 { 00:13:29.127 "name": "36405014-8b69-4916-a933-12d9631f8e3c", 00:13:29.127 "aliases": [ 00:13:29.127 "lvs_test1/lvol_test1" 00:13:29.127 ], 00:13:29.127 "product_name": "Logical Volume", 00:13:29.127 "block_size": 4096, 00:13:29.127 "num_blocks": 3072, 00:13:29.127 "uuid": "36405014-8b69-4916-a933-12d9631f8e3c", 00:13:29.127 "assigned_rate_limits": { 00:13:29.127 "rw_ios_per_sec": 0, 00:13:29.127 "rw_mbytes_per_sec": 0, 00:13:29.127 "r_mbytes_per_sec": 0, 00:13:29.127 "w_mbytes_per_sec": 0 00:13:29.127 }, 00:13:29.127 "claimed": false, 00:13:29.127 "zoned": false, 00:13:29.127 "supported_io_types": { 00:13:29.127 "read": true, 00:13:29.127 "write": true, 00:13:29.127 "unmap": true, 00:13:29.127 "write_zeroes": true, 00:13:29.127 "flush": false, 00:13:29.127 "reset": true, 00:13:29.127 "compare": false, 00:13:29.127 "compare_and_write": false, 00:13:29.127 "abort": false, 00:13:29.127 "nvme_admin": false, 00:13:29.127 "nvme_io": false 00:13:29.127 }, 00:13:29.127 "driver_specific": { 00:13:29.127 "lvol": { 00:13:29.127 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:29.127 "base_bdev": "aio_bdev0", 00:13:29.127 "thin_provision": false, 00:13:29.127 "snapshot": false, 00:13:29.127 "clone": false, 00:13:29.127 "esnap_clone": false 00:13:29.127 } 00:13:29.127 } 00:13:29.127 }, 00:13:29.127 { 00:13:29.127 "name": 
"b279cf7e-2e11-424f-9176-101334e10d03", 00:13:29.127 "aliases": [ 00:13:29.127 "lvs_test1/lvol_test2" 00:13:29.127 ], 00:13:29.127 "product_name": "Logical Volume", 00:13:29.127 "block_size": 4096, 00:13:29.127 "num_blocks": 3072, 00:13:29.127 "uuid": "b279cf7e-2e11-424f-9176-101334e10d03", 00:13:29.127 "assigned_rate_limits": { 00:13:29.127 "rw_ios_per_sec": 0, 00:13:29.127 "rw_mbytes_per_sec": 0, 00:13:29.127 "r_mbytes_per_sec": 0, 00:13:29.127 "w_mbytes_per_sec": 0 00:13:29.127 }, 00:13:29.127 "claimed": false, 00:13:29.127 "zoned": false, 00:13:29.127 "supported_io_types": { 00:13:29.127 "read": true, 00:13:29.127 "write": true, 00:13:29.127 "unmap": true, 00:13:29.127 "write_zeroes": true, 00:13:29.127 "flush": false, 00:13:29.127 "reset": true, 00:13:29.127 "compare": false, 00:13:29.127 "compare_and_write": false, 00:13:29.127 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:29.128 "base_bdev": "aio_bdev0", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "e01999c5-16d5-4e6b-b9dd-6f36600412cc", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test1/lvol_test3" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 3072, 00:13:29.128 "uuid": "e01999c5-16d5-4e6b-b9dd-6f36600412cc", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:29.128 "base_bdev": "aio_bdev0", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "799ce17c-92a7-42d7-ab35-623e9ef03636", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test1/lvol_test4" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 3072, 00:13:29.128 "uuid": "799ce17c-92a7-42d7-ab35-623e9ef03636", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 
00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:29.128 "base_bdev": "aio_bdev0", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "2bf5b04f-9adf-4242-b5aa-6d184243947e", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test1/lvol_test5" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 3072, 00:13:29.128 "uuid": "2bf5b04f-9adf-4242-b5aa-6d184243947e", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:29.128 "base_bdev": "aio_bdev0", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "61b580e3-0efb-445a-8d45-a1fb30b857dd", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test2/lvol_test1" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 16384, 00:13:29.128 "uuid": "61b580e3-0efb-445a-8d45-a1fb30b857dd", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:29.128 "base_bdev": "aio_bdev1", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "28fdba03-5935-49a1-9891-95a05a752708", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test2/lvol_test2" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 16384, 00:13:29.128 "uuid": "28fdba03-5935-49a1-9891-95a05a752708", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 
00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:29.128 "base_bdev": "aio_bdev1", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "7efacaf8-927e-4236-bf76-33ce1085f0d9", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test2/lvol_test3" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 16384, 00:13:29.128 "uuid": "7efacaf8-927e-4236-bf76-33ce1085f0d9", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:29.128 "base_bdev": "aio_bdev1", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "b787db7a-bd51-45e5-abc4-f6e426fb8a83", 00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test2/lvol_test4" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 16384, 00:13:29.128 "uuid": "b787db7a-bd51-45e5-abc4-f6e426fb8a83", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.128 "zoned": false, 00:13:29.128 "supported_io_types": { 00:13:29.128 "read": true, 00:13:29.128 "write": true, 00:13:29.128 "unmap": true, 00:13:29.128 "write_zeroes": true, 00:13:29.128 "flush": false, 00:13:29.128 "reset": true, 00:13:29.128 "compare": false, 00:13:29.128 "compare_and_write": false, 00:13:29.128 "abort": false, 00:13:29.128 "nvme_admin": false, 00:13:29.128 "nvme_io": false 00:13:29.128 }, 00:13:29.128 "driver_specific": { 00:13:29.128 "lvol": { 00:13:29.128 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:29.128 "base_bdev": "aio_bdev1", 00:13:29.128 "thin_provision": false, 00:13:29.128 "snapshot": false, 00:13:29.128 "clone": false, 00:13:29.128 "esnap_clone": false 00:13:29.128 } 00:13:29.128 } 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "6bf2e2fb-33ee-47f4-a62a-322d020625e7", 
00:13:29.128 "aliases": [ 00:13:29.128 "lvs_test2/lvol_test5" 00:13:29.128 ], 00:13:29.128 "product_name": "Logical Volume", 00:13:29.128 "block_size": 4096, 00:13:29.128 "num_blocks": 16384, 00:13:29.128 "uuid": "6bf2e2fb-33ee-47f4-a62a-322d020625e7", 00:13:29.128 "assigned_rate_limits": { 00:13:29.128 "rw_ios_per_sec": 0, 00:13:29.128 "rw_mbytes_per_sec": 0, 00:13:29.128 "r_mbytes_per_sec": 0, 00:13:29.128 "w_mbytes_per_sec": 0 00:13:29.128 }, 00:13:29.128 "claimed": false, 00:13:29.129 "zoned": false, 00:13:29.129 "supported_io_types": { 00:13:29.129 "read": true, 00:13:29.129 "write": true, 00:13:29.129 "unmap": true, 00:13:29.129 "write_zeroes": true, 00:13:29.129 "flush": false, 00:13:29.129 "reset": true, 00:13:29.129 "compare": false, 00:13:29.129 "compare_and_write": false, 00:13:29.129 "abort": false, 00:13:29.129 "nvme_admin": false, 00:13:29.129 "nvme_io": false 00:13:29.129 }, 00:13:29.129 "driver_specific": { 00:13:29.129 "lvol": { 00:13:29.129 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:29.129 "base_bdev": "aio_bdev1", 00:13:29.129 "thin_provision": false, 00:13:29.129 "snapshot": false, 00:13:29.129 "clone": false, 00:13:29.129 "esnap_clone": false 00:13:29.129 } 00:13:29.129 } 00:13:29.129 } 00:13:29.129 ]' 00:13:29.129 12:33:11 -- lvol/tasting.sh@66 -- # jq length 00:13:29.129 12:33:11 -- lvol/tasting.sh@66 -- # '[' 10 == 10 ']' 00:13:29.129 12:33:11 -- lvol/tasting.sh@67 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:29.129 12:33:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.129 12:33:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 12:33:11 -- lvol/tasting.sh@67 -- # jq . 00:13:29.129 12:33:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.129 12:33:11 -- lvol/tasting.sh@67 -- # old_lvs='[ 00:13:29.129 { 00:13:29.129 "uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:29.129 "name": "lvs_test1", 00:13:29.129 "base_bdev": "aio_bdev0", 00:13:29.129 "total_data_clusters": 398, 00:13:29.129 "free_clusters": 338, 00:13:29.129 "block_size": 4096, 00:13:29.129 "cluster_size": 1048576 00:13:29.129 }, 00:13:29.129 { 00:13:29.129 "uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:29.129 "name": "lvs_test2", 00:13:29.129 "base_bdev": "aio_bdev1", 00:13:29.129 "total_data_clusters": 11, 00:13:29.129 "free_clusters": 1, 00:13:29.129 "block_size": 4096, 00:13:29.129 "cluster_size": 33554432 00:13:29.129 } 00:13:29.129 ]' 00:13:29.129 12:33:11 -- lvol/tasting.sh@70 -- # killprocess 60050 00:13:29.129 12:33:11 -- common/autotest_common.sh@926 -- # '[' -z 60050 ']' 00:13:29.129 12:33:11 -- common/autotest_common.sh@930 -- # kill -0 60050 00:13:29.129 12:33:11 -- common/autotest_common.sh@931 -- # uname 00:13:29.129 12:33:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:29.129 12:33:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60050 00:13:29.129 killing process with pid 60050 00:13:29.129 12:33:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:29.129 12:33:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:29.129 12:33:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60050' 00:13:29.129 12:33:11 -- common/autotest_common.sh@945 -- # kill 60050 00:13:29.129 12:33:11 -- common/autotest_common.sh@950 -- # wait 60050 00:13:31.065 12:33:13 -- lvol/tasting.sh@72 -- # spdk_pid=60289 00:13:31.065 12:33:13 -- lvol/tasting.sh@73 -- # waitforlisten 60289 00:13:31.065 12:33:13 -- lvol/tasting.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:31.065 12:33:13 -- common/autotest_common.sh@819 -- # '[' -z 60289 ']' 00:13:31.065 12:33:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.065 12:33:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:31.065 12:33:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.065 12:33:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:31.065 12:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:31.323 [2024-10-01 12:33:13.678232] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:31.323 [2024-10-01 12:33:13.678363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60289 ] 00:13:31.323 [2024-10-01 12:33:13.837424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.582 [2024-10-01 12:33:14.016599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.582 [2024-10-01 12:33:14.017118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.958 12:33:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.958 12:33:15 -- common/autotest_common.sh@852 -- # return 0 00:13:32.958 12:33:15 -- lvol/tasting.sh@76 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 aio_bdev0 4096 00:13:32.958 12:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.958 12:33:15 -- common/autotest_common.sh@10 -- # set +x 00:13:32.958 aio_bdev0 00:13:32.958 12:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.958 12:33:15 -- lvol/tasting.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_1 aio_bdev1 4096 00:13:32.958 12:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.958 12:33:15 -- common/autotest_common.sh@10 -- # set +x 00:13:32.958 aio_bdev1 00:13:32.958 12:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.958 12:33:15 -- lvol/tasting.sh@78 -- # sleep 1 00:13:33.893 12:33:16 -- lvol/tasting.sh@81 -- # rpc_cmd bdev_get_bdevs 00:13:33.893 12:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.893 12:33:16 -- lvol/tasting.sh@81 -- # jq -r '[ .[] | select(.product_name == "Logical Volume") ]' 00:13:33.893 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.152 12:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.152 12:33:16 -- lvol/tasting.sh@81 -- # new_lvols='[ 00:13:34.152 { 00:13:34.152 "name": "b279cf7e-2e11-424f-9176-101334e10d03", 00:13:34.152 "aliases": [ 00:13:34.152 "lvs_test1/lvol_test2" 00:13:34.152 ], 00:13:34.152 "product_name": "Logical Volume", 00:13:34.152 "block_size": 4096, 00:13:34.152 "num_blocks": 3072, 00:13:34.152 "uuid": "b279cf7e-2e11-424f-9176-101334e10d03", 00:13:34.152 "assigned_rate_limits": { 00:13:34.152 "rw_ios_per_sec": 0, 00:13:34.152 "rw_mbytes_per_sec": 0, 00:13:34.152 "r_mbytes_per_sec": 0, 00:13:34.152 "w_mbytes_per_sec": 0 00:13:34.152 }, 00:13:34.152 "claimed": false, 00:13:34.152 "zoned": false, 00:13:34.152 "supported_io_types": { 00:13:34.152 "read": true, 00:13:34.152 "write": true, 00:13:34.152 
"unmap": true, 00:13:34.152 "write_zeroes": true, 00:13:34.152 "flush": false, 00:13:34.152 "reset": true, 00:13:34.152 "compare": false, 00:13:34.152 "compare_and_write": false, 00:13:34.152 "abort": false, 00:13:34.152 "nvme_admin": false, 00:13:34.152 "nvme_io": false 00:13:34.152 }, 00:13:34.152 "driver_specific": { 00:13:34.152 "lvol": { 00:13:34.152 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.152 "base_bdev": "aio_bdev0", 00:13:34.152 "thin_provision": false, 00:13:34.152 "snapshot": false, 00:13:34.152 "clone": false, 00:13:34.152 "esnap_clone": false 00:13:34.152 } 00:13:34.152 } 00:13:34.152 }, 00:13:34.152 { 00:13:34.152 "name": "36405014-8b69-4916-a933-12d9631f8e3c", 00:13:34.152 "aliases": [ 00:13:34.152 "lvs_test1/lvol_test1" 00:13:34.152 ], 00:13:34.152 "product_name": "Logical Volume", 00:13:34.152 "block_size": 4096, 00:13:34.152 "num_blocks": 3072, 00:13:34.152 "uuid": "36405014-8b69-4916-a933-12d9631f8e3c", 00:13:34.152 "assigned_rate_limits": { 00:13:34.152 "rw_ios_per_sec": 0, 00:13:34.152 "rw_mbytes_per_sec": 0, 00:13:34.152 "r_mbytes_per_sec": 0, 00:13:34.152 "w_mbytes_per_sec": 0 00:13:34.152 }, 00:13:34.152 "claimed": false, 00:13:34.152 "zoned": false, 00:13:34.152 "supported_io_types": { 00:13:34.152 "read": true, 00:13:34.152 "write": true, 00:13:34.152 "unmap": true, 00:13:34.152 "write_zeroes": true, 00:13:34.152 "flush": false, 00:13:34.152 "reset": true, 00:13:34.152 "compare": false, 00:13:34.152 "compare_and_write": false, 00:13:34.152 "abort": false, 00:13:34.152 "nvme_admin": false, 00:13:34.152 "nvme_io": false 00:13:34.152 }, 00:13:34.152 "driver_specific": { 00:13:34.152 "lvol": { 00:13:34.152 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.152 "base_bdev": "aio_bdev0", 00:13:34.152 "thin_provision": false, 00:13:34.152 "snapshot": false, 00:13:34.152 "clone": false, 00:13:34.152 "esnap_clone": false 00:13:34.152 } 00:13:34.152 } 00:13:34.152 }, 00:13:34.152 { 00:13:34.152 "name": "e01999c5-16d5-4e6b-b9dd-6f36600412cc", 00:13:34.152 "aliases": [ 00:13:34.152 "lvs_test1/lvol_test3" 00:13:34.152 ], 00:13:34.152 "product_name": "Logical Volume", 00:13:34.152 "block_size": 4096, 00:13:34.152 "num_blocks": 3072, 00:13:34.152 "uuid": "e01999c5-16d5-4e6b-b9dd-6f36600412cc", 00:13:34.152 "assigned_rate_limits": { 00:13:34.152 "rw_ios_per_sec": 0, 00:13:34.152 "rw_mbytes_per_sec": 0, 00:13:34.152 "r_mbytes_per_sec": 0, 00:13:34.152 "w_mbytes_per_sec": 0 00:13:34.152 }, 00:13:34.152 "claimed": false, 00:13:34.152 "zoned": false, 00:13:34.152 "supported_io_types": { 00:13:34.152 "read": true, 00:13:34.152 "write": true, 00:13:34.152 "unmap": true, 00:13:34.152 "write_zeroes": true, 00:13:34.152 "flush": false, 00:13:34.152 "reset": true, 00:13:34.152 "compare": false, 00:13:34.152 "compare_and_write": false, 00:13:34.152 "abort": false, 00:13:34.152 "nvme_admin": false, 00:13:34.152 "nvme_io": false 00:13:34.152 }, 00:13:34.152 "driver_specific": { 00:13:34.152 "lvol": { 00:13:34.152 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.152 "base_bdev": "aio_bdev0", 00:13:34.152 "thin_provision": false, 00:13:34.152 "snapshot": false, 00:13:34.152 "clone": false, 00:13:34.152 "esnap_clone": false 00:13:34.152 } 00:13:34.152 } 00:13:34.152 }, 00:13:34.152 { 00:13:34.152 "name": "799ce17c-92a7-42d7-ab35-623e9ef03636", 00:13:34.152 "aliases": [ 00:13:34.152 "lvs_test1/lvol_test4" 00:13:34.152 ], 00:13:34.152 "product_name": "Logical Volume", 00:13:34.152 "block_size": 4096, 00:13:34.152 "num_blocks": 3072, 
00:13:34.152 "uuid": "799ce17c-92a7-42d7-ab35-623e9ef03636", 00:13:34.152 "assigned_rate_limits": { 00:13:34.152 "rw_ios_per_sec": 0, 00:13:34.152 "rw_mbytes_per_sec": 0, 00:13:34.152 "r_mbytes_per_sec": 0, 00:13:34.152 "w_mbytes_per_sec": 0 00:13:34.152 }, 00:13:34.152 "claimed": false, 00:13:34.152 "zoned": false, 00:13:34.152 "supported_io_types": { 00:13:34.152 "read": true, 00:13:34.152 "write": true, 00:13:34.152 "unmap": true, 00:13:34.152 "write_zeroes": true, 00:13:34.152 "flush": false, 00:13:34.152 "reset": true, 00:13:34.152 "compare": false, 00:13:34.152 "compare_and_write": false, 00:13:34.152 "abort": false, 00:13:34.152 "nvme_admin": false, 00:13:34.152 "nvme_io": false 00:13:34.152 }, 00:13:34.152 "driver_specific": { 00:13:34.152 "lvol": { 00:13:34.152 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.152 "base_bdev": "aio_bdev0", 00:13:34.152 "thin_provision": false, 00:13:34.152 "snapshot": false, 00:13:34.152 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "name": "2bf5b04f-9adf-4242-b5aa-6d184243947e", 00:13:34.153 "aliases": [ 00:13:34.153 "lvs_test1/lvol_test5" 00:13:34.153 ], 00:13:34.153 "product_name": "Logical Volume", 00:13:34.153 "block_size": 4096, 00:13:34.153 "num_blocks": 3072, 00:13:34.153 "uuid": "2bf5b04f-9adf-4242-b5aa-6d184243947e", 00:13:34.153 "assigned_rate_limits": { 00:13:34.153 "rw_ios_per_sec": 0, 00:13:34.153 "rw_mbytes_per_sec": 0, 00:13:34.153 "r_mbytes_per_sec": 0, 00:13:34.153 "w_mbytes_per_sec": 0 00:13:34.153 }, 00:13:34.153 "claimed": false, 00:13:34.153 "zoned": false, 00:13:34.153 "supported_io_types": { 00:13:34.153 "read": true, 00:13:34.153 "write": true, 00:13:34.153 "unmap": true, 00:13:34.153 "write_zeroes": true, 00:13:34.153 "flush": false, 00:13:34.153 "reset": true, 00:13:34.153 "compare": false, 00:13:34.153 "compare_and_write": false, 00:13:34.153 "abort": false, 00:13:34.153 "nvme_admin": false, 00:13:34.153 "nvme_io": false 00:13:34.153 }, 00:13:34.153 "driver_specific": { 00:13:34.153 "lvol": { 00:13:34.153 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.153 "base_bdev": "aio_bdev0", 00:13:34.153 "thin_provision": false, 00:13:34.153 "snapshot": false, 00:13:34.153 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "name": "28fdba03-5935-49a1-9891-95a05a752708", 00:13:34.153 "aliases": [ 00:13:34.153 "lvs_test2/lvol_test2" 00:13:34.153 ], 00:13:34.153 "product_name": "Logical Volume", 00:13:34.153 "block_size": 4096, 00:13:34.153 "num_blocks": 16384, 00:13:34.153 "uuid": "28fdba03-5935-49a1-9891-95a05a752708", 00:13:34.153 "assigned_rate_limits": { 00:13:34.153 "rw_ios_per_sec": 0, 00:13:34.153 "rw_mbytes_per_sec": 0, 00:13:34.153 "r_mbytes_per_sec": 0, 00:13:34.153 "w_mbytes_per_sec": 0 00:13:34.153 }, 00:13:34.153 "claimed": false, 00:13:34.153 "zoned": false, 00:13:34.153 "supported_io_types": { 00:13:34.153 "read": true, 00:13:34.153 "write": true, 00:13:34.153 "unmap": true, 00:13:34.153 "write_zeroes": true, 00:13:34.153 "flush": false, 00:13:34.153 "reset": true, 00:13:34.153 "compare": false, 00:13:34.153 "compare_and_write": false, 00:13:34.153 "abort": false, 00:13:34.153 "nvme_admin": false, 00:13:34.153 "nvme_io": false 00:13:34.153 }, 00:13:34.153 "driver_specific": { 00:13:34.153 "lvol": { 00:13:34.153 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:34.153 "base_bdev": "aio_bdev1", 00:13:34.153 
"thin_provision": false, 00:13:34.153 "snapshot": false, 00:13:34.153 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "name": "61b580e3-0efb-445a-8d45-a1fb30b857dd", 00:13:34.153 "aliases": [ 00:13:34.153 "lvs_test2/lvol_test1" 00:13:34.153 ], 00:13:34.153 "product_name": "Logical Volume", 00:13:34.153 "block_size": 4096, 00:13:34.153 "num_blocks": 16384, 00:13:34.153 "uuid": "61b580e3-0efb-445a-8d45-a1fb30b857dd", 00:13:34.153 "assigned_rate_limits": { 00:13:34.153 "rw_ios_per_sec": 0, 00:13:34.153 "rw_mbytes_per_sec": 0, 00:13:34.153 "r_mbytes_per_sec": 0, 00:13:34.153 "w_mbytes_per_sec": 0 00:13:34.153 }, 00:13:34.153 "claimed": false, 00:13:34.153 "zoned": false, 00:13:34.153 "supported_io_types": { 00:13:34.153 "read": true, 00:13:34.153 "write": true, 00:13:34.153 "unmap": true, 00:13:34.153 "write_zeroes": true, 00:13:34.153 "flush": false, 00:13:34.153 "reset": true, 00:13:34.153 "compare": false, 00:13:34.153 "compare_and_write": false, 00:13:34.153 "abort": false, 00:13:34.153 "nvme_admin": false, 00:13:34.153 "nvme_io": false 00:13:34.153 }, 00:13:34.153 "driver_specific": { 00:13:34.153 "lvol": { 00:13:34.153 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:34.153 "base_bdev": "aio_bdev1", 00:13:34.153 "thin_provision": false, 00:13:34.153 "snapshot": false, 00:13:34.153 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "name": "7efacaf8-927e-4236-bf76-33ce1085f0d9", 00:13:34.153 "aliases": [ 00:13:34.153 "lvs_test2/lvol_test3" 00:13:34.153 ], 00:13:34.153 "product_name": "Logical Volume", 00:13:34.153 "block_size": 4096, 00:13:34.153 "num_blocks": 16384, 00:13:34.153 "uuid": "7efacaf8-927e-4236-bf76-33ce1085f0d9", 00:13:34.153 "assigned_rate_limits": { 00:13:34.153 "rw_ios_per_sec": 0, 00:13:34.153 "rw_mbytes_per_sec": 0, 00:13:34.153 "r_mbytes_per_sec": 0, 00:13:34.153 "w_mbytes_per_sec": 0 00:13:34.153 }, 00:13:34.153 "claimed": false, 00:13:34.153 "zoned": false, 00:13:34.153 "supported_io_types": { 00:13:34.153 "read": true, 00:13:34.153 "write": true, 00:13:34.153 "unmap": true, 00:13:34.153 "write_zeroes": true, 00:13:34.153 "flush": false, 00:13:34.153 "reset": true, 00:13:34.153 "compare": false, 00:13:34.153 "compare_and_write": false, 00:13:34.153 "abort": false, 00:13:34.153 "nvme_admin": false, 00:13:34.153 "nvme_io": false 00:13:34.153 }, 00:13:34.153 "driver_specific": { 00:13:34.153 "lvol": { 00:13:34.153 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:34.153 "base_bdev": "aio_bdev1", 00:13:34.153 "thin_provision": false, 00:13:34.153 "snapshot": false, 00:13:34.153 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "name": "6bf2e2fb-33ee-47f4-a62a-322d020625e7", 00:13:34.153 "aliases": [ 00:13:34.153 "lvs_test2/lvol_test5" 00:13:34.153 ], 00:13:34.153 "product_name": "Logical Volume", 00:13:34.153 "block_size": 4096, 00:13:34.153 "num_blocks": 16384, 00:13:34.153 "uuid": "6bf2e2fb-33ee-47f4-a62a-322d020625e7", 00:13:34.153 "assigned_rate_limits": { 00:13:34.153 "rw_ios_per_sec": 0, 00:13:34.153 "rw_mbytes_per_sec": 0, 00:13:34.153 "r_mbytes_per_sec": 0, 00:13:34.153 "w_mbytes_per_sec": 0 00:13:34.153 }, 00:13:34.153 "claimed": false, 00:13:34.153 "zoned": false, 00:13:34.153 "supported_io_types": { 00:13:34.153 "read": true, 00:13:34.153 "write": true, 00:13:34.153 "unmap": true, 00:13:34.153 "write_zeroes": 
true, 00:13:34.153 "flush": false, 00:13:34.153 "reset": true, 00:13:34.153 "compare": false, 00:13:34.153 "compare_and_write": false, 00:13:34.153 "abort": false, 00:13:34.153 "nvme_admin": false, 00:13:34.153 "nvme_io": false 00:13:34.153 }, 00:13:34.153 "driver_specific": { 00:13:34.153 "lvol": { 00:13:34.153 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:34.153 "base_bdev": "aio_bdev1", 00:13:34.153 "thin_provision": false, 00:13:34.153 "snapshot": false, 00:13:34.153 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "name": "b787db7a-bd51-45e5-abc4-f6e426fb8a83", 00:13:34.153 "aliases": [ 00:13:34.153 "lvs_test2/lvol_test4" 00:13:34.153 ], 00:13:34.153 "product_name": "Logical Volume", 00:13:34.153 "block_size": 4096, 00:13:34.153 "num_blocks": 16384, 00:13:34.153 "uuid": "b787db7a-bd51-45e5-abc4-f6e426fb8a83", 00:13:34.153 "assigned_rate_limits": { 00:13:34.153 "rw_ios_per_sec": 0, 00:13:34.153 "rw_mbytes_per_sec": 0, 00:13:34.153 "r_mbytes_per_sec": 0, 00:13:34.153 "w_mbytes_per_sec": 0 00:13:34.153 }, 00:13:34.153 "claimed": false, 00:13:34.153 "zoned": false, 00:13:34.153 "supported_io_types": { 00:13:34.153 "read": true, 00:13:34.153 "write": true, 00:13:34.153 "unmap": true, 00:13:34.153 "write_zeroes": true, 00:13:34.153 "flush": false, 00:13:34.153 "reset": true, 00:13:34.153 "compare": false, 00:13:34.153 "compare_and_write": false, 00:13:34.153 "abort": false, 00:13:34.153 "nvme_admin": false, 00:13:34.153 "nvme_io": false 00:13:34.153 }, 00:13:34.153 "driver_specific": { 00:13:34.153 "lvol": { 00:13:34.153 "lvol_store_uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:34.153 "base_bdev": "aio_bdev1", 00:13:34.153 "thin_provision": false, 00:13:34.153 "snapshot": false, 00:13:34.153 "clone": false, 00:13:34.153 "esnap_clone": false 00:13:34.153 } 00:13:34.153 } 00:13:34.153 } 00:13:34.153 ]' 00:13:34.153 12:33:16 -- lvol/tasting.sh@82 -- # jq length 00:13:34.153 12:33:16 -- lvol/tasting.sh@82 -- # '[' 10 == 10 ']' 00:13:34.153 12:33:16 -- lvol/tasting.sh@83 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:34.153 12:33:16 -- lvol/tasting.sh@83 -- # jq . 00:13:34.153 12:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.153 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.153 12:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.153 12:33:16 -- lvol/tasting.sh@83 -- # new_lvs='[ 00:13:34.153 { 00:13:34.153 "uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.153 "name": "lvs_test1", 00:13:34.153 "base_bdev": "aio_bdev0", 00:13:34.153 "total_data_clusters": 398, 00:13:34.153 "free_clusters": 338, 00:13:34.153 "block_size": 4096, 00:13:34.153 "cluster_size": 1048576 00:13:34.153 }, 00:13:34.153 { 00:13:34.153 "uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:34.153 "name": "lvs_test2", 00:13:34.153 "base_bdev": "aio_bdev1", 00:13:34.153 "total_data_clusters": 11, 00:13:34.153 "free_clusters": 1, 00:13:34.153 "block_size": 4096, 00:13:34.153 "cluster_size": 33554432 00:13:34.153 } 00:13:34.153 ]' 00:13:34.153 12:33:16 -- lvol/tasting.sh@84 -- # jq '. | sort' 00:13:34.153 12:33:16 -- lvol/tasting.sh@84 -- # diff /dev/fd/62 /dev/fd/61 00:13:34.153 12:33:16 -- lvol/tasting.sh@84 -- # jq '. | sort' 00:13:34.153 12:33:16 -- lvol/tasting.sh@88 -- # diff /dev/fd/62 /dev/fd/61 00:13:34.153 12:33:16 -- lvol/tasting.sh@88 -- # jq '. | sort' 00:13:34.153 12:33:16 -- lvol/tasting.sh@88 -- # jq '. 
| sort' 00:13:34.413 12:33:16 -- lvol/tasting.sh@94 -- # seq 6 10 00:13:34.413 12:33:16 -- lvol/tasting.sh@94 -- # for i in $(seq 6 10) 00:13:34.413 12:33:16 -- lvol/tasting.sh@95 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test6 12 00:13:34.413 12:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.413 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.413 12:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.413 12:33:16 -- lvol/tasting.sh@95 -- # lvol_uuid=6d73699c-2537-4aab-84df-de3f781296e7 00:13:34.413 12:33:16 -- lvol/tasting.sh@96 -- # rpc_cmd bdev_get_bdevs -b 6d73699c-2537-4aab-84df-de3f781296e7 00:13:34.413 12:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.413 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.413 12:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.413 12:33:16 -- lvol/tasting.sh@96 -- # lvol='[ 00:13:34.413 { 00:13:34.413 "name": "6d73699c-2537-4aab-84df-de3f781296e7", 00:13:34.413 "aliases": [ 00:13:34.413 "lvs_test1/lvol_test6" 00:13:34.413 ], 00:13:34.413 "product_name": "Logical Volume", 00:13:34.413 "block_size": 4096, 00:13:34.413 "num_blocks": 3072, 00:13:34.413 "uuid": "6d73699c-2537-4aab-84df-de3f781296e7", 00:13:34.413 "assigned_rate_limits": { 00:13:34.413 "rw_ios_per_sec": 0, 00:13:34.413 "rw_mbytes_per_sec": 0, 00:13:34.413 "r_mbytes_per_sec": 0, 00:13:34.413 "w_mbytes_per_sec": 0 00:13:34.413 }, 00:13:34.413 "claimed": false, 00:13:34.413 "zoned": false, 00:13:34.413 "supported_io_types": { 00:13:34.413 "read": true, 00:13:34.413 "write": true, 00:13:34.413 "unmap": true, 00:13:34.413 "write_zeroes": true, 00:13:34.413 "flush": false, 00:13:34.413 "reset": true, 00:13:34.413 "compare": false, 00:13:34.413 "compare_and_write": false, 00:13:34.413 "abort": false, 00:13:34.413 "nvme_admin": false, 00:13:34.413 "nvme_io": false 00:13:34.413 }, 00:13:34.413 "driver_specific": { 00:13:34.413 "lvol": { 00:13:34.413 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.413 "base_bdev": "aio_bdev0", 00:13:34.413 "thin_provision": false, 00:13:34.413 "snapshot": false, 00:13:34.413 "clone": false, 00:13:34.413 "esnap_clone": false 00:13:34.413 } 00:13:34.413 } 00:13:34.413 } 00:13:34.413 ]' 00:13:34.413 12:33:16 -- lvol/tasting.sh@98 -- # jq -r '.[0].name' 00:13:34.413 12:33:16 -- lvol/tasting.sh@98 -- # '[' 6d73699c-2537-4aab-84df-de3f781296e7 = 6d73699c-2537-4aab-84df-de3f781296e7 ']' 00:13:34.413 12:33:16 -- lvol/tasting.sh@99 -- # jq -r '.[0].uuid' 00:13:34.413 12:33:16 -- lvol/tasting.sh@99 -- # '[' 6d73699c-2537-4aab-84df-de3f781296e7 = 6d73699c-2537-4aab-84df-de3f781296e7 ']' 00:13:34.413 12:33:16 -- lvol/tasting.sh@100 -- # jq -r '.[0].aliases[0]' 00:13:34.413 12:33:16 -- lvol/tasting.sh@100 -- # '[' lvs_test1/lvol_test6 = lvs_test1/lvol_test6 ']' 00:13:34.413 12:33:16 -- lvol/tasting.sh@101 -- # jq -r '.[0].block_size' 00:13:34.672 12:33:16 -- lvol/tasting.sh@101 -- # '[' 4096 = 4096 ']' 00:13:34.672 12:33:16 -- lvol/tasting.sh@102 -- # jq -r '.[0].num_blocks' 00:13:34.672 12:33:16 -- lvol/tasting.sh@102 -- # '[' 3072 = 3072 ']' 00:13:34.672 12:33:16 -- lvol/tasting.sh@94 -- # for i in $(seq 6 10) 00:13:34.672 12:33:16 -- lvol/tasting.sh@95 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test7 12 00:13:34.672 12:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.672 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.672 12:33:17 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:13:34.672 12:33:17 -- lvol/tasting.sh@95 -- # lvol_uuid=b86a12e7-7b2d-4ba7-816f-be72a3b06f91 00:13:34.672 12:33:17 -- lvol/tasting.sh@96 -- # rpc_cmd bdev_get_bdevs -b b86a12e7-7b2d-4ba7-816f-be72a3b06f91 00:13:34.672 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.672 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:34.672 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.672 12:33:17 -- lvol/tasting.sh@96 -- # lvol='[ 00:13:34.672 { 00:13:34.672 "name": "b86a12e7-7b2d-4ba7-816f-be72a3b06f91", 00:13:34.672 "aliases": [ 00:13:34.672 "lvs_test1/lvol_test7" 00:13:34.672 ], 00:13:34.672 "product_name": "Logical Volume", 00:13:34.672 "block_size": 4096, 00:13:34.672 "num_blocks": 3072, 00:13:34.672 "uuid": "b86a12e7-7b2d-4ba7-816f-be72a3b06f91", 00:13:34.672 "assigned_rate_limits": { 00:13:34.672 "rw_ios_per_sec": 0, 00:13:34.672 "rw_mbytes_per_sec": 0, 00:13:34.672 "r_mbytes_per_sec": 0, 00:13:34.672 "w_mbytes_per_sec": 0 00:13:34.672 }, 00:13:34.672 "claimed": false, 00:13:34.672 "zoned": false, 00:13:34.672 "supported_io_types": { 00:13:34.672 "read": true, 00:13:34.672 "write": true, 00:13:34.672 "unmap": true, 00:13:34.672 "write_zeroes": true, 00:13:34.672 "flush": false, 00:13:34.672 "reset": true, 00:13:34.672 "compare": false, 00:13:34.672 "compare_and_write": false, 00:13:34.672 "abort": false, 00:13:34.672 "nvme_admin": false, 00:13:34.672 "nvme_io": false 00:13:34.672 }, 00:13:34.672 "driver_specific": { 00:13:34.672 "lvol": { 00:13:34.672 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.672 "base_bdev": "aio_bdev0", 00:13:34.672 "thin_provision": false, 00:13:34.672 "snapshot": false, 00:13:34.672 "clone": false, 00:13:34.672 "esnap_clone": false 00:13:34.672 } 00:13:34.672 } 00:13:34.672 } 00:13:34.672 ]' 00:13:34.672 12:33:17 -- lvol/tasting.sh@98 -- # jq -r '.[0].name' 00:13:34.672 12:33:17 -- lvol/tasting.sh@98 -- # '[' b86a12e7-7b2d-4ba7-816f-be72a3b06f91 = b86a12e7-7b2d-4ba7-816f-be72a3b06f91 ']' 00:13:34.672 12:33:17 -- lvol/tasting.sh@99 -- # jq -r '.[0].uuid' 00:13:34.672 12:33:17 -- lvol/tasting.sh@99 -- # '[' b86a12e7-7b2d-4ba7-816f-be72a3b06f91 = b86a12e7-7b2d-4ba7-816f-be72a3b06f91 ']' 00:13:34.672 12:33:17 -- lvol/tasting.sh@100 -- # jq -r '.[0].aliases[0]' 00:13:34.672 12:33:17 -- lvol/tasting.sh@100 -- # '[' lvs_test1/lvol_test7 = lvs_test1/lvol_test7 ']' 00:13:34.672 12:33:17 -- lvol/tasting.sh@101 -- # jq -r '.[0].block_size' 00:13:34.932 12:33:17 -- lvol/tasting.sh@101 -- # '[' 4096 = 4096 ']' 00:13:34.932 12:33:17 -- lvol/tasting.sh@102 -- # jq -r '.[0].num_blocks' 00:13:34.932 12:33:17 -- lvol/tasting.sh@102 -- # '[' 3072 = 3072 ']' 00:13:34.932 12:33:17 -- lvol/tasting.sh@94 -- # for i in $(seq 6 10) 00:13:34.932 12:33:17 -- lvol/tasting.sh@95 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test8 12 00:13:34.932 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.932 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:34.932 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.932 12:33:17 -- lvol/tasting.sh@95 -- # lvol_uuid=6d554570-d39c-4167-9c35-d9177533be77 00:13:34.932 12:33:17 -- lvol/tasting.sh@96 -- # rpc_cmd bdev_get_bdevs -b 6d554570-d39c-4167-9c35-d9177533be77 00:13:34.932 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.932 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:34.932 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.932 
12:33:17 -- lvol/tasting.sh@96 -- # lvol='[ 00:13:34.932 { 00:13:34.932 "name": "6d554570-d39c-4167-9c35-d9177533be77", 00:13:34.932 "aliases": [ 00:13:34.932 "lvs_test1/lvol_test8" 00:13:34.932 ], 00:13:34.932 "product_name": "Logical Volume", 00:13:34.932 "block_size": 4096, 00:13:34.932 "num_blocks": 3072, 00:13:34.932 "uuid": "6d554570-d39c-4167-9c35-d9177533be77", 00:13:34.932 "assigned_rate_limits": { 00:13:34.932 "rw_ios_per_sec": 0, 00:13:34.932 "rw_mbytes_per_sec": 0, 00:13:34.932 "r_mbytes_per_sec": 0, 00:13:34.932 "w_mbytes_per_sec": 0 00:13:34.932 }, 00:13:34.932 "claimed": false, 00:13:34.932 "zoned": false, 00:13:34.932 "supported_io_types": { 00:13:34.932 "read": true, 00:13:34.932 "write": true, 00:13:34.932 "unmap": true, 00:13:34.932 "write_zeroes": true, 00:13:34.932 "flush": false, 00:13:34.932 "reset": true, 00:13:34.932 "compare": false, 00:13:34.932 "compare_and_write": false, 00:13:34.932 "abort": false, 00:13:34.932 "nvme_admin": false, 00:13:34.932 "nvme_io": false 00:13:34.932 }, 00:13:34.932 "driver_specific": { 00:13:34.932 "lvol": { 00:13:34.932 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:34.932 "base_bdev": "aio_bdev0", 00:13:34.932 "thin_provision": false, 00:13:34.932 "snapshot": false, 00:13:34.932 "clone": false, 00:13:34.932 "esnap_clone": false 00:13:34.932 } 00:13:34.932 } 00:13:34.932 } 00:13:34.932 ]' 00:13:34.932 12:33:17 -- lvol/tasting.sh@98 -- # jq -r '.[0].name' 00:13:34.932 12:33:17 -- lvol/tasting.sh@98 -- # '[' 6d554570-d39c-4167-9c35-d9177533be77 = 6d554570-d39c-4167-9c35-d9177533be77 ']' 00:13:34.932 12:33:17 -- lvol/tasting.sh@99 -- # jq -r '.[0].uuid' 00:13:34.932 12:33:17 -- lvol/tasting.sh@99 -- # '[' 6d554570-d39c-4167-9c35-d9177533be77 = 6d554570-d39c-4167-9c35-d9177533be77 ']' 00:13:34.932 12:33:17 -- lvol/tasting.sh@100 -- # jq -r '.[0].aliases[0]' 00:13:35.191 12:33:17 -- lvol/tasting.sh@100 -- # '[' lvs_test1/lvol_test8 = lvs_test1/lvol_test8 ']' 00:13:35.191 12:33:17 -- lvol/tasting.sh@101 -- # jq -r '.[0].block_size' 00:13:35.191 12:33:17 -- lvol/tasting.sh@101 -- # '[' 4096 = 4096 ']' 00:13:35.191 12:33:17 -- lvol/tasting.sh@102 -- # jq -r '.[0].num_blocks' 00:13:35.191 12:33:17 -- lvol/tasting.sh@102 -- # '[' 3072 = 3072 ']' 00:13:35.191 12:33:17 -- lvol/tasting.sh@94 -- # for i in $(seq 6 10) 00:13:35.191 12:33:17 -- lvol/tasting.sh@95 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test9 12 00:13:35.191 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.191 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.191 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.191 12:33:17 -- lvol/tasting.sh@95 -- # lvol_uuid=31cbb05c-99c0-464f-b6f7-5e4b517127b8 00:13:35.191 12:33:17 -- lvol/tasting.sh@96 -- # rpc_cmd bdev_get_bdevs -b 31cbb05c-99c0-464f-b6f7-5e4b517127b8 00:13:35.191 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.191 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.191 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.191 12:33:17 -- lvol/tasting.sh@96 -- # lvol='[ 00:13:35.191 { 00:13:35.191 "name": "31cbb05c-99c0-464f-b6f7-5e4b517127b8", 00:13:35.191 "aliases": [ 00:13:35.191 "lvs_test1/lvol_test9" 00:13:35.191 ], 00:13:35.191 "product_name": "Logical Volume", 00:13:35.191 "block_size": 4096, 00:13:35.191 "num_blocks": 3072, 00:13:35.191 "uuid": "31cbb05c-99c0-464f-b6f7-5e4b517127b8", 00:13:35.191 "assigned_rate_limits": { 00:13:35.191 "rw_ios_per_sec": 0, 00:13:35.191 
"rw_mbytes_per_sec": 0, 00:13:35.191 "r_mbytes_per_sec": 0, 00:13:35.191 "w_mbytes_per_sec": 0 00:13:35.191 }, 00:13:35.191 "claimed": false, 00:13:35.191 "zoned": false, 00:13:35.191 "supported_io_types": { 00:13:35.191 "read": true, 00:13:35.191 "write": true, 00:13:35.191 "unmap": true, 00:13:35.191 "write_zeroes": true, 00:13:35.191 "flush": false, 00:13:35.191 "reset": true, 00:13:35.191 "compare": false, 00:13:35.191 "compare_and_write": false, 00:13:35.191 "abort": false, 00:13:35.191 "nvme_admin": false, 00:13:35.191 "nvme_io": false 00:13:35.191 }, 00:13:35.191 "driver_specific": { 00:13:35.191 "lvol": { 00:13:35.191 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:35.191 "base_bdev": "aio_bdev0", 00:13:35.191 "thin_provision": false, 00:13:35.191 "snapshot": false, 00:13:35.191 "clone": false, 00:13:35.191 "esnap_clone": false 00:13:35.191 } 00:13:35.191 } 00:13:35.191 } 00:13:35.191 ]' 00:13:35.191 12:33:17 -- lvol/tasting.sh@98 -- # jq -r '.[0].name' 00:13:35.191 12:33:17 -- lvol/tasting.sh@98 -- # '[' 31cbb05c-99c0-464f-b6f7-5e4b517127b8 = 31cbb05c-99c0-464f-b6f7-5e4b517127b8 ']' 00:13:35.191 12:33:17 -- lvol/tasting.sh@99 -- # jq -r '.[0].uuid' 00:13:35.191 12:33:17 -- lvol/tasting.sh@99 -- # '[' 31cbb05c-99c0-464f-b6f7-5e4b517127b8 = 31cbb05c-99c0-464f-b6f7-5e4b517127b8 ']' 00:13:35.191 12:33:17 -- lvol/tasting.sh@100 -- # jq -r '.[0].aliases[0]' 00:13:35.450 12:33:17 -- lvol/tasting.sh@100 -- # '[' lvs_test1/lvol_test9 = lvs_test1/lvol_test9 ']' 00:13:35.450 12:33:17 -- lvol/tasting.sh@101 -- # jq -r '.[0].block_size' 00:13:35.450 12:33:17 -- lvol/tasting.sh@101 -- # '[' 4096 = 4096 ']' 00:13:35.450 12:33:17 -- lvol/tasting.sh@102 -- # jq -r '.[0].num_blocks' 00:13:35.450 12:33:17 -- lvol/tasting.sh@102 -- # '[' 3072 = 3072 ']' 00:13:35.450 12:33:17 -- lvol/tasting.sh@94 -- # for i in $(seq 6 10) 00:13:35.450 12:33:17 -- lvol/tasting.sh@95 -- # rpc_cmd bdev_lvol_create -u 8249d544-7e67-497e-8e64-848a0275e422 lvol_test10 12 00:13:35.450 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.450 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.450 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.450 12:33:17 -- lvol/tasting.sh@95 -- # lvol_uuid=b3ffa963-016d-410e-a3fd-dc07b5a5f58a 00:13:35.450 12:33:17 -- lvol/tasting.sh@96 -- # rpc_cmd bdev_get_bdevs -b b3ffa963-016d-410e-a3fd-dc07b5a5f58a 00:13:35.450 12:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.450 12:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.450 12:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.450 12:33:17 -- lvol/tasting.sh@96 -- # lvol='[ 00:13:35.450 { 00:13:35.450 "name": "b3ffa963-016d-410e-a3fd-dc07b5a5f58a", 00:13:35.450 "aliases": [ 00:13:35.450 "lvs_test1/lvol_test10" 00:13:35.450 ], 00:13:35.450 "product_name": "Logical Volume", 00:13:35.450 "block_size": 4096, 00:13:35.450 "num_blocks": 3072, 00:13:35.450 "uuid": "b3ffa963-016d-410e-a3fd-dc07b5a5f58a", 00:13:35.450 "assigned_rate_limits": { 00:13:35.450 "rw_ios_per_sec": 0, 00:13:35.450 "rw_mbytes_per_sec": 0, 00:13:35.450 "r_mbytes_per_sec": 0, 00:13:35.450 "w_mbytes_per_sec": 0 00:13:35.450 }, 00:13:35.450 "claimed": false, 00:13:35.450 "zoned": false, 00:13:35.450 "supported_io_types": { 00:13:35.450 "read": true, 00:13:35.450 "write": true, 00:13:35.450 "unmap": true, 00:13:35.450 "write_zeroes": true, 00:13:35.450 "flush": false, 00:13:35.450 "reset": true, 00:13:35.450 "compare": false, 00:13:35.450 "compare_and_write": false, 
00:13:35.450 "abort": false, 00:13:35.450 "nvme_admin": false, 00:13:35.450 "nvme_io": false 00:13:35.450 }, 00:13:35.450 "driver_specific": { 00:13:35.450 "lvol": { 00:13:35.450 "lvol_store_uuid": "8249d544-7e67-497e-8e64-848a0275e422", 00:13:35.450 "base_bdev": "aio_bdev0", 00:13:35.450 "thin_provision": false, 00:13:35.450 "snapshot": false, 00:13:35.450 "clone": false, 00:13:35.450 "esnap_clone": false 00:13:35.450 } 00:13:35.450 } 00:13:35.450 } 00:13:35.450 ]' 00:13:35.450 12:33:17 -- lvol/tasting.sh@98 -- # jq -r '.[0].name' 00:13:35.450 12:33:17 -- lvol/tasting.sh@98 -- # '[' b3ffa963-016d-410e-a3fd-dc07b5a5f58a = b3ffa963-016d-410e-a3fd-dc07b5a5f58a ']' 00:13:35.450 12:33:17 -- lvol/tasting.sh@99 -- # jq -r '.[0].uuid' 00:13:35.709 12:33:17 -- lvol/tasting.sh@99 -- # '[' b3ffa963-016d-410e-a3fd-dc07b5a5f58a = b3ffa963-016d-410e-a3fd-dc07b5a5f58a ']' 00:13:35.709 12:33:17 -- lvol/tasting.sh@100 -- # jq -r '.[0].aliases[0]' 00:13:35.709 12:33:18 -- lvol/tasting.sh@100 -- # '[' lvs_test1/lvol_test10 = lvs_test1/lvol_test10 ']' 00:13:35.709 12:33:18 -- lvol/tasting.sh@101 -- # jq -r '.[0].block_size' 00:13:35.709 12:33:18 -- lvol/tasting.sh@101 -- # '[' 4096 = 4096 ']' 00:13:35.709 12:33:18 -- lvol/tasting.sh@102 -- # jq -r '.[0].num_blocks' 00:13:35.709 12:33:18 -- lvol/tasting.sh@102 -- # '[' 3072 = 3072 ']' 00:13:35.709 12:33:18 -- lvol/tasting.sh@105 -- # seq 1 10 00:13:35.709 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.709 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test1 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test2 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test3 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test4 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test5 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test6 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test7 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test8 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test9 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@105 -- # for i in $(seq 1 10) 00:13:35.710 12:33:18 -- lvol/tasting.sh@106 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test10 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.710 12:33:18 -- lvol/tasting.sh@109 -- # rpc_cmd bdev_lvol_delete_lvstore -u 8249d544-7e67-497e-8e64-848a0275e422 00:13:35.710 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.710 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.969 12:33:18 -- lvol/tasting.sh@112 -- # rpc_cmd bdev_lvol_create_lvstore aio_bdev0 lvs_test1 00:13:35.969 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.969 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.969 12:33:18 -- lvol/tasting.sh@112 -- # lvs_uuid1=4a4482ef-34bb-4e56-8712-84d60da9e540 00:13:35.969 12:33:18 -- lvol/tasting.sh@113 -- # seq 1 10 00:13:35.969 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.969 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test1 12 00:13:35.969 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.969 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 f72e0805-5264-4bd2-8482-0e0029821b6c 00:13:35.969 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.969 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.969 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test2 12 00:13:35.969 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.969 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 f272c8d2-5481-4349-b92c-7579f8894e8c 00:13:35.969 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.969 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.969 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 
4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test3 12 00:13:35.969 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.969 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 8e3f2397-2029-477f-9400-002677b72c1d 00:13:35.969 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.969 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.969 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test4 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 b3493d3c-13af-4e8f-ac18-89b957ba1a28 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.970 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test5 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 31b8206f-6a69-4ce8-a510-2531e09b2bfd 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.970 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test6 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 99270e90-6b5e-4914-b9f5-675e11a7cf36 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.970 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test7 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 624cfb10-46e6-4b05-8ae2-b6667fc255b3 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.970 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test8 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 4e7ac2a4-70ad-4bc8-af4b-e110ba37b2de 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.970 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test9 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 93b9eb10-f918-4d11-99f2-77f6f9c8a943 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@113 -- # for i in $(seq 1 10) 00:13:35.970 12:33:18 -- lvol/tasting.sh@114 -- # rpc_cmd bdev_lvol_create -u 4a4482ef-34bb-4e56-8712-84d60da9e540 lvol_test10 12 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 2b1e6cf5-376d-46de-b86c-074d49578cb9 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@118 -- # rpc_cmd bdev_lvol_delete_lvstore -u 4a4482ef-34bb-4e56-8712-84d60da9e540 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@119 -- # rpc_cmd bdev_lvol_get_lvstores -u 4a4482ef-34bb-4e56-8712-84d60da9e540 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 request: 00:13:35.970 { 00:13:35.970 "uuid": "4a4482ef-34bb-4e56-8712-84d60da9e540", 00:13:35.970 "method": "bdev_lvol_get_lvstores", 00:13:35.970 "req_id": 1 00:13:35.970 } 00:13:35.970 Got JSON-RPC error response 00:13:35.970 response: 00:13:35.970 { 00:13:35.970 "code": -19, 00:13:35.970 "message": "No such device" 00:13:35.970 } 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@120 -- # rpc_cmd bdev_lvol_delete_lvstore -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@121 -- # rpc_cmd bdev_lvol_get_lvstores -u be4e422d-a93e-49b6-8dbb-e5b2263ea0c0 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 request: 00:13:35.970 { 00:13:35.970 "uuid": "be4e422d-a93e-49b6-8dbb-e5b2263ea0c0", 00:13:35.970 "method": "bdev_lvol_get_lvstores", 00:13:35.970 "req_id": 1 00:13:35.970 } 00:13:35.970 Got JSON-RPC error response 00:13:35.970 response: 00:13:35.970 { 00:13:35.970 "code": -19, 00:13:35.970 "message": "No such device" 00:13:35.970 } 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@122 -- # rpc_cmd bdev_aio_delete aio_bdev0 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@123 -- # rpc_cmd bdev_aio_delete aio_bdev1 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/tasting.sh@124 -- # check_leftover_devices 00:13:35.970 12:33:18 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:35.970 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.970 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.970 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.970 12:33:18 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:35.970 12:33:18 -- lvol/common.sh@26 -- # jq length 00:13:36.229 12:33:18 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:36.229 12:33:18 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:36.229 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.229 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.229 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.229 12:33:18 -- lvol/common.sh@27 -- # leftover_lvs='[]' 
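Both "No such device" responses above are the expected outcome: -19 is -ENODEV, which the lvol RPC layer returns once a lvstore UUID no longer exists, and the [[ 1 == 0 ]] traces correspond to rpc_cmd reporting the resulting non-zero status. Outside the harness the same negative check can be written directly against rpc.py's exit code — a sketch under the same client/socket assumptions as above:

    lvs_uuid=4a4482ef-34bb-4e56-8712-84d60da9e540                    # one of the lvstores deleted above
    if scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" >/dev/null 2>&1; then
        echo "unexpected: lvstore $lvs_uuid still exists" >&2
    else
        echo "lvstore $lvs_uuid is gone (JSON-RPC -19 / ENODEV)"     # deletion confirmed
    fi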
00:13:36.229 12:33:18 -- lvol/common.sh@28 -- # jq length 00:13:36.229 ************************************ 00:13:36.229 END TEST test_tasting 00:13:36.229 ************************************ 00:13:36.229 12:33:18 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:36.229 00:13:36.229 real 0m11.278s 00:13:36.229 user 0m12.499s 00:13:36.229 sys 0m1.038s 00:13:36.229 12:33:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.229 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.229 12:33:18 -- lvol/tasting.sh@170 -- # run_test test_delete_lvol_store_persistent_positive test_delete_lvol_store_persistent_positive 00:13:36.229 12:33:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:36.229 12:33:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.229 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.229 ************************************ 00:13:36.229 START TEST test_delete_lvol_store_persistent_positive 00:13:36.229 ************************************ 00:13:36.229 12:33:18 -- common/autotest_common.sh@1104 -- # test_delete_lvol_store_persistent_positive 00:13:36.229 12:33:18 -- lvol/tasting.sh@129 -- # local aio0=/home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:13:36.229 12:33:18 -- lvol/tasting.sh@130 -- # local bdev_aio_name=aio_bdev_0 bdev_block_size=4096 00:13:36.229 12:33:18 -- lvol/tasting.sh@131 -- # local lvstore_name=lvstore_test lvstore_uuid 00:13:36.229 12:33:18 -- lvol/tasting.sh@133 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 aio_bdev_0 4096 00:13:36.229 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.229 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.229 aio_bdev_0 00:13:36.229 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.229 12:33:18 -- lvol/tasting.sh@135 -- # get_bdev_jq bdev_get_bdevs -b aio_bdev_0 00:13:36.229 12:33:18 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b aio_bdev_0 00:13:36.229 12:33:18 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:13:36.229 12:33:18 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:36.229 12:33:18 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:36.229 12:33:18 -- common/autotest_common.sh@586 -- # local jq val 00:13:36.229 12:33:18 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:36.229 12:33:18 -- common/autotest_common.sh@596 -- # local lvs 00:13:36.229 12:33:18 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:36.229 12:33:18 -- common/autotest_common.sh@611 -- # local bdev 00:13:36.229 12:33:18 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:13:36.229 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.229 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:13:36.229 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.229 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # 
jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," 
",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:13:36.230 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.230 12:33:18 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:13:36.230 12:33:18 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:36.230 12:33:18 -- common/autotest_common.sh@620 -- # shift 00:13:36.230 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.230 12:33:18 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b aio_bdev_0 00:13:36.230 12:33:18 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:13:36.230 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.230 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.230 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=aio_bdev_0 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=bb3872e8-8a9e-4547-b978-c9bbbe82041b 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4096 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=102400 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=bb3872e8-8a9e-4547-b978-c9bbbe82041b 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='AIO disk' 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:36.489 
12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:13:36.489 12:33:18 -- lvol/tasting.sh@136 -- # [[ aio_bdev_0 == \a\i\o\_\b\d\e\v\_\0 ]] 00:13:36.489 12:33:18 -- lvol/tasting.sh@137 -- # [[ AIO disk == \A\I\O\ \d\i\s\k ]] 00:13:36.489 12:33:18 -- lvol/tasting.sh@138 -- # (( jq_out[block_size] == bdev_block_size )) 00:13:36.489 12:33:18 -- lvol/tasting.sh@140 -- # rpc_cmd bdev_lvol_create_lvstore aio_bdev_0 lvstore_test 00:13:36.489 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.489 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.489 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.489 12:33:18 -- lvol/tasting.sh@140 -- # lvstore_uuid=828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:36.489 12:33:18 -- lvol/tasting.sh@142 -- # get_lvs_jq bdev_lvol_get_lvstores -u 828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:36.489 12:33:18 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u 828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:36.489 12:33:18 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:13:36.489 12:33:18 -- common/autotest_common.sh@585 -- # jq_out=() 00:13:36.489 12:33:18 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:13:36.489 12:33:18 -- common/autotest_common.sh@586 -- # local jq val 00:13:36.489 12:33:18 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:13:36.489 12:33:18 -- common/autotest_common.sh@596 -- # local lvs 00:13:36.489 12:33:18 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:13:36.489 12:33:18 -- common/autotest_common.sh@611 -- # local bdev 00:13:36.489 12:33:18 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- 
# for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:13:36.489 12:33:18 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:13:36.489 12:33:18 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:13:36.489 12:33:18 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:13:36.489 12:33:18 -- common/autotest_common.sh@620 -- # shift 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u 828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:36.489 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.489 12:33:18 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:13:36.489 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.489 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvstore_test 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=aio_bdev_0 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=99 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=99 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4096 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:13:36.489 12:33:18 -- common/autotest_common.sh@621 -- # read -r elem val 00:13:36.489 12:33:18 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:13:36.490 12:33:18 -- lvol/tasting.sh@143 -- # [[ 828490d3-56d0-4074-8d97-f1a8c48a9dcd 
== \8\2\8\4\9\0\d\3\-\5\6\d\0\-\4\0\7\4\-\8\d\9\7\-\f\1\a\8\c\4\8\a\9\d\c\d ]] 00:13:36.490 12:33:18 -- lvol/tasting.sh@144 -- # [[ lvstore_test == \l\v\s\t\o\r\e\_\t\e\s\t ]] 00:13:36.490 12:33:18 -- lvol/tasting.sh@145 -- # [[ aio_bdev_0 == \a\i\o\_\b\d\e\v\_\0 ]] 00:13:36.490 12:33:18 -- lvol/tasting.sh@147 -- # rpc_cmd bdev_lvol_delete_lvstore -u 828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:36.490 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.490 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.490 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.490 12:33:18 -- lvol/tasting.sh@148 -- # rpc_cmd bdev_aio_delete aio_bdev_0 00:13:36.490 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.490 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.490 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.490 12:33:18 -- lvol/tasting.sh@150 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 aio_bdev_0 4096 00:13:36.490 12:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.490 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.490 aio_bdev_0 00:13:36.490 12:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.490 12:33:18 -- lvol/tasting.sh@152 -- # sleep 1 00:13:37.425 12:33:19 -- lvol/tasting.sh@156 -- # rpc_cmd bdev_lvol_get_lvstores -u 828490d3-56d0-4074-8d97-f1a8c48a9dcd 00:13:37.425 12:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.425 12:33:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.425 request: 00:13:37.425 { 00:13:37.425 "uuid": "828490d3-56d0-4074-8d97-f1a8c48a9dcd", 00:13:37.425 "method": "bdev_lvol_get_lvstores", 00:13:37.425 "req_id": 1 00:13:37.425 } 00:13:37.425 Got JSON-RPC error response 00:13:37.425 response: 00:13:37.425 { 00:13:37.425 "code": -19, 00:13:37.425 "message": "No such device" 00:13:37.425 } 00:13:37.425 12:33:19 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:37.425 12:33:19 -- lvol/tasting.sh@159 -- # rpc_cmd bdev_aio_delete aio_bdev_0 00:13:37.425 12:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.425 12:33:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.425 12:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.425 12:33:19 -- lvol/tasting.sh@160 -- # check_leftover_devices 00:13:37.425 12:33:19 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:37.425 12:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.425 12:33:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.425 12:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.425 12:33:19 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:37.425 12:33:19 -- lvol/common.sh@26 -- # jq length 00:13:37.683 12:33:19 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:37.683 12:33:19 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:37.683 12:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.683 12:33:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.683 12:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.683 12:33:19 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:37.683 12:33:20 -- lvol/common.sh@28 -- # jq length 00:13:37.683 ************************************ 00:13:37.683 END TEST test_delete_lvol_store_persistent_positive 00:13:37.683 ************************************ 00:13:37.683 12:33:20 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:37.683 00:13:37.683 real 0m1.397s 
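Most of the xtrace output in this test comes from the rpc_cmd_simple_data_json helper: it assembles one long jq -jr filter that prints each requested field as a "key value" line, then fills a bash associative array by reading those lines back, which is what the repeated read -r elem val / jq_out["$elem"]= traces above show. Stripped of the incremental trace, the pattern is roughly the following — a simplified sketch, not the helper's exact code, with the lvstore_test UUID from this run standing in:

    lvs_uuid=828490d3-56d0-4074-8d97-f1a8c48a9dcd
    declare -A jq_out
    while read -r key val; do
        jq_out["$key"]=$val
    done < <(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" \
             | jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","cluster_size"," ",.[0].cluster_size,"\n"')
    [[ ${jq_out[name]} == lvstore_test ]]                            # same style of check as tasting.sh@144 above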
00:13:37.683 user 0m0.233s 00:13:37.683 sys 0m0.036s 00:13:37.683 12:33:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.683 12:33:20 -- common/autotest_common.sh@10 -- # set +x 00:13:37.683 12:33:20 -- lvol/tasting.sh@172 -- # trap - SIGINT SIGTERM EXIT 00:13:37.683 12:33:20 -- lvol/tasting.sh@173 -- # killprocess 60289 00:13:37.683 12:33:20 -- common/autotest_common.sh@926 -- # '[' -z 60289 ']' 00:13:37.683 12:33:20 -- common/autotest_common.sh@930 -- # kill -0 60289 00:13:37.683 12:33:20 -- common/autotest_common.sh@931 -- # uname 00:13:37.683 12:33:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:37.683 12:33:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60289 00:13:37.683 killing process with pid 60289 00:13:37.683 12:33:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:37.683 12:33:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:37.683 12:33:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60289' 00:13:37.684 12:33:20 -- common/autotest_common.sh@945 -- # kill 60289 00:13:37.684 12:33:20 -- common/autotest_common.sh@950 -- # wait 60289 00:13:39.593 12:33:22 -- lvol/tasting.sh@174 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_1 00:13:39.593 ************************************ 00:13:39.593 END TEST lvol_tasting 00:13:39.593 ************************************ 00:13:39.593 00:13:39.593 real 0m16.621s 00:13:39.593 user 0m21.396s 00:13:39.593 sys 0m1.601s 00:13:39.593 12:33:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.593 12:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:39.593 12:33:22 -- lvol/lvol.sh@18 -- # run_test lvol_snapshot_clone /home/vagrant/spdk_repo/spdk/test/lvol/snapshot_clone.sh 00:13:39.593 12:33:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:39.593 12:33:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.593 12:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:39.593 ************************************ 00:13:39.593 START TEST lvol_snapshot_clone 00:13:39.593 ************************************ 00:13:39.593 12:33:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/snapshot_clone.sh 00:13:39.851 * Looking for test storage... 
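The shutdown sequence above is the teardown pattern used throughout these suites: the EXIT trap is cleared, killprocess signals the spdk_tgt pid and waits for it, and the test file removes its scratch AIO files. Reduced to its essentials it is roughly the following — a sketch of the idea rather than the exact helpers, with $spdk_pid and $testdir as placeholders for the values this run used (60289 and test/lvol respectively):

    trap 'kill "$spdk_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT   # installed right after spdk_tgt starts
    # ... tests run here ...
    trap - SIGINT SIGTERM EXIT
    kill "$spdk_pid" && wait "$spdk_pid"
    rm -f "$testdir"/aio_bdev_0 "$testdir"/aio_bdev_1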
00:13:39.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:13:39.851 12:33:22 -- lvol/snapshot_clone.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:39.851 12:33:22 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:39.851 12:33:22 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:39.851 12:33:22 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:39.851 12:33:22 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:39.851 12:33:22 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:39.851 12:33:22 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:39.851 12:33:22 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:39.851 12:33:22 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:39.851 12:33:22 -- lvol/snapshot_clone.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:39.851 12:33:22 -- bdev/nbd_common.sh@6 -- # set -e 00:13:39.851 12:33:22 -- lvol/snapshot_clone.sh@603 -- # spdk_pid=60576 00:13:39.851 12:33:22 -- lvol/snapshot_clone.sh@604 -- # trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:39.851 12:33:22 -- lvol/snapshot_clone.sh@602 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.851 12:33:22 -- lvol/snapshot_clone.sh@605 -- # waitforlisten 60576 00:13:39.851 12:33:22 -- common/autotest_common.sh@819 -- # '[' -z 60576 ']' 00:13:39.851 12:33:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.851 12:33:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:39.851 12:33:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.851 12:33:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:39.851 12:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:39.851 [2024-10-01 12:33:22.290337] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
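The byte values in the lvol/common.sh defaults sourced above are the MiB values scaled by 2^20: 4 MiB gives the 4194304-byte default cluster size, and 124 MiB gives the 130023424-byte default lvstore capacity (the pairing of MALLOC_SIZE_MB=128 with LVS_DEFAULT_CAPACITY_MB=124 suggests the 4 MiB difference is budgeted for lvstore metadata, though the log does not state that explicitly). The same convention produces the 20971520-byte lvol_size the first snapshot test derives from round_down 20 below:

    echo $(( 4   * 1024 * 1024 ))   # 4194304   == LVS_DEFAULT_CLUSTER_SIZE
    echo $(( 124 * 1024 * 1024 ))   # 130023424 == LVS_DEFAULT_CAPACITY
    echo $(( 20  * 1024 * 1024 ))   # 20971520  == lvol_size used by test_snapshot_compare_with_lvol_bdev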
00:13:39.851 [2024-10-01 12:33:22.290770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60576 ] 00:13:40.110 [2024-10-01 12:33:22.460715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.110 [2024-10-01 12:33:22.629463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.110 [2024-10-01 12:33:22.629725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.488 12:33:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:41.488 12:33:23 -- common/autotest_common.sh@852 -- # return 0 00:13:41.488 12:33:23 -- lvol/snapshot_clone.sh@606 -- # modprobe nbd 00:13:41.488 12:33:23 -- lvol/snapshot_clone.sh@608 -- # run_test test_snapshot_compare_with_lvol_bdev test_snapshot_compare_with_lvol_bdev 00:13:41.488 12:33:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:41.488 12:33:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.488 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.488 ************************************ 00:13:41.488 START TEST test_snapshot_compare_with_lvol_bdev 00:13:41.488 ************************************ 00:13:41.488 12:33:23 -- common/autotest_common.sh@1104 -- # test_snapshot_compare_with_lvol_bdev 00:13:41.488 12:33:23 -- lvol/snapshot_clone.sh@13 -- # rpc_cmd bdev_malloc_create 128 512 00:13:41.488 12:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.488 12:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 12:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@13 -- # malloc_name=Malloc0 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@14 -- # rpc_cmd bdev_lvol_create_lvstore Malloc0 lvs_test 00:13:41.747 12:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.747 12:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 12:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@14 -- # lvs_uuid=ec4b926f-6749-4a60-8397-46d3f771a2c5 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@17 -- # round_down 20 00:13:41.747 12:33:24 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:41.747 12:33:24 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:41.747 12:33:24 -- lvol/common.sh@36 -- # echo 20 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@17 -- # lvol_size_mb=20 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@18 -- # lvol_size=20971520 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@20 -- # rpc_cmd bdev_lvol_create -u ec4b926f-6749-4a60-8397-46d3f771a2c5 lvol_test1 20 -t 00:13:41.747 12:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.747 12:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 12:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@20 -- # lvol_uuid1=e6392ea2-3b71-4526-a0c9-4956e838effa 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@21 -- # rpc_cmd bdev_lvol_create -u ec4b926f-6749-4a60-8397-46d3f771a2c5 lvol_test2 20 00:13:41.747 12:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.747 12:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:41.747 12:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@21 -- # 
lvol_uuid2=c475904b-c437-4d3e-9078-8c750b54d6a1 00:13:41.747 12:33:24 -- lvol/snapshot_clone.sh@24 -- # nbd_start_disks /var/tmp/spdk.sock e6392ea2-3b71-4526-a0c9-4956e838effa /dev/nbd0 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('e6392ea2-3b71-4526-a0c9-4956e838effa') 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@12 -- # local i 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.747 12:33:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk e6392ea2-3b71-4526-a0c9-4956e838effa /dev/nbd0 00:13:42.005 /dev/nbd0 00:13:42.005 12:33:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.005 12:33:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.005 12:33:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:42.005 12:33:24 -- common/autotest_common.sh@857 -- # local i 00:13:42.005 12:33:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:42.005 12:33:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:42.005 12:33:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:42.005 12:33:24 -- common/autotest_common.sh@861 -- # break 00:13:42.005 12:33:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:42.005 12:33:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:42.005 12:33:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:42.005 1+0 records in 00:13:42.005 1+0 records out 00:13:42.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344114 s, 11.9 MB/s 00:13:42.005 12:33:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:42.005 12:33:24 -- common/autotest_common.sh@874 -- # size=4096 00:13:42.005 12:33:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:42.005 12:33:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:42.005 12:33:24 -- common/autotest_common.sh@877 -- # return 0 00:13:42.005 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.005 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.006 12:33:24 -- lvol/snapshot_clone.sh@25 -- # count=2 00:13:42.006 12:33:24 -- lvol/snapshot_clone.sh@26 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 count=2 00:13:42.006 2+0 records in 00:13:42.006 2+0 records out 00:13:42.006 8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.0531237 s, 158 MB/s 00:13:42.006 12:33:24 -- lvol/snapshot_clone.sh@27 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:42.006 12:33:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.006 12:33:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.006 12:33:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.006 12:33:24 -- bdev/nbd_common.sh@51 -- # local i 00:13:42.006 12:33:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.006 12:33:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.264 12:33:24 -- bdev/nbd_common.sh@55 -- 
# basename /dev/nbd0 00:13:42.265 12:33:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.265 12:33:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.265 12:33:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.265 12:33:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.265 12:33:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@41 -- # break 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.528 12:33:24 -- lvol/snapshot_clone.sh@29 -- # nbd_start_disks /var/tmp/spdk.sock c475904b-c437-4d3e-9078-8c750b54d6a1 /dev/nbd0 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('c475904b-c437-4d3e-9078-8c750b54d6a1') 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@12 -- # local i 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.528 12:33:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk c475904b-c437-4d3e-9078-8c750b54d6a1 /dev/nbd0 00:13:42.787 /dev/nbd0 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.787 12:33:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:42.787 12:33:25 -- common/autotest_common.sh@857 -- # local i 00:13:42.787 12:33:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:42.787 12:33:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:42.787 12:33:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:42.787 12:33:25 -- common/autotest_common.sh@861 -- # break 00:13:42.787 12:33:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:42.787 12:33:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:42.787 12:33:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:42.787 1+0 records in 00:13:42.787 1+0 records out 00:13:42.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288425 s, 14.2 MB/s 00:13:42.787 12:33:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:42.787 12:33:25 -- common/autotest_common.sh@874 -- # size=4096 00:13:42.787 12:33:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:42.787 12:33:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:42.787 12:33:25 -- common/autotest_common.sh@877 -- # return 0 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.787 12:33:25 -- lvol/snapshot_clone.sh@30 -- # count=5 00:13:42.787 12:33:25 -- lvol/snapshot_clone.sh@31 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 count=5 00:13:42.787 5+0 records in 00:13:42.787 5+0 records out 00:13:42.787 20971520 bytes (21 MB, 20 MiB) copied, 0.137456 s, 153 MB/s 00:13:42.787 12:33:25 -- lvol/snapshot_clone.sh@32 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.787 12:33:25 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@51 -- # local i 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.787 12:33:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@41 -- # break 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.046 12:33:25 -- lvol/snapshot_clone.sh@35 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test1 lvol_snapshot1 00:13:43.046 12:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.046 12:33:25 -- common/autotest_common.sh@10 -- # set +x 00:13:43.046 12:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.046 12:33:25 -- lvol/snapshot_clone.sh@35 -- # snapshot_uuid1=ac247762-ab03-4249-afa2-e0530f629309 00:13:43.046 12:33:25 -- lvol/snapshot_clone.sh@36 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test2 lvol_snapshot2 00:13:43.046 12:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.046 12:33:25 -- common/autotest_common.sh@10 -- # set +x 00:13:43.046 12:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.046 12:33:25 -- lvol/snapshot_clone.sh@36 -- # snapshot_uuid2=bf176e7a-9439-4f17-a4df-f87f4af12383 00:13:43.046 12:33:25 -- lvol/snapshot_clone.sh@38 -- # nbd_start_disks /var/tmp/spdk.sock ac247762-ab03-4249-afa2-e0530f629309 /dev/nbd0 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('ac247762-ab03-4249-afa2-e0530f629309') 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@12 -- # local i 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.046 12:33:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk ac247762-ab03-4249-afa2-e0530f629309 /dev/nbd0 00:13:43.305 /dev/nbd0 00:13:43.305 12:33:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.305 12:33:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.305 12:33:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:43.305 12:33:25 -- common/autotest_common.sh@857 -- # local i 00:13:43.305 12:33:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:43.305 12:33:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:43.305 12:33:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:43.305 12:33:25 -- common/autotest_common.sh@861 -- # break 00:13:43.305 12:33:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:43.305 12:33:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:43.305 12:33:25 -- common/autotest_common.sh@873 -- 
# dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:43.305 1+0 records in 00:13:43.305 1+0 records out 00:13:43.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461137 s, 8.9 MB/s 00:13:43.305 12:33:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:43.305 12:33:25 -- common/autotest_common.sh@874 -- # size=4096 00:13:43.305 12:33:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:43.305 12:33:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:43.305 12:33:25 -- common/autotest_common.sh@877 -- # return 0 00:13:43.305 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.305 12:33:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.305 12:33:25 -- lvol/snapshot_clone.sh@41 -- # count=5 00:13:43.305 12:33:25 -- lvol/snapshot_clone.sh@42 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4194304 count=5 00:13:43.563 dd: error writing '/dev/nbd0': Input/output error 00:13:43.563 1+0 records in 00:13:43.563 0+0 records out 00:13:43.563 0 bytes copied, 0.0618913 s, 0.0 kB/s 00:13:43.563 12:33:25 -- lvol/snapshot_clone.sh@43 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.564 12:33:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.564 12:33:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.564 12:33:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.564 12:33:25 -- bdev/nbd_common.sh@51 -- # local i 00:13:43.564 12:33:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.564 12:33:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@41 -- # break 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.822 12:33:26 -- lvol/snapshot_clone.sh@46 -- # local lvol_nbd1=/dev/nbd0 lvol_nbd2=/dev/nbd1 00:13:43.822 12:33:26 -- lvol/snapshot_clone.sh@47 -- # local snapshot_nbd1=/dev/nbd2 snapshot_nbd2=/dev/nbd3 00:13:43.822 12:33:26 -- lvol/snapshot_clone.sh@49 -- # nbd_start_disks /var/tmp/spdk.sock e6392ea2-3b71-4526-a0c9-4956e838effa /dev/nbd0 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('e6392ea2-3b71-4526-a0c9-4956e838effa') 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@12 -- # local i 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.822 12:33:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk e6392ea2-3b71-4526-a0c9-4956e838effa /dev/nbd0 00:13:44.081 /dev/nbd0 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.081 12:33:26 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.081 12:33:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:44.081 12:33:26 -- common/autotest_common.sh@857 -- # local i 00:13:44.081 12:33:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:44.081 12:33:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:44.081 12:33:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:44.081 12:33:26 -- common/autotest_common.sh@861 -- # break 00:13:44.081 12:33:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:44.081 12:33:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:44.081 12:33:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:44.081 1+0 records in 00:13:44.081 1+0 records out 00:13:44.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394487 s, 10.4 MB/s 00:13:44.081 12:33:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:44.081 12:33:26 -- common/autotest_common.sh@874 -- # size=4096 00:13:44.081 12:33:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:44.081 12:33:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:44.081 12:33:26 -- common/autotest_common.sh@877 -- # return 0 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.081 12:33:26 -- lvol/snapshot_clone.sh@50 -- # nbd_start_disks /var/tmp/spdk.sock c475904b-c437-4d3e-9078-8c750b54d6a1 /dev/nbd1 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('c475904b-c437-4d3e-9078-8c750b54d6a1') 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@12 -- # local i 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.081 12:33:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk c475904b-c437-4d3e-9078-8c750b54d6a1 /dev/nbd1 00:13:44.340 /dev/nbd1 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.340 12:33:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:44.340 12:33:26 -- common/autotest_common.sh@857 -- # local i 00:13:44.340 12:33:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:44.340 12:33:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:44.340 12:33:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:44.340 12:33:26 -- common/autotest_common.sh@861 -- # break 00:13:44.340 12:33:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:44.340 12:33:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:44.340 12:33:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:44.340 1+0 records in 00:13:44.340 1+0 records out 00:13:44.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003734 s, 11.0 MB/s 00:13:44.340 12:33:26 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:44.340 12:33:26 -- common/autotest_common.sh@874 -- # size=4096 00:13:44.340 12:33:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:44.340 12:33:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:44.340 12:33:26 -- common/autotest_common.sh@877 -- # return 0 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.340 12:33:26 -- lvol/snapshot_clone.sh@51 -- # nbd_start_disks /var/tmp/spdk.sock ac247762-ab03-4249-afa2-e0530f629309 /dev/nbd2 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('ac247762-ab03-4249-afa2-e0530f629309') 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd2') 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@12 -- # local i 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.340 12:33:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.341 12:33:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk ac247762-ab03-4249-afa2-e0530f629309 /dev/nbd2 00:13:44.600 /dev/nbd2 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:44.600 12:33:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:13:44.600 12:33:27 -- common/autotest_common.sh@857 -- # local i 00:13:44.600 12:33:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:44.600 12:33:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:44.600 12:33:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:13:44.600 12:33:27 -- common/autotest_common.sh@861 -- # break 00:13:44.600 12:33:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:44.600 12:33:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:44.600 12:33:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:44.600 1+0 records in 00:13:44.600 1+0 records out 00:13:44.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287939 s, 14.2 MB/s 00:13:44.600 12:33:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:44.600 12:33:27 -- common/autotest_common.sh@874 -- # size=4096 00:13:44.600 12:33:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:44.600 12:33:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:44.600 12:33:27 -- common/autotest_common.sh@877 -- # return 0 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.600 12:33:27 -- lvol/snapshot_clone.sh@52 -- # nbd_start_disks /var/tmp/spdk.sock bf176e7a-9439-4f17-a4df-f87f4af12383 /dev/nbd3 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('bf176e7a-9439-4f17-a4df-f87f4af12383') 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd3') 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@12 -- # local i 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.600 12:33:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk bf176e7a-9439-4f17-a4df-f87f4af12383 /dev/nbd3 00:13:44.859 /dev/nbd3 00:13:44.859 12:33:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:44.859 12:33:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:44.859 12:33:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:13:44.859 12:33:27 -- common/autotest_common.sh@857 -- # local i 00:13:44.859 12:33:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:44.859 12:33:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:44.859 12:33:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:13:44.859 12:33:27 -- common/autotest_common.sh@861 -- # break 00:13:44.859 12:33:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:44.859 12:33:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:44.859 12:33:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:44.859 1+0 records in 00:13:44.859 1+0 records out 00:13:44.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274701 s, 14.9 MB/s 00:13:44.859 12:33:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:45.117 12:33:27 -- common/autotest_common.sh@874 -- # size=4096 00:13:45.117 12:33:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:45.117 12:33:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:45.117 12:33:27 -- common/autotest_common.sh@877 -- # return 0 00:13:45.117 12:33:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.117 12:33:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:45.117 12:33:27 -- lvol/snapshot_clone.sh@54 -- # cmp /dev/nbd0 /dev/nbd2 00:13:45.117 12:33:27 -- lvol/snapshot_clone.sh@55 -- # cmp /dev/nbd1 /dev/nbd3 00:13:45.117 12:33:27 -- lvol/snapshot_clone.sh@58 -- # count=2 00:13:45.117 12:33:27 -- lvol/snapshot_clone.sh@59 -- # dd if=/dev/urandom of=/dev/nbd0 oflag=direct seek=2 bs=4194304 count=2 00:13:45.375 2+0 records in 00:13:45.375 2+0 records out 00:13:45.375 8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.0594701 s, 141 MB/s 00:13:45.375 12:33:27 -- lvol/snapshot_clone.sh@62 -- # cmp /dev/nbd0 /dev/nbd2 00:13:45.375 /dev/nbd0 /dev/nbd2 differ: byte 8388609, line 32668 00:13:45.375 12:33:27 -- lvol/snapshot_clone.sh@65 -- # for bdev in "${!lvol_nbd@}" "${!snapshot_nbd@}" 00:13:45.375 12:33:27 -- lvol/snapshot_clone.sh@66 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:45.375 12:33:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.375 12:33:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.375 12:33:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.375 12:33:27 -- bdev/nbd_common.sh@51 -- # local i 00:13:45.375 12:33:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.375 12:33:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.633 12:33:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.633 12:33:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.633 12:33:27 -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:13:45.633 12:33:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.633 12:33:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.634 12:33:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@41 -- # break 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.634 12:33:28 -- lvol/snapshot_clone.sh@65 -- # for bdev in "${!lvol_nbd@}" "${!snapshot_nbd@}" 00:13:45.634 12:33:28 -- lvol/snapshot_clone.sh@66 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@51 -- # local i 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.634 12:33:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@41 -- # break 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.892 12:33:28 -- lvol/snapshot_clone.sh@65 -- # for bdev in "${!lvol_nbd@}" "${!snapshot_nbd@}" 00:13:45.892 12:33:28 -- lvol/snapshot_clone.sh@66 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd2 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd2') 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@51 -- # local i 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.892 12:33:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd2 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@41 -- # break 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.150 12:33:28 -- lvol/snapshot_clone.sh@65 -- # for bdev in "${!lvol_nbd@}" "${!snapshot_nbd@}" 00:13:46.150 12:33:28 -- lvol/snapshot_clone.sh@66 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd3 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd3') 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@51 -- # local i 00:13:46.150 12:33:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.151 12:33:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd3 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@41 -- # break 00:13:46.410 12:33:28 -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@69 -- # rpc_cmd bdev_lvol_delete e6392ea2-3b71-4526-a0c9-4956e838effa 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@70 -- # rpc_cmd bdev_get_bdevs -b e6392ea2-3b71-4526-a0c9-4956e838effa 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 [2024-10-01 12:33:28.800140] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e6392ea2-3b71-4526-a0c9-4956e838effa 00:13:46.410 request: 00:13:46.410 { 00:13:46.410 "name": "e6392ea2-3b71-4526-a0c9-4956e838effa", 00:13:46.410 "method": "bdev_get_bdevs", 00:13:46.410 "req_id": 1 00:13:46.410 } 00:13:46.410 Got JSON-RPC error response 00:13:46.410 response: 00:13:46.410 { 00:13:46.410 "code": -19, 00:13:46.410 "message": "No such device" 00:13:46.410 } 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@71 -- # rpc_cmd bdev_lvol_delete ac247762-ab03-4249-afa2-e0530f629309 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@72 -- # rpc_cmd bdev_get_bdevs -b ac247762-ab03-4249-afa2-e0530f629309 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 [2024-10-01 12:33:28.820136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: ac247762-ab03-4249-afa2-e0530f629309 00:13:46.410 request: 00:13:46.410 { 00:13:46.410 "name": "ac247762-ab03-4249-afa2-e0530f629309", 00:13:46.410 "method": "bdev_get_bdevs", 00:13:46.410 "req_id": 1 00:13:46.410 } 00:13:46.410 Got JSON-RPC error response 00:13:46.410 response: 00:13:46.410 { 00:13:46.410 "code": -19, 00:13:46.410 "message": "No such device" 00:13:46.410 } 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@73 -- # rpc_cmd bdev_lvol_delete c475904b-c437-4d3e-9078-8c750b54d6a1 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@74 -- # rpc_cmd bdev_get_bdevs -b c475904b-c437-4d3e-9078-8c750b54d6a1 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 [2024-10-01 12:33:28.840137] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: c475904b-c437-4d3e-9078-8c750b54d6a1 00:13:46.410 request: 00:13:46.410 { 00:13:46.410 "name": "c475904b-c437-4d3e-9078-8c750b54d6a1", 00:13:46.410 "method": "bdev_get_bdevs", 00:13:46.410 "req_id": 1 00:13:46.410 } 00:13:46.410 Got JSON-RPC error response 00:13:46.410 response: 00:13:46.410 { 00:13:46.410 "code": -19, 00:13:46.410 "message": "No such device" 00:13:46.410 } 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@75 -- # rpc_cmd bdev_lvol_delete bf176e7a-9439-4f17-a4df-f87f4af12383 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@76 -- # rpc_cmd bdev_get_bdevs -b bf176e7a-9439-4f17-a4df-f87f4af12383 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 [2024-10-01 12:33:28.864178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: bf176e7a-9439-4f17-a4df-f87f4af12383 00:13:46.410 request: 00:13:46.410 { 00:13:46.410 "name": "bf176e7a-9439-4f17-a4df-f87f4af12383", 00:13:46.410 "method": "bdev_get_bdevs", 00:13:46.410 "req_id": 1 00:13:46.410 } 00:13:46.410 Got JSON-RPC error response 00:13:46.410 response: 00:13:46.410 { 00:13:46.410 "code": -19, 00:13:46.410 "message": "No such device" 00:13:46.410 } 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -u ec4b926f-6749-4a60-8397-46d3f771a2c5 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@78 -- # rpc_cmd bdev_lvol_get_lvstores -u ec4b926f-6749-4a60-8397-46d3f771a2c5 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 request: 00:13:46.410 { 00:13:46.410 "uuid": "ec4b926f-6749-4a60-8397-46d3f771a2c5", 00:13:46.410 "method": "bdev_lvol_get_lvstores", 00:13:46.410 "req_id": 1 00:13:46.410 } 00:13:46.410 Got JSON-RPC error response 00:13:46.410 response: 00:13:46.410 { 00:13:46.410 "code": -19, 00:13:46.410 "message": "No such device" 00:13:46.410 } 00:13:46.410 12:33:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:13:46.410 12:33:28 -- lvol/snapshot_clone.sh@79 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:46.410 12:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.410 12:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.669 12:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.669 12:33:29 -- lvol/snapshot_clone.sh@80 -- # check_leftover_devices 00:13:46.669 12:33:29 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:46.669 12:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.669 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:46.928 12:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.928 12:33:29 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:46.928 12:33:29 -- lvol/common.sh@26 -- # jq length 00:13:46.928 12:33:29 -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:46.929 12:33:29 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:46.929 12:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.929 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:46.929 12:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.929 12:33:29 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:46.929 12:33:29 -- lvol/common.sh@28 -- # jq length 00:13:46.929 ************************************ 00:13:46.929 END TEST test_snapshot_compare_with_lvol_bdev 00:13:46.929 ************************************ 00:13:46.929 12:33:29 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:46.929 00:13:46.929 real 0m5.332s 00:13:46.929 user 0m3.494s 00:13:46.929 sys 0m0.812s 00:13:46.929 12:33:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.929 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:46.929 12:33:29 -- lvol/snapshot_clone.sh@609 -- # run_test test_create_snapshot_with_io test_create_snapshot_with_io 00:13:46.929 12:33:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:46.929 12:33:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:46.929 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:46.929 ************************************ 00:13:46.929 START TEST test_create_snapshot_with_io 00:13:46.929 ************************************ 00:13:46.929 12:33:29 -- common/autotest_common.sh@1104 -- # test_create_snapshot_with_io 00:13:46.929 12:33:29 -- lvol/snapshot_clone.sh@86 -- # rpc_cmd bdev_malloc_create 128 512 00:13:46.929 12:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.929 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:47.193 12:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@86 -- # malloc_name=Malloc1 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@87 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test 00:13:47.193 12:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.193 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:47.193 12:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@87 -- # lvs_uuid=4920d3bf-811a-46bc-9adc-ecad59d4bf5b 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@90 -- # round_down 62 00:13:47.193 12:33:29 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:13:47.193 12:33:29 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:13:47.193 12:33:29 -- lvol/common.sh@36 -- # echo 60 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@90 -- # lvol_size_mb=60 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@91 -- # lvol_size=62914560 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@93 -- # rpc_cmd bdev_lvol_create -u 4920d3bf-811a-46bc-9adc-ecad59d4bf5b lvol_test 60 -t 00:13:47.193 12:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.193 12:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:47.193 12:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@93 -- # lvol_uuid=c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2 00:13:47.193 12:33:29 -- lvol/snapshot_clone.sh@96 -- # nbd_start_disks /var/tmp/spdk.sock c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2 /dev/nbd0 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2') 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@12 -- # local i 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.193 12:33:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2 /dev/nbd0 00:13:47.459 /dev/nbd0 00:13:47.459 12:33:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:47.459 12:33:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:47.459 12:33:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:47.459 12:33:29 -- common/autotest_common.sh@857 -- # local i 00:13:47.459 12:33:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:47.459 12:33:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:47.459 12:33:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:47.459 12:33:29 -- common/autotest_common.sh@861 -- # break 00:13:47.459 12:33:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:47.459 12:33:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:47.459 12:33:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:13:47.459 1+0 records in 00:13:47.459 1+0 records out 00:13:47.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269081 s, 15.2 MB/s 00:13:47.459 12:33:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:47.459 12:33:29 -- common/autotest_common.sh@874 -- # size=4096 00:13:47.459 12:33:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:13:47.459 12:33:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:47.459 12:33:29 -- common/autotest_common.sh@877 -- # return 0 00:13:47.459 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.459 12:33:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.459 12:33:29 -- lvol/snapshot_clone.sh@98 -- # fio_proc=60814 00:13:47.459 12:33:29 -- lvol/snapshot_clone.sh@99 -- # sleep 4 00:13:47.459 12:33:29 -- lvol/snapshot_clone.sh@97 -- # run_fio_test /dev/nbd0 0 62914560 write 0xcc '--time_based --runtime=16' 00:13:47.459 12:33:29 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:13:47.459 12:33:29 -- lvol/common.sh@41 -- # local offset=0 00:13:47.459 12:33:29 -- lvol/common.sh@42 -- # local size=62914560 00:13:47.459 12:33:29 -- lvol/common.sh@43 -- # local rw=write 00:13:47.459 12:33:29 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:47.459 12:33:29 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=16' 00:13:47.459 12:33:29 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:47.459 12:33:29 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:47.459 12:33:29 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:47.459 12:33:29 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --time_based --runtime=16 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:47.459 12:33:29 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --time_based --runtime=16 --do_verify=1 
--verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:47.459 fio: verification read phase will never start because write phase uses all of runtime 00:13:47.459 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:47.459 fio-3.35 00:13:47.459 Starting 1 process 00:13:51.647 12:33:33 -- lvol/snapshot_clone.sh@101 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot 00:13:51.647 12:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.647 12:33:33 -- common/autotest_common.sh@10 -- # set +x 00:13:51.647 12:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.647 12:33:33 -- lvol/snapshot_clone.sh@101 -- # snapshot_uuid=96a4427c-1f9d-4675-8f9b-c7fc2d03c781 00:13:51.647 12:33:33 -- lvol/snapshot_clone.sh@102 -- # wait 60814 00:14:03.857 00:14:03.857 fio_test: (groupid=0, jobs=1): err= 0: pid=60818: Tue Oct 1 12:33:46 2024 00:14:03.857 write: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(755MiB/16001msec); 0 zone resets 00:14:03.857 clat (usec): min=58, max=3294, avg=81.14, stdev=33.32 00:14:03.858 lat (usec): min=59, max=3295, avg=82.02, stdev=33.38 00:14:03.858 clat percentiles (usec): 00:14:03.858 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 71], 00:14:03.858 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 80], 00:14:03.858 | 70.00th=[ 85], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 109], 00:14:03.858 | 99.00th=[ 126], 99.50th=[ 135], 99.90th=[ 178], 99.95th=[ 474], 00:14:03.858 | 99.99th=[ 1795] 00:14:03.858 bw ( KiB/s): min=44456, max=55592, per=100.00%, avg=48412.94, stdev=3073.83, samples=31 00:14:03.858 iops : min=11114, max=13898, avg=12103.23, stdev=768.46, samples=31 00:14:03.858 lat (usec) : 100=89.59%, 250=10.35%, 500=0.02%, 750=0.01%, 1000=0.01% 00:14:03.858 lat (msec) : 2=0.03%, 4=0.01% 00:14:03.858 cpu : usr=4.04%, sys=7.34%, ctx=199205, majf=0, minf=386 00:14:03.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.858 issued rwts: total=0,193333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.858 00:14:03.858 Run status group 0 (all jobs): 00:14:03.858 WRITE: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=755MiB (792MB), run=16001-16001msec 00:14:03.858 00:14:03.858 Disk stats (read/write): 00:14:03.858 nbd0: ios=10/192052, merge=0/0, ticks=1/14188, in_queue=14189, util=99.47% 00:14:03.858 12:33:46 -- lvol/snapshot_clone.sh@105 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@51 -- # local i 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.858 12:33:46 
-- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@41 -- # break 00:14:03.858 12:33:46 -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.858 12:33:46 -- lvol/snapshot_clone.sh@106 -- # rpc_cmd bdev_lvol_delete c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2 00:14:03.858 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.858 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:03.858 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.858 12:33:46 -- lvol/snapshot_clone.sh@107 -- # rpc_cmd bdev_get_bdevs -b c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2 00:14:03.858 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.858 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:03.858 [2024-10-01 12:33:46.340868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2 00:14:03.858 request: 00:14:03.858 { 00:14:03.858 "name": "c8f0ddd4-c712-40bc-b5bd-39dee9bad9f2", 00:14:03.858 "method": "bdev_get_bdevs", 00:14:03.858 "req_id": 1 00:14:03.858 } 00:14:03.858 Got JSON-RPC error response 00:14:03.858 response: 00:14:03.858 { 00:14:03.858 "code": -19, 00:14:03.858 "message": "No such device" 00:14:03.858 } 00:14:03.858 12:33:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:03.858 12:33:46 -- lvol/snapshot_clone.sh@108 -- # rpc_cmd bdev_lvol_delete 96a4427c-1f9d-4675-8f9b-c7fc2d03c781 00:14:03.858 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.858 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:03.858 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.858 12:33:46 -- lvol/snapshot_clone.sh@109 -- # rpc_cmd bdev_get_bdevs -b 96a4427c-1f9d-4675-8f9b-c7fc2d03c781 00:14:03.858 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.858 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:03.858 [2024-10-01 12:33:46.367831] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 96a4427c-1f9d-4675-8f9b-c7fc2d03c781 00:14:03.858 request: 00:14:03.858 { 00:14:03.858 "name": "96a4427c-1f9d-4675-8f9b-c7fc2d03c781", 00:14:03.858 "method": "bdev_get_bdevs", 00:14:03.858 "req_id": 1 00:14:03.858 } 00:14:03.858 Got JSON-RPC error response 00:14:03.858 response: 00:14:03.858 { 00:14:03.858 "code": -19, 00:14:03.858 "message": "No such device" 00:14:03.858 } 00:14:03.858 12:33:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:03.858 12:33:46 -- lvol/snapshot_clone.sh@110 -- # rpc_cmd bdev_lvol_delete_lvstore -u 4920d3bf-811a-46bc-9adc-ecad59d4bf5b 00:14:03.858 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.858 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.117 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.117 12:33:46 -- lvol/snapshot_clone.sh@111 -- # rpc_cmd bdev_lvol_get_lvstores -u 4920d3bf-811a-46bc-9adc-ecad59d4bf5b 00:14:04.117 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.117 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.117 request: 00:14:04.117 { 00:14:04.117 "uuid": "4920d3bf-811a-46bc-9adc-ecad59d4bf5b", 00:14:04.117 "method": "bdev_lvol_get_lvstores", 00:14:04.117 "req_id": 1 00:14:04.117 } 00:14:04.117 Got JSON-RPC error response 00:14:04.117 response: 00:14:04.117 { 00:14:04.117 "code": -19, 00:14:04.117 "message": "No such device" 00:14:04.117 } 00:14:04.117 
12:33:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:04.117 12:33:46 -- lvol/snapshot_clone.sh@112 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:04.117 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.117 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.376 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.376 12:33:46 -- lvol/snapshot_clone.sh@113 -- # check_leftover_devices 00:14:04.376 12:33:46 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.376 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.376 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.376 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.376 12:33:46 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.376 12:33:46 -- lvol/common.sh@26 -- # jq length 00:14:04.376 12:33:46 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.376 12:33:46 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.376 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.376 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.376 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.376 12:33:46 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.376 12:33:46 -- lvol/common.sh@28 -- # jq length 00:14:04.376 ************************************ 00:14:04.376 END TEST test_create_snapshot_with_io 00:14:04.376 ************************************ 00:14:04.376 12:33:46 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:04.376 00:14:04.376 real 0m17.449s 00:14:04.376 user 0m1.344s 00:14:04.376 sys 0m1.349s 00:14:04.376 12:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.376 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.376 12:33:46 -- lvol/snapshot_clone.sh@610 -- # run_test test_create_snapshot_of_snapshot test_create_snapshot_of_snapshot 00:14:04.376 12:33:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:04.376 12:33:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.376 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.376 ************************************ 00:14:04.376 START TEST test_create_snapshot_of_snapshot 00:14:04.376 ************************************ 00:14:04.376 12:33:46 -- common/autotest_common.sh@1104 -- # test_create_snapshot_of_snapshot 00:14:04.376 12:33:46 -- lvol/snapshot_clone.sh@118 -- # rpc_cmd bdev_malloc_create 128 512 00:14:04.376 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.376 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.635 12:33:46 -- lvol/snapshot_clone.sh@118 -- # malloc_name=Malloc2 00:14:04.635 12:33:46 -- lvol/snapshot_clone.sh@119 -- # rpc_cmd bdev_lvol_create_lvstore Malloc2 lvs_test 00:14:04.635 12:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 12:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.635 12:33:46 -- lvol/snapshot_clone.sh@119 -- # lvs_uuid=bb2b9bc4-70eb-455c-8740-488da494ebc2 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@122 -- # round_down 41 00:14:04.635 12:33:47 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:04.635 12:33:47 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:04.635 12:33:47 -- lvol/common.sh@36 -- # echo 40 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@122 -- # lvol_size_mb=40 00:14:04.635 12:33:47 -- 
lvol/snapshot_clone.sh@124 -- # rpc_cmd bdev_lvol_create -u bb2b9bc4-70eb-455c-8740-488da494ebc2 lvol_test 40 00:14:04.635 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@124 -- # lvol_uuid=c1f65baa-9b13-48f7-a271-35ec54ba2bec 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@125 -- # rpc_cmd bdev_get_bdevs -b c1f65baa-9b13-48f7-a271-35ec54ba2bec 00:14:04.635 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@125 -- # lvol='[ 00:14:04.635 { 00:14:04.635 "name": "c1f65baa-9b13-48f7-a271-35ec54ba2bec", 00:14:04.635 "aliases": [ 00:14:04.635 "lvs_test/lvol_test" 00:14:04.635 ], 00:14:04.635 "product_name": "Logical Volume", 00:14:04.635 "block_size": 512, 00:14:04.635 "num_blocks": 81920, 00:14:04.635 "uuid": "c1f65baa-9b13-48f7-a271-35ec54ba2bec", 00:14:04.635 "assigned_rate_limits": { 00:14:04.635 "rw_ios_per_sec": 0, 00:14:04.635 "rw_mbytes_per_sec": 0, 00:14:04.635 "r_mbytes_per_sec": 0, 00:14:04.635 "w_mbytes_per_sec": 0 00:14:04.635 }, 00:14:04.635 "claimed": false, 00:14:04.635 "zoned": false, 00:14:04.635 "supported_io_types": { 00:14:04.635 "read": true, 00:14:04.635 "write": true, 00:14:04.635 "unmap": true, 00:14:04.635 "write_zeroes": true, 00:14:04.635 "flush": false, 00:14:04.635 "reset": true, 00:14:04.635 "compare": false, 00:14:04.635 "compare_and_write": false, 00:14:04.635 "abort": false, 00:14:04.635 "nvme_admin": false, 00:14:04.635 "nvme_io": false 00:14:04.635 }, 00:14:04.635 "memory_domains": [ 00:14:04.635 { 00:14:04.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.635 "dma_device_type": 2 00:14:04.635 } 00:14:04.635 ], 00:14:04.635 "driver_specific": { 00:14:04.635 "lvol": { 00:14:04.635 "lvol_store_uuid": "bb2b9bc4-70eb-455c-8740-488da494ebc2", 00:14:04.635 "base_bdev": "Malloc2", 00:14:04.635 "thin_provision": false, 00:14:04.635 "snapshot": false, 00:14:04.635 "clone": false, 00:14:04.635 "esnap_clone": false 00:14:04.635 } 00:14:04.635 } 00:14:04.635 } 00:14:04.635 ]' 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@128 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot 00:14:04.635 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@128 -- # snapshot_uuid=4efd60b7-64b1-46fd-bb29-5d5440bece35 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@132 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_snapshot lvol_snapshot2 00:14:04.635 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 request: 00:14:04.635 { 00:14:04.635 "lvol_name": "lvs_test/lvol_snapshot", 00:14:04.635 "snapshot_name": "lvol_snapshot2", 00:14:04.635 "method": "bdev_lvol_snapshot", 00:14:04.635 "req_id": 1 00:14:04.635 } 00:14:04.635 Got JSON-RPC error response 00:14:04.635 response: 00:14:04.635 { 00:14:04.635 "code": -32602, 00:14:04.635 "message": "Invalid argument" 00:14:04.635 } 00:14:04.635 12:33:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@135 -- # 
rpc_cmd bdev_lvol_delete c1f65baa-9b13-48f7-a271-35ec54ba2bec 00:14:04.635 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.635 12:33:47 -- lvol/snapshot_clone.sh@136 -- # rpc_cmd bdev_get_bdevs -b c1f65baa-9b13-48f7-a271-35ec54ba2bec 00:14:04.635 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.635 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.635 [2024-10-01 12:33:47.074509] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: c1f65baa-9b13-48f7-a271-35ec54ba2bec 00:14:04.635 request: 00:14:04.635 { 00:14:04.636 "name": "c1f65baa-9b13-48f7-a271-35ec54ba2bec", 00:14:04.636 "method": "bdev_get_bdevs", 00:14:04.636 "req_id": 1 00:14:04.636 } 00:14:04.636 Got JSON-RPC error response 00:14:04.636 response: 00:14:04.636 { 00:14:04.636 "code": -19, 00:14:04.636 "message": "No such device" 00:14:04.636 } 00:14:04.636 12:33:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:04.636 12:33:47 -- lvol/snapshot_clone.sh@137 -- # rpc_cmd bdev_lvol_delete 4efd60b7-64b1-46fd-bb29-5d5440bece35 00:14:04.636 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.636 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.636 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.636 12:33:47 -- lvol/snapshot_clone.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4efd60b7-64b1-46fd-bb29-5d5440bece35 00:14:04.636 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.636 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.636 [2024-10-01 12:33:47.099029] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4efd60b7-64b1-46fd-bb29-5d5440bece35 00:14:04.636 request: 00:14:04.636 { 00:14:04.636 "name": "4efd60b7-64b1-46fd-bb29-5d5440bece35", 00:14:04.636 "method": "bdev_get_bdevs", 00:14:04.636 "req_id": 1 00:14:04.636 } 00:14:04.636 Got JSON-RPC error response 00:14:04.636 response: 00:14:04.636 { 00:14:04.636 "code": -19, 00:14:04.636 "message": "No such device" 00:14:04.636 } 00:14:04.636 12:33:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:04.636 12:33:47 -- lvol/snapshot_clone.sh@139 -- # rpc_cmd bdev_lvol_delete_lvstore -u bb2b9bc4-70eb-455c-8740-488da494ebc2 00:14:04.636 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.636 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.636 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.636 12:33:47 -- lvol/snapshot_clone.sh@140 -- # rpc_cmd bdev_lvol_get_lvstores -u bb2b9bc4-70eb-455c-8740-488da494ebc2 00:14:04.636 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.636 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.636 request: 00:14:04.636 { 00:14:04.636 "uuid": "bb2b9bc4-70eb-455c-8740-488da494ebc2", 00:14:04.636 "method": "bdev_lvol_get_lvstores", 00:14:04.636 "req_id": 1 00:14:04.636 } 00:14:04.636 Got JSON-RPC error response 00:14:04.636 response: 00:14:04.636 { 00:14:04.636 "code": -19, 00:14:04.636 "message": "No such device" 00:14:04.636 } 00:14:04.636 12:33:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:04.636 12:33:47 -- lvol/snapshot_clone.sh@141 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:04.636 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.636 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.895 
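The rejected call a little further up is the whole point of this case: bdev_lvol_snapshot is refused with -32602 (Invalid argument) when the source is already a snapshot, so snapshots cannot be stacked, and the test then deletes the child volume first, then the snapshot, then the store and the backing malloc bdev. A condensed sketch of the sequence, assuming a running SPDK target on /var/tmp/spdk.sock and using only RPC names that appear verbatim in this log (rpc_cmd appears to be the suite's wrapper around scripts/rpc.py; the UUID placeholders are illustrative):

    scripts/rpc.py bdev_malloc_create 128 512                                 # backing bdev (Malloc2 here)
    scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_test
    scripts/rpc.py bdev_lvol_create -u <lvs_uuid> lvol_test 40                # 40 MiB volume
    scripts/rpc.py bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot        # ok
    scripts/rpc.py bdev_lvol_snapshot lvs_test/lvol_snapshot lvol_snapshot2   # rejected: Invalid argument
    scripts/rpc.py bdev_lvol_delete <lvol_uuid>
    scripts/rpc.py bdev_lvol_delete <snapshot_uuid>
    scripts/rpc.py bdev_lvol_delete_lvstore -u <lvs_uuid>
    scripts/rpc.py bdev_malloc_delete Malloc2

After each delete, bdev_get_bdevs / bdev_lvol_get_lvstores is expected to fail, which is what the "No such device" responses above assert.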
12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.895 12:33:47 -- lvol/snapshot_clone.sh@142 -- # check_leftover_devices 00:14:04.895 12:33:47 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.895 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.895 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.895 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.153 12:33:47 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:05.153 12:33:47 -- lvol/common.sh@26 -- # jq length 00:14:05.153 12:33:47 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:05.153 12:33:47 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:05.153 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.153 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.153 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.153 12:33:47 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:05.153 12:33:47 -- lvol/common.sh@28 -- # jq length 00:14:05.153 ************************************ 00:14:05.153 END TEST test_create_snapshot_of_snapshot 00:14:05.153 ************************************ 00:14:05.153 12:33:47 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:05.153 00:14:05.153 real 0m0.669s 00:14:05.153 user 0m0.136s 00:14:05.153 sys 0m0.027s 00:14:05.153 12:33:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.153 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.153 12:33:47 -- lvol/snapshot_clone.sh@611 -- # run_test test_clone_snapshot_relations test_clone_snapshot_relations 00:14:05.153 12:33:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:05.153 12:33:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.153 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.153 ************************************ 00:14:05.153 START TEST test_clone_snapshot_relations 00:14:05.153 ************************************ 00:14:05.153 12:33:47 -- common/autotest_common.sh@1104 -- # test_clone_snapshot_relations 00:14:05.153 12:33:47 -- lvol/snapshot_clone.sh@149 -- # rpc_cmd bdev_malloc_create 128 512 00:14:05.153 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.153 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@149 -- # malloc_name=Malloc3 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@150 -- # rpc_cmd bdev_lvol_create_lvstore Malloc3 lvs_test 00:14:05.413 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.413 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@150 -- # lvs_uuid=1cf73c20-ced1-4c77-9079-5f310f2b6a26 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@153 -- # round_down 20 00:14:05.413 12:33:47 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:05.413 12:33:47 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:05.413 12:33:47 -- lvol/common.sh@36 -- # echo 20 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@153 -- # lvol_size_mb=20 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@154 -- # lvol_size=20971520 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@156 -- # rpc_cmd bdev_lvol_create -u 1cf73c20-ced1-4c77-9079-5f310f2b6a26 lvol_test 20 00:14:05.413 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.413 12:33:47 -- common/autotest_common.sh@10 
-- # set +x 00:14:05.413 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@156 -- # lvol_uuid=37e39ecc-0c87-458d-bce3-0a41d7ee6e2f 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@157 -- # rpc_cmd bdev_get_bdevs -b 37e39ecc-0c87-458d-bce3-0a41d7ee6e2f 00:14:05.413 12:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.413 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 12:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@157 -- # lvol='[ 00:14:05.413 { 00:14:05.413 "name": "37e39ecc-0c87-458d-bce3-0a41d7ee6e2f", 00:14:05.413 "aliases": [ 00:14:05.413 "lvs_test/lvol_test" 00:14:05.413 ], 00:14:05.413 "product_name": "Logical Volume", 00:14:05.413 "block_size": 512, 00:14:05.413 "num_blocks": 40960, 00:14:05.413 "uuid": "37e39ecc-0c87-458d-bce3-0a41d7ee6e2f", 00:14:05.413 "assigned_rate_limits": { 00:14:05.413 "rw_ios_per_sec": 0, 00:14:05.413 "rw_mbytes_per_sec": 0, 00:14:05.413 "r_mbytes_per_sec": 0, 00:14:05.413 "w_mbytes_per_sec": 0 00:14:05.413 }, 00:14:05.413 "claimed": false, 00:14:05.413 "zoned": false, 00:14:05.413 "supported_io_types": { 00:14:05.413 "read": true, 00:14:05.413 "write": true, 00:14:05.413 "unmap": true, 00:14:05.413 "write_zeroes": true, 00:14:05.413 "flush": false, 00:14:05.413 "reset": true, 00:14:05.413 "compare": false, 00:14:05.413 "compare_and_write": false, 00:14:05.413 "abort": false, 00:14:05.413 "nvme_admin": false, 00:14:05.413 "nvme_io": false 00:14:05.413 }, 00:14:05.413 "memory_domains": [ 00:14:05.413 { 00:14:05.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.413 "dma_device_type": 2 00:14:05.413 } 00:14:05.413 ], 00:14:05.413 "driver_specific": { 00:14:05.413 "lvol": { 00:14:05.413 "lvol_store_uuid": "1cf73c20-ced1-4c77-9079-5f310f2b6a26", 00:14:05.413 "base_bdev": "Malloc3", 00:14:05.413 "thin_provision": false, 00:14:05.413 "snapshot": false, 00:14:05.413 "clone": false, 00:14:05.413 "esnap_clone": false 00:14:05.413 } 00:14:05.413 } 00:14:05.413 } 00:14:05.413 ]' 00:14:05.413 12:33:47 -- lvol/snapshot_clone.sh@160 -- # nbd_start_disks /var/tmp/spdk.sock 37e39ecc-0c87-458d-bce3-0a41d7ee6e2f /dev/nbd0 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('37e39ecc-0c87-458d-bce3-0a41d7ee6e2f') 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@12 -- # local i 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.413 12:33:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 37e39ecc-0c87-458d-bce3-0a41d7ee6e2f /dev/nbd0 00:14:05.672 /dev/nbd0 00:14:05.672 12:33:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.672 12:33:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.672 12:33:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:05.672 12:33:48 -- common/autotest_common.sh@857 -- # local i 00:14:05.672 12:33:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:05.672 12:33:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:05.673 12:33:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 
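For reference, nbd_start_disks maps an lvol bdev to a local /dev/nbdX node through the nbd_start_disk RPC and then waits until the kernel lists the device; waitfornbd above does that by grepping /proc/partitions, capped at 20 attempts. A minimal by-hand equivalent, assuming the kernel nbd module is loaded and the device node is free (the polling loop here is a simplification of the helper's retry logic):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk <bdev_uuid> /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done   # wait for the device to appear
    # ... run I/O against /dev/nbd0 ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0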
00:14:05.673 12:33:48 -- common/autotest_common.sh@861 -- # break 00:14:05.673 12:33:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:05.673 12:33:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:05.673 12:33:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:05.673 1+0 records in 00:14:05.673 1+0 records out 00:14:05.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308435 s, 13.3 MB/s 00:14:05.673 12:33:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:05.673 12:33:48 -- common/autotest_common.sh@874 -- # size=4096 00:14:05.673 12:33:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:05.673 12:33:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:05.673 12:33:48 -- common/autotest_common.sh@877 -- # return 0 00:14:05.673 12:33:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.673 12:33:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.673 12:33:48 -- lvol/snapshot_clone.sh@161 -- # run_fio_test /dev/nbd0 0 20971520 write 0xcc 00:14:05.673 12:33:48 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:05.673 12:33:48 -- lvol/common.sh@41 -- # local offset=0 00:14:05.673 12:33:48 -- lvol/common.sh@42 -- # local size=20971520 00:14:05.673 12:33:48 -- lvol/common.sh@43 -- # local rw=write 00:14:05.673 12:33:48 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:05.673 12:33:48 -- lvol/common.sh@45 -- # local extra_params= 00:14:05.673 12:33:48 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:05.673 12:33:48 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:05.673 12:33:48 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:05.673 12:33:48 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=20971520 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:05.673 12:33:48 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=20971520 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:05.673 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:05.673 fio-3.35 00:14:05.673 Starting 1 process 00:14:07.053 00:14:07.053 fio_test: (groupid=0, jobs=1): err= 0: pid=61087: Tue Oct 1 12:33:49 2024 00:14:07.053 read: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(20.0MiB/402msec) 00:14:07.053 clat (usec): min=58, max=452, avg=77.22, stdev=16.53 00:14:07.053 lat (usec): min=58, max=452, avg=77.31, stdev=16.53 00:14:07.053 clat percentiles (usec): 00:14:07.053 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 65], 00:14:07.053 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 79], 00:14:07.053 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 98], 95.00th=[ 108], 00:14:07.053 | 99.00th=[ 125], 99.50th=[ 135], 99.90th=[ 155], 99.95th=[ 310], 00:14:07.053 | 99.99th=[ 453] 00:14:07.053 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(20.0MiB/440msec); 0 zone resets 00:14:07.053 clat (usec): min=61, max=688, avg=84.10, stdev=18.36 00:14:07.053 lat (usec): min=62, max=689, avg=84.94, stdev=18.51 00:14:07.053 clat percentiles (usec): 00:14:07.053 | 1.00th=[ 64], 5.00th=[ 70], 10.00th=[ 70], 20.00th=[ 72], 00:14:07.053 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 
84], 00:14:07.053 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 114], 00:14:07.053 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 182], 99.95th=[ 265], 00:14:07.053 | 99.99th=[ 693] 00:14:07.053 bw ( KiB/s): min=40960, max=40960, per=88.00%, avg=40960.00, stdev= 0.00, samples=1 00:14:07.053 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=1 00:14:07.053 lat (usec) : 100=88.96%, 250=10.98%, 500=0.04%, 750=0.02% 00:14:07.053 cpu : usr=2.73%, sys=9.51%, ctx=10242, majf=0, minf=146 00:14:07.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.054 issued rwts: total=5120,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.054 00:14:07.054 Run status group 0 (all jobs): 00:14:07.054 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=20.0MiB (21.0MB), run=402-402msec 00:14:07.054 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=20.0MiB (21.0MB), run=440-440msec 00:14:07.054 00:14:07.054 Disk stats (read/write): 00:14:07.054 nbd0: ios=2468/5120, merge=0/0, ticks=186/385, in_queue=571, util=86.50% 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@162 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@51 -- # local i 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@41 -- # break 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@165 -- # rpc_cmd bdev_lvol_clone lvs_test/lvol_test clone_test 00:14:07.054 12:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.054 12:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.054 request: 00:14:07.054 { 00:14:07.054 "snapshot_name": "lvs_test/lvol_test", 00:14:07.054 "clone_name": "clone_test", 00:14:07.054 "method": "bdev_lvol_clone", 00:14:07.054 "req_id": 1 00:14:07.054 } 00:14:07.054 Got JSON-RPC error response 00:14:07.054 response: 00:14:07.054 { 00:14:07.054 "code": -32602, 00:14:07.054 "message": "Invalid argument" 00:14:07.054 } 00:14:07.054 12:33:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@168 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot 00:14:07.054 12:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.054 12:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.054 12:33:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
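The pair of results above captures the ordering rule for clones: bdev_lvol_clone rejects a writable volume as its source (the attempt on lvs_test/lvol_test fails with -32602), so a snapshot is taken first and the clones that follow are created from that snapshot instead. Condensed, with the exact names used in this run:

    scripts/rpc.py bdev_lvol_clone lvs_test/lvol_test clone_test          # rejected: not a snapshot
    scripts/rpc.py bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot    # ok
    scripts/rpc.py bdev_lvol_clone lvs_test/lvol_snapshot clone_test1     # ok
    scripts/rpc.py bdev_lvol_clone lvs_test/lvol_snapshot clone_test2     # ok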
00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@168 -- # snapshot_uuid=bf89f260-8e52-4d85-9174-32b550fbf3d4 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@171 -- # rpc_cmd bdev_lvol_clone lvs_test/lvol_test clone_test 00:14:07.054 12:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.054 12:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.054 request: 00:14:07.054 { 00:14:07.054 "snapshot_name": "lvs_test/lvol_test", 00:14:07.054 "clone_name": "clone_test", 00:14:07.054 "method": "bdev_lvol_clone", 00:14:07.054 "req_id": 1 00:14:07.054 } 00:14:07.054 Got JSON-RPC error response 00:14:07.054 response: 00:14:07.054 { 00:14:07.054 "code": -32602, 00:14:07.054 "message": "Invalid argument" 00:14:07.054 } 00:14:07.054 12:33:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@174 -- # rpc_cmd bdev_lvol_clone lvs_test/lvol_snapshot clone_test1 00:14:07.054 12:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.054 12:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.054 12:33:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@174 -- # clone_uuid1=1dcf34bb-0f8f-477e-bd8a-7a275e4c248b 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@175 -- # rpc_cmd bdev_lvol_clone lvs_test/lvol_snapshot clone_test2 00:14:07.054 12:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.054 12:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.054 12:33:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@175 -- # clone_uuid2=fb426244-ec70-4546-b667-2255130318a2 00:14:07.054 12:33:49 -- lvol/snapshot_clone.sh@179 -- # nbd_start_disks /var/tmp/spdk.sock 1dcf34bb-0f8f-477e-bd8a-7a275e4c248b /dev/nbd0 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('1dcf34bb-0f8f-477e-bd8a-7a275e4c248b') 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@12 -- # local i 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.054 12:33:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 1dcf34bb-0f8f-477e-bd8a-7a275e4c248b /dev/nbd0 00:14:07.313 /dev/nbd0 00:14:07.313 12:33:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.313 12:33:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.313 12:33:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:07.313 12:33:49 -- common/autotest_common.sh@857 -- # local i 00:14:07.313 12:33:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:07.313 12:33:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:07.313 12:33:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:07.313 12:33:49 -- common/autotest_common.sh@861 -- # break 00:14:07.313 12:33:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:07.313 12:33:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:07.313 12:33:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:07.313 1+0 records in 00:14:07.313 1+0 
records out 00:14:07.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029791 s, 13.7 MB/s 00:14:07.313 12:33:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:07.313 12:33:49 -- common/autotest_common.sh@874 -- # size=4096 00:14:07.313 12:33:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:07.313 12:33:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:07.313 12:33:49 -- common/autotest_common.sh@877 -- # return 0 00:14:07.313 12:33:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.313 12:33:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.313 12:33:49 -- lvol/snapshot_clone.sh@180 -- # fill_size=10485760 00:14:07.313 12:33:49 -- lvol/snapshot_clone.sh@181 -- # run_fio_test /dev/nbd0 0 10485760 write 0xaa 00:14:07.313 12:33:49 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:07.313 12:33:49 -- lvol/common.sh@41 -- # local offset=0 00:14:07.313 12:33:49 -- lvol/common.sh@42 -- # local size=10485760 00:14:07.313 12:33:49 -- lvol/common.sh@43 -- # local rw=write 00:14:07.313 12:33:49 -- lvol/common.sh@44 -- # local pattern=0xaa 00:14:07.313 12:33:49 -- lvol/common.sh@45 -- # local extra_params= 00:14:07.313 12:33:49 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:07.313 12:33:49 -- lvol/common.sh@48 -- # [[ -n 0xaa ]] 00:14:07.313 12:33:49 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xaa --verify_state_save=0' 00:14:07.313 12:33:49 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=10485760 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xaa --verify_state_save=0' 00:14:07.313 12:33:49 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=10485760 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xaa --verify_state_save=0 00:14:07.573 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:07.573 fio-3.35 00:14:07.573 Starting 1 process 00:14:08.141 00:14:08.141 fio_test: (groupid=0, jobs=1): err= 0: pid=61129: Tue Oct 1 12:33:50 2024 00:14:08.141 read: IOPS=13.2k, BW=51.5MiB/s (54.1MB/s)(10.0MiB/194msec) 00:14:08.141 clat (usec): min=59, max=338, avg=74.53, stdev=16.07 00:14:08.141 lat (usec): min=59, max=338, avg=74.60, stdev=16.07 00:14:08.141 clat percentiles (usec): 00:14:08.141 | 1.00th=[ 62], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 64], 00:14:08.141 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 72], 00:14:08.141 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 100], 00:14:08.141 | 99.00th=[ 123], 99.50th=[ 139], 99.90th=[ 273], 99.95th=[ 289], 00:14:08.141 | 99.99th=[ 338] 00:14:08.141 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(10.0MiB/242msec); 0 zone resets 00:14:08.141 clat (usec): min=60, max=1610, avg=92.43, stdev=44.30 00:14:08.141 lat (usec): min=61, max=1633, avg=93.38, stdev=44.64 00:14:08.141 clat percentiles (usec): 00:14:08.141 | 1.00th=[ 63], 5.00th=[ 69], 10.00th=[ 80], 20.00th=[ 83], 00:14:08.141 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 92], 00:14:08.141 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 117], 00:14:08.141 | 99.00th=[ 135], 99.50th=[ 147], 99.90th=[ 1139], 99.95th=[ 1156], 00:14:08.141 | 99.99th=[ 1614] 00:14:08.141 lat (usec) : 100=86.45%, 250=13.44%, 500=0.06% 00:14:08.141 lat (msec) : 2=0.06% 00:14:08.141 cpu : usr=2.07%, sys=9.43%, 
ctx=10241, majf=0, minf=85 00:14:08.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.141 issued rwts: total=2560,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.141 00:14:08.141 Run status group 0 (all jobs): 00:14:08.141 READ: bw=51.5MiB/s (54.1MB/s), 51.5MiB/s-51.5MiB/s (54.1MB/s-54.1MB/s), io=10.0MiB (10.5MB), run=194-194msec 00:14:08.141 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=10.0MiB (10.5MB), run=242-242msec 00:14:08.141 00:14:08.141 Disk stats (read/write): 00:14:08.141 nbd0: ios=2031/2560, merge=0/0, ticks=142/215, in_queue=358, util=79.92% 00:14:08.141 12:33:50 -- lvol/snapshot_clone.sh@184 -- # nbd_start_disks /var/tmp/spdk.sock bf89f260-8e52-4d85-9174-32b550fbf3d4 /dev/nbd1 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('bf89f260-8e52-4d85-9174-32b550fbf3d4') 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@12 -- # local i 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.141 12:33:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk bf89f260-8e52-4d85-9174-32b550fbf3d4 /dev/nbd1 00:14:08.401 /dev/nbd1 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.401 12:33:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:08.401 12:33:50 -- common/autotest_common.sh@857 -- # local i 00:14:08.401 12:33:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:08.401 12:33:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:08.401 12:33:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:08.401 12:33:50 -- common/autotest_common.sh@861 -- # break 00:14:08.401 12:33:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:08.401 12:33:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:08.401 12:33:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:08.401 1+0 records in 00:14:08.401 1+0 records out 00:14:08.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358709 s, 11.4 MB/s 00:14:08.401 12:33:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:08.401 12:33:50 -- common/autotest_common.sh@874 -- # size=4096 00:14:08.401 12:33:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:08.401 12:33:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:08.401 12:33:50 -- common/autotest_common.sh@877 -- # return 0 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.401 12:33:50 -- lvol/snapshot_clone.sh@185 -- # nbd_start_disks /var/tmp/spdk.sock fb426244-ec70-4546-b667-2255130318a2 /dev/nbd2 00:14:08.401 
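The fio jobs in this test come from run_fio_test, which writes a fixed byte pattern and verifies it in the same pass; the full command template is echoed in the log just before each run. The base volume was filled with 0xcc over all 20 MiB, and clone_test1 was then overwritten with 0xaa over its first 10 MiB, which is why the cmp further down reports /dev/nbd0 and /dev/nbd1 differing at byte 1 while the snapshot and the untouched clone_test2 compare identical. A trimmed-down form of the second invocation, copied from the template above:

    fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=10485760 \
        --rw=write --direct=1 \
        --do_verify=1 --verify=pattern --verify_pattern=0xaa --verify_state_save=0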
12:33:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('fb426244-ec70-4546-b667-2255130318a2') 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd2') 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@12 -- # local i 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.401 12:33:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk fb426244-ec70-4546-b667-2255130318a2 /dev/nbd2 00:14:08.661 /dev/nbd2 00:14:08.661 12:33:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:14:08.661 12:33:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:14:08.661 12:33:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:14:08.661 12:33:50 -- common/autotest_common.sh@857 -- # local i 00:14:08.661 12:33:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:08.661 12:33:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:08.661 12:33:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:14:08.661 12:33:50 -- common/autotest_common.sh@861 -- # break 00:14:08.661 12:33:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:08.661 12:33:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:08.661 12:33:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:08.661 1+0 records in 00:14:08.661 1+0 records out 00:14:08.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356191 s, 11.5 MB/s 00:14:08.661 12:33:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:08.661 12:33:50 -- common/autotest_common.sh@874 -- # size=4096 00:14:08.661 12:33:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:08.661 12:33:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:08.661 12:33:51 -- common/autotest_common.sh@877 -- # return 0 00:14:08.661 12:33:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.661 12:33:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.661 12:33:51 -- lvol/snapshot_clone.sh@186 -- # sleep 1 00:14:09.597 12:33:52 -- lvol/snapshot_clone.sh@187 -- # cmp /dev/nbd1 /dev/nbd2 00:14:09.597 12:33:52 -- lvol/snapshot_clone.sh@189 -- # cmp /dev/nbd0 /dev/nbd1 00:14:09.597 /dev/nbd0 /dev/nbd1 differ: byte 1, line 1 00:14:09.597 12:33:52 -- lvol/snapshot_clone.sh@191 -- # rpc_cmd bdev_get_bdevs -b lvs_test/lvol_snapshot 00:14:09.597 12:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.597 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.856 12:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@191 -- # snapshot_bdev='[ 00:14:09.857 { 00:14:09.857 "name": "bf89f260-8e52-4d85-9174-32b550fbf3d4", 00:14:09.857 "aliases": [ 00:14:09.857 "lvs_test/lvol_snapshot" 00:14:09.857 ], 00:14:09.857 "product_name": "Logical Volume", 00:14:09.857 "block_size": 512, 00:14:09.857 "num_blocks": 40960, 00:14:09.857 "uuid": "bf89f260-8e52-4d85-9174-32b550fbf3d4", 00:14:09.857 "assigned_rate_limits": { 00:14:09.857 "rw_ios_per_sec": 0, 00:14:09.857 "rw_mbytes_per_sec": 0, 00:14:09.857 "r_mbytes_per_sec": 0, 00:14:09.857 
"w_mbytes_per_sec": 0 00:14:09.857 }, 00:14:09.857 "claimed": false, 00:14:09.857 "zoned": false, 00:14:09.857 "supported_io_types": { 00:14:09.857 "read": true, 00:14:09.857 "write": false, 00:14:09.857 "unmap": false, 00:14:09.857 "write_zeroes": false, 00:14:09.857 "flush": false, 00:14:09.857 "reset": true, 00:14:09.857 "compare": false, 00:14:09.857 "compare_and_write": false, 00:14:09.857 "abort": false, 00:14:09.857 "nvme_admin": false, 00:14:09.857 "nvme_io": false 00:14:09.857 }, 00:14:09.857 "memory_domains": [ 00:14:09.857 { 00:14:09.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.857 "dma_device_type": 2 00:14:09.857 } 00:14:09.857 ], 00:14:09.857 "driver_specific": { 00:14:09.857 "lvol": { 00:14:09.857 "lvol_store_uuid": "1cf73c20-ced1-4c77-9079-5f310f2b6a26", 00:14:09.857 "base_bdev": "Malloc3", 00:14:09.857 "thin_provision": false, 00:14:09.857 "snapshot": true, 00:14:09.857 "clone": false, 00:14:09.857 "clones": [ 00:14:09.857 "lvol_test", 00:14:09.857 "clone_test1", 00:14:09.857 "clone_test2" 00:14:09.857 ], 00:14:09.857 "esnap_clone": false 00:14:09.857 } 00:14:09.857 } 00:14:09.857 } 00:14:09.857 ]' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@192 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone_test1 00:14:09.857 12:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.857 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.857 12:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@192 -- # clone_bdev1='[ 00:14:09.857 { 00:14:09.857 "name": "1dcf34bb-0f8f-477e-bd8a-7a275e4c248b", 00:14:09.857 "aliases": [ 00:14:09.857 "lvs_test/clone_test1" 00:14:09.857 ], 00:14:09.857 "product_name": "Logical Volume", 00:14:09.857 "block_size": 512, 00:14:09.857 "num_blocks": 40960, 00:14:09.857 "uuid": "1dcf34bb-0f8f-477e-bd8a-7a275e4c248b", 00:14:09.857 "assigned_rate_limits": { 00:14:09.857 "rw_ios_per_sec": 0, 00:14:09.857 "rw_mbytes_per_sec": 0, 00:14:09.857 "r_mbytes_per_sec": 0, 00:14:09.857 "w_mbytes_per_sec": 0 00:14:09.857 }, 00:14:09.857 "claimed": false, 00:14:09.857 "zoned": false, 00:14:09.857 "supported_io_types": { 00:14:09.857 "read": true, 00:14:09.857 "write": true, 00:14:09.857 "unmap": true, 00:14:09.857 "write_zeroes": true, 00:14:09.857 "flush": false, 00:14:09.857 "reset": true, 00:14:09.857 "compare": false, 00:14:09.857 "compare_and_write": false, 00:14:09.857 "abort": false, 00:14:09.857 "nvme_admin": false, 00:14:09.857 "nvme_io": false 00:14:09.857 }, 00:14:09.857 "memory_domains": [ 00:14:09.857 { 00:14:09.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.857 "dma_device_type": 2 00:14:09.857 } 00:14:09.857 ], 00:14:09.857 "driver_specific": { 00:14:09.857 "lvol": { 00:14:09.857 "lvol_store_uuid": "1cf73c20-ced1-4c77-9079-5f310f2b6a26", 00:14:09.857 "base_bdev": "Malloc3", 00:14:09.857 "thin_provision": true, 00:14:09.857 "snapshot": false, 00:14:09.857 "clone": true, 00:14:09.857 "base_snapshot": "lvol_snapshot", 00:14:09.857 "esnap_clone": false 00:14:09.857 } 00:14:09.857 } 00:14:09.857 } 00:14:09.857 ]' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@193 -- # rpc_cmd bdev_get_bdevs -b lvs_test/lvol_test 00:14:09.857 12:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.857 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.857 12:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@193 -- # clone_bdev2='[ 00:14:09.857 { 00:14:09.857 "name": "37e39ecc-0c87-458d-bce3-0a41d7ee6e2f", 
00:14:09.857 "aliases": [ 00:14:09.857 "lvs_test/lvol_test" 00:14:09.857 ], 00:14:09.857 "product_name": "Logical Volume", 00:14:09.857 "block_size": 512, 00:14:09.857 "num_blocks": 40960, 00:14:09.857 "uuid": "37e39ecc-0c87-458d-bce3-0a41d7ee6e2f", 00:14:09.857 "assigned_rate_limits": { 00:14:09.857 "rw_ios_per_sec": 0, 00:14:09.857 "rw_mbytes_per_sec": 0, 00:14:09.857 "r_mbytes_per_sec": 0, 00:14:09.857 "w_mbytes_per_sec": 0 00:14:09.857 }, 00:14:09.857 "claimed": false, 00:14:09.857 "zoned": false, 00:14:09.857 "supported_io_types": { 00:14:09.857 "read": true, 00:14:09.857 "write": true, 00:14:09.857 "unmap": true, 00:14:09.857 "write_zeroes": true, 00:14:09.857 "flush": false, 00:14:09.857 "reset": true, 00:14:09.857 "compare": false, 00:14:09.857 "compare_and_write": false, 00:14:09.857 "abort": false, 00:14:09.857 "nvme_admin": false, 00:14:09.857 "nvme_io": false 00:14:09.857 }, 00:14:09.857 "memory_domains": [ 00:14:09.857 { 00:14:09.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.857 "dma_device_type": 2 00:14:09.857 } 00:14:09.857 ], 00:14:09.857 "driver_specific": { 00:14:09.857 "lvol": { 00:14:09.857 "lvol_store_uuid": "1cf73c20-ced1-4c77-9079-5f310f2b6a26", 00:14:09.857 "base_bdev": "Malloc3", 00:14:09.857 "thin_provision": true, 00:14:09.857 "snapshot": false, 00:14:09.857 "clone": true, 00:14:09.857 "base_snapshot": "lvol_snapshot", 00:14:09.857 "esnap_clone": false 00:14:09.857 } 00:14:09.857 } 00:14:09.857 } 00:14:09.857 ]' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@196 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@196 -- # '[' true = true ']' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@197 -- # jq '.[].driver_specific.lvol.clone' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@197 -- # '[' false = false ']' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@198 -- # jq '.[].driver_specific.lvol.clones|sort' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@198 -- # jq '.|sort' 00:14:09.857 12:33:52 -- lvol/snapshot_clone.sh@198 -- # '[' '[ 00:14:09.857 "clone_test1", 00:14:09.857 "clone_test2", 00:14:09.857 "lvol_test" 00:14:09.857 ]' = '[ 00:14:09.857 "clone_test1", 00:14:09.858 "clone_test2", 00:14:09.858 "lvol_test" 00:14:09.858 ]' ']' 00:14:09.858 12:33:52 -- lvol/snapshot_clone.sh@201 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@201 -- # '[' false = false ']' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@202 -- # jq '.[].driver_specific.lvol.clone' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@202 -- # '[' true = true ']' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@203 -- # jq '.[].driver_specific.lvol.base_snapshot' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@203 -- # '[' '"lvol_snapshot"' = '"lvol_snapshot"' ']' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@206 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@206 -- # '[' false = false ']' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@207 -- # jq '.[].driver_specific.lvol.clone' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@207 -- # '[' true = true ']' 00:14:10.116 12:33:52 -- lvol/snapshot_clone.sh@208 -- # jq '.[].driver_specific.lvol.base_snapshot' 00:14:10.375 12:33:52 -- lvol/snapshot_clone.sh@208 -- # '[' '"lvol_snapshot"' = '"lvol_snapshot"' ']' 00:14:10.375 12:33:52 -- lvol/snapshot_clone.sh@211 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:10.375 12:33:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.375 
12:33:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:10.375 12:33:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.375 12:33:52 -- bdev/nbd_common.sh@51 -- # local i 00:14:10.375 12:33:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.375 12:33:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@41 -- # break 00:14:10.634 12:33:52 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.634 12:33:52 -- lvol/snapshot_clone.sh@212 -- # rpc_cmd bdev_lvol_delete 1dcf34bb-0f8f-477e-bd8a-7a275e4c248b 00:14:10.634 12:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.634 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:10.634 12:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.635 12:33:52 -- lvol/snapshot_clone.sh@213 -- # rpc_cmd bdev_get_bdevs -b lvs_test/lvol_snapshot 00:14:10.635 12:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.635 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:10.635 12:33:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.635 12:33:53 -- lvol/snapshot_clone.sh@213 -- # snapshot_bdev='[ 00:14:10.635 { 00:14:10.635 "name": "bf89f260-8e52-4d85-9174-32b550fbf3d4", 00:14:10.635 "aliases": [ 00:14:10.635 "lvs_test/lvol_snapshot" 00:14:10.635 ], 00:14:10.635 "product_name": "Logical Volume", 00:14:10.635 "block_size": 512, 00:14:10.635 "num_blocks": 40960, 00:14:10.635 "uuid": "bf89f260-8e52-4d85-9174-32b550fbf3d4", 00:14:10.635 "assigned_rate_limits": { 00:14:10.635 "rw_ios_per_sec": 0, 00:14:10.635 "rw_mbytes_per_sec": 0, 00:14:10.635 "r_mbytes_per_sec": 0, 00:14:10.635 "w_mbytes_per_sec": 0 00:14:10.635 }, 00:14:10.635 "claimed": false, 00:14:10.635 "zoned": false, 00:14:10.635 "supported_io_types": { 00:14:10.635 "read": true, 00:14:10.635 "write": false, 00:14:10.635 "unmap": false, 00:14:10.635 "write_zeroes": false, 00:14:10.635 "flush": false, 00:14:10.635 "reset": true, 00:14:10.635 "compare": false, 00:14:10.635 "compare_and_write": false, 00:14:10.635 "abort": false, 00:14:10.635 "nvme_admin": false, 00:14:10.635 "nvme_io": false 00:14:10.635 }, 00:14:10.635 "memory_domains": [ 00:14:10.635 { 00:14:10.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.635 "dma_device_type": 2 00:14:10.635 } 00:14:10.635 ], 00:14:10.635 "driver_specific": { 00:14:10.635 "lvol": { 00:14:10.635 "lvol_store_uuid": "1cf73c20-ced1-4c77-9079-5f310f2b6a26", 00:14:10.635 "base_bdev": "Malloc3", 00:14:10.635 "thin_provision": false, 00:14:10.635 "snapshot": true, 00:14:10.635 "clone": false, 00:14:10.635 "clones": [ 00:14:10.635 "lvol_test", 00:14:10.635 "clone_test2" 00:14:10.635 ], 00:14:10.635 "esnap_clone": false 00:14:10.635 } 00:14:10.635 } 00:14:10.635 } 00:14:10.635 ]' 00:14:10.635 12:33:53 -- lvol/snapshot_clone.sh@214 -- # jq '.[].driver_specific.lvol.clones|sort' 00:14:10.635 12:33:53 -- lvol/snapshot_clone.sh@214 -- # jq '.|sort' 00:14:10.635 12:33:53 -- lvol/snapshot_clone.sh@214 -- # '[' '[ 00:14:10.635 "clone_test2", 
00:14:10.635 "lvol_test" 00:14:10.635 ]' = '[ 00:14:10.635 "clone_test2", 00:14:10.635 "lvol_test" 00:14:10.635 ]' ']' 00:14:10.635 12:33:53 -- lvol/snapshot_clone.sh@217 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:10.635 12:33:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.635 12:33:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:10.635 12:33:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.635 12:33:53 -- bdev/nbd_common.sh@51 -- # local i 00:14:10.635 12:33:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.635 12:33:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@41 -- # break 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.893 12:33:53 -- lvol/snapshot_clone.sh@218 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd2 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd2') 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@51 -- # local i 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.893 12:33:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd2 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@41 -- # break 00:14:11.459 12:33:53 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.459 12:33:53 -- lvol/snapshot_clone.sh@219 -- # rpc_cmd bdev_lvol_delete 37e39ecc-0c87-458d-bce3-0a41d7ee6e2f 00:14:11.459 12:33:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.459 12:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 12:33:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.459 12:33:53 -- lvol/snapshot_clone.sh@220 -- # rpc_cmd bdev_lvol_delete fb426244-ec70-4546-b667-2255130318a2 00:14:11.459 12:33:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.459 12:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 12:33:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.459 12:33:53 -- lvol/snapshot_clone.sh@221 -- # rpc_cmd bdev_lvol_delete bf89f260-8e52-4d85-9174-32b550fbf3d4 00:14:11.459 12:33:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.459 12:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 12:33:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.459 12:33:53 -- lvol/snapshot_clone.sh@222 -- # rpc_cmd bdev_lvol_delete_lvstore -u 
1cf73c20-ced1-4c77-9079-5f310f2b6a26 00:14:11.459 12:33:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.459 12:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 12:33:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.459 12:33:53 -- lvol/snapshot_clone.sh@223 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:11.459 12:33:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.459 12:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:11.718 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.718 12:33:54 -- lvol/snapshot_clone.sh@224 -- # check_leftover_devices 00:14:11.718 12:33:54 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:11.718 12:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.718 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.718 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.719 12:33:54 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:11.719 12:33:54 -- lvol/common.sh@26 -- # jq length 00:14:11.719 12:33:54 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:11.719 12:33:54 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:11.719 12:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.719 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.719 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.719 12:33:54 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:11.719 12:33:54 -- lvol/common.sh@28 -- # jq length 00:14:11.719 ************************************ 00:14:11.719 END TEST test_clone_snapshot_relations 00:14:11.719 ************************************ 00:14:11.719 12:33:54 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:11.719 00:14:11.719 real 0m6.573s 00:14:11.719 user 0m2.822s 00:14:11.719 sys 0m0.649s 00:14:11.719 12:33:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.719 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.719 12:33:54 -- lvol/snapshot_clone.sh@612 -- # run_test test_clone_inflate test_clone_inflate 00:14:11.719 12:33:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:11.719 12:33:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.719 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.719 ************************************ 00:14:11.719 START TEST test_clone_inflate 00:14:11.719 ************************************ 00:14:11.719 12:33:54 -- common/autotest_common.sh@1104 -- # test_clone_inflate 00:14:11.719 12:33:54 -- lvol/snapshot_clone.sh@229 -- # rpc_cmd bdev_malloc_create 128 512 00:14:11.719 12:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.719 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.977 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@229 -- # malloc_name=Malloc4 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@230 -- # rpc_cmd bdev_lvol_create_lvstore Malloc4 lvs_test 00:14:11.977 12:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.977 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.977 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@230 -- # lvs_uuid=f098f8e6-0c2a-4366-a7e6-e7656d6e2911 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@233 -- # round_down 31 00:14:11.977 12:33:54 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:11.977 12:33:54 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:11.977 
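round_down, used here and in the earlier cases, just trims the requested size down to a whole number of 4 MiB clusters (CLUSTER_SIZE_MB=4 in lvol/common.sh): 41 became 40 above, 20 stayed 20, and the 31 requested here comes out as 28. Assuming the helper does nothing beyond the alignment visible in the log, the arithmetic is simply:

    CLUSTER_SIZE_MB=4
    round_down() { echo $(( $1 / CLUSTER_SIZE_MB * CLUSTER_SIZE_MB )); }
    round_down 31   # prints 28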
12:33:54 -- lvol/common.sh@36 -- # echo 28 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@233 -- # lvol_size_mb=28 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@235 -- # rpc_cmd bdev_lvol_create -u f098f8e6-0c2a-4366-a7e6-e7656d6e2911 lvol_test 28 00:14:11.977 12:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.977 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.977 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@235 -- # lvol_uuid=d9158845-6b6a-460e-9b14-c7ca894f3b8e 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@236 -- # rpc_cmd bdev_get_bdevs -b d9158845-6b6a-460e-9b14-c7ca894f3b8e 00:14:11.977 12:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.977 12:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.977 12:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.977 12:33:54 -- lvol/snapshot_clone.sh@236 -- # lvol='[ 00:14:11.977 { 00:14:11.977 "name": "d9158845-6b6a-460e-9b14-c7ca894f3b8e", 00:14:11.977 "aliases": [ 00:14:11.977 "lvs_test/lvol_test" 00:14:11.977 ], 00:14:11.977 "product_name": "Logical Volume", 00:14:11.977 "block_size": 512, 00:14:11.977 "num_blocks": 57344, 00:14:11.977 "uuid": "d9158845-6b6a-460e-9b14-c7ca894f3b8e", 00:14:11.977 "assigned_rate_limits": { 00:14:11.977 "rw_ios_per_sec": 0, 00:14:11.977 "rw_mbytes_per_sec": 0, 00:14:11.977 "r_mbytes_per_sec": 0, 00:14:11.977 "w_mbytes_per_sec": 0 00:14:11.977 }, 00:14:11.977 "claimed": false, 00:14:11.977 "zoned": false, 00:14:11.977 "supported_io_types": { 00:14:11.978 "read": true, 00:14:11.978 "write": true, 00:14:11.978 "unmap": true, 00:14:11.978 "write_zeroes": true, 00:14:11.978 "flush": false, 00:14:11.978 "reset": true, 00:14:11.978 "compare": false, 00:14:11.978 "compare_and_write": false, 00:14:11.978 "abort": false, 00:14:11.978 "nvme_admin": false, 00:14:11.978 "nvme_io": false 00:14:11.978 }, 00:14:11.978 "memory_domains": [ 00:14:11.978 { 00:14:11.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.978 "dma_device_type": 2 00:14:11.978 } 00:14:11.978 ], 00:14:11.978 "driver_specific": { 00:14:11.978 "lvol": { 00:14:11.978 "lvol_store_uuid": "f098f8e6-0c2a-4366-a7e6-e7656d6e2911", 00:14:11.978 "base_bdev": "Malloc4", 00:14:11.978 "thin_provision": false, 00:14:11.978 "snapshot": false, 00:14:11.978 "clone": false, 00:14:11.978 "esnap_clone": false 00:14:11.978 } 00:14:11.978 } 00:14:11.978 } 00:14:11.978 ]' 00:14:11.978 12:33:54 -- lvol/snapshot_clone.sh@239 -- # nbd_start_disks /var/tmp/spdk.sock d9158845-6b6a-460e-9b14-c7ca894f3b8e /dev/nbd0 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('d9158845-6b6a-460e-9b14-c7ca894f3b8e') 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@12 -- # local i 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.978 12:33:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk d9158845-6b6a-460e-9b14-c7ca894f3b8e /dev/nbd0 00:14:12.236 /dev/nbd0 00:14:12.236 12:33:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.236 12:33:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.236 
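In the bdev_get_bdevs dump above, size is reported as "num_blocks" in units of the 512-byte "block_size": 57344 * 512 B = 28 MiB for this volume, just as the 81920- and 40960-block volumes earlier in the log were the 40 MiB and 20 MiB ones. A quick way to cross-check it, assuming jq is available on the host:

    scripts/rpc.py bdev_get_bdevs -b lvs_test/lvol_test \
        | jq '.[0] | .block_size * .num_blocks / (1024*1024)'   # -> 28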
12:33:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:12.236 12:33:54 -- common/autotest_common.sh@857 -- # local i 00:14:12.236 12:33:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:12.236 12:33:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:12.236 12:33:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:12.236 12:33:54 -- common/autotest_common.sh@861 -- # break 00:14:12.236 12:33:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:12.236 12:33:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:12.236 12:33:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:12.236 1+0 records in 00:14:12.236 1+0 records out 00:14:12.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370251 s, 11.1 MB/s 00:14:12.236 12:33:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:12.236 12:33:54 -- common/autotest_common.sh@874 -- # size=4096 00:14:12.236 12:33:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:12.236 12:33:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:12.236 12:33:54 -- common/autotest_common.sh@877 -- # return 0 00:14:12.236 12:33:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.236 12:33:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.236 12:33:54 -- lvol/snapshot_clone.sh@240 -- # run_fio_test /dev/nbd0 0 29360128 write 0xcc 00:14:12.236 12:33:54 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:12.236 12:33:54 -- lvol/common.sh@41 -- # local offset=0 00:14:12.236 12:33:54 -- lvol/common.sh@42 -- # local size=29360128 00:14:12.236 12:33:54 -- lvol/common.sh@43 -- # local rw=write 00:14:12.236 12:33:54 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:12.236 12:33:54 -- lvol/common.sh@45 -- # local extra_params= 00:14:12.236 12:33:54 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:12.236 12:33:54 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:12.236 12:33:54 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:12.236 12:33:54 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=29360128 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:12.236 12:33:54 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=29360128 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:12.495 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:12.495 fio-3.35 00:14:12.495 Starting 1 process 00:14:13.871 00:14:13.871 fio_test: (groupid=0, jobs=1): err= 0: pid=61283: Tue Oct 1 12:33:56 2024 00:14:13.871 read: IOPS=12.5k, BW=48.8MiB/s (51.1MB/s)(28.0MiB/574msec) 00:14:13.871 clat (usec): min=58, max=437, avg=78.72, stdev=16.19 00:14:13.871 lat (usec): min=58, max=437, avg=78.81, stdev=16.20 00:14:13.871 clat percentiles (usec): 00:14:13.871 | 1.00th=[ 63], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 67], 00:14:13.871 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 82], 00:14:13.871 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 98], 95.00th=[ 108], 00:14:13.871 | 99.00th=[ 125], 99.50th=[ 135], 99.90th=[ 155], 99.95th=[ 297], 00:14:13.871 | 99.99th=[ 437] 00:14:13.872 write: 
IOPS=12.5k, BW=48.7MiB/s (51.1MB/s)(28.0MiB/575msec); 0 zone resets 00:14:13.872 clat (usec): min=59, max=1111, avg=78.44, stdev=21.32 00:14:13.872 lat (usec): min=60, max=1112, avg=79.33, stdev=21.56 00:14:13.872 clat percentiles (usec): 00:14:13.872 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 65], 00:14:13.872 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:14:13.872 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 98], 95.00th=[ 109], 00:14:13.872 | 99.00th=[ 129], 99.50th=[ 135], 99.90th=[ 161], 99.95th=[ 229], 00:14:13.872 | 99.99th=[ 1106] 00:14:13.872 bw ( KiB/s): min= 7176, max=50168, per=57.50%, avg=28672.00, stdev=30399.93, samples=2 00:14:13.872 iops : min= 1794, max=12542, avg=7168.00, stdev=7599.98, samples=2 00:14:13.872 lat (usec) : 100=91.34%, 250=8.61%, 500=0.04%, 1000=0.01% 00:14:13.872 lat (msec) : 2=0.01% 00:14:13.872 cpu : usr=3.75%, sys=7.75%, ctx=14460, majf=0, minf=196 00:14:13.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.872 issued rwts: total=7168,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.872 00:14:13.872 Run status group 0 (all jobs): 00:14:13.872 READ: bw=48.8MiB/s (51.1MB/s), 48.8MiB/s-48.8MiB/s (51.1MB/s-51.1MB/s), io=28.0MiB (29.4MB), run=574-574msec 00:14:13.872 WRITE: bw=48.7MiB/s (51.1MB/s), 48.7MiB/s-48.7MiB/s (51.1MB/s-51.1MB/s), io=28.0MiB (29.4MB), run=575-575msec 00:14:13.872 00:14:13.872 Disk stats (read/write): 00:14:13.872 nbd0: ios=7098/7168, merge=0/0, ticks=514/511, in_queue=1024, util=92.02% 00:14:13.872 12:33:56 -- lvol/snapshot_clone.sh@241 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@51 -- # local i 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@41 -- # break 00:14:13.872 12:33:56 -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.872 12:33:56 -- lvol/snapshot_clone.sh@244 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot 00:14:13.872 12:33:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.872 12:33:56 -- common/autotest_common.sh@10 -- # set +x 00:14:13.872 12:33:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.872 12:33:56 -- lvol/snapshot_clone.sh@244 -- # snapshot_uuid=e5fabf63-0ae5-4fc6-9832-9653cf93877e 00:14:13.872 12:33:56 -- lvol/snapshot_clone.sh@247 -- # rpc_cmd bdev_get_bdevs -b d9158845-6b6a-460e-9b14-c7ca894f3b8e 00:14:13.872 12:33:56 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:14:13.872 12:33:56 -- common/autotest_common.sh@10 -- # set +x 00:14:14.131 12:33:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:14.131 12:33:56 -- lvol/snapshot_clone.sh@247 -- # lvol='[ 00:14:14.131 { 00:14:14.131 "name": "d9158845-6b6a-460e-9b14-c7ca894f3b8e", 00:14:14.131 "aliases": [ 00:14:14.131 "lvs_test/lvol_test" 00:14:14.131 ], 00:14:14.131 "product_name": "Logical Volume", 00:14:14.131 "block_size": 512, 00:14:14.131 "num_blocks": 57344, 00:14:14.131 "uuid": "d9158845-6b6a-460e-9b14-c7ca894f3b8e", 00:14:14.131 "assigned_rate_limits": { 00:14:14.131 "rw_ios_per_sec": 0, 00:14:14.131 "rw_mbytes_per_sec": 0, 00:14:14.131 "r_mbytes_per_sec": 0, 00:14:14.131 "w_mbytes_per_sec": 0 00:14:14.131 }, 00:14:14.131 "claimed": false, 00:14:14.131 "zoned": false, 00:14:14.131 "supported_io_types": { 00:14:14.131 "read": true, 00:14:14.131 "write": true, 00:14:14.131 "unmap": true, 00:14:14.131 "write_zeroes": true, 00:14:14.131 "flush": false, 00:14:14.131 "reset": true, 00:14:14.131 "compare": false, 00:14:14.131 "compare_and_write": false, 00:14:14.131 "abort": false, 00:14:14.131 "nvme_admin": false, 00:14:14.131 "nvme_io": false 00:14:14.131 }, 00:14:14.131 "memory_domains": [ 00:14:14.131 { 00:14:14.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.131 "dma_device_type": 2 00:14:14.131 } 00:14:14.131 ], 00:14:14.131 "driver_specific": { 00:14:14.131 "lvol": { 00:14:14.131 "lvol_store_uuid": "f098f8e6-0c2a-4366-a7e6-e7656d6e2911", 00:14:14.131 "base_bdev": "Malloc4", 00:14:14.131 "thin_provision": true, 00:14:14.131 "snapshot": false, 00:14:14.131 "clone": true, 00:14:14.131 "base_snapshot": "lvol_snapshot", 00:14:14.131 "esnap_clone": false 00:14:14.131 } 00:14:14.131 } 00:14:14.131 } 00:14:14.131 ]' 00:14:14.131 12:33:56 -- lvol/snapshot_clone.sh@248 -- # jq '.[].driver_specific.lvol.thin_provision' 00:14:14.131 12:33:56 -- lvol/snapshot_clone.sh@248 -- # '[' true = true ']' 00:14:14.131 12:33:56 -- lvol/snapshot_clone.sh@251 -- # nbd_start_disks /var/tmp/spdk.sock d9158845-6b6a-460e-9b14-c7ca894f3b8e /dev/nbd0 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('d9158845-6b6a-460e-9b14-c7ca894f3b8e') 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@12 -- # local i 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.131 12:33:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk d9158845-6b6a-460e-9b14-c7ca894f3b8e /dev/nbd0 00:14:14.389 /dev/nbd0 00:14:14.389 12:33:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.389 12:33:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.389 12:33:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:14.389 12:33:56 -- common/autotest_common.sh@857 -- # local i 00:14:14.389 12:33:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:14.389 12:33:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:14.389 12:33:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:14.389 12:33:56 -- common/autotest_common.sh@861 -- # break 00:14:14.389 12:33:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:14.389 
12:33:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:14.389 12:33:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:14.389 1+0 records in 00:14:14.389 1+0 records out 00:14:14.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273532 s, 15.0 MB/s 00:14:14.389 12:33:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:14.389 12:33:56 -- common/autotest_common.sh@874 -- # size=4096 00:14:14.389 12:33:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:14.389 12:33:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:14.389 12:33:56 -- common/autotest_common.sh@877 -- # return 0 00:14:14.389 12:33:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.390 12:33:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.390 12:33:56 -- lvol/snapshot_clone.sh@252 -- # first_fill=0 00:14:14.390 12:33:56 -- lvol/snapshot_clone.sh@253 -- # second_fill=22020096 00:14:14.390 12:33:56 -- lvol/snapshot_clone.sh@254 -- # run_fio_test /dev/nbd0 0 1048576 write 0xdd 00:14:14.390 12:33:56 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:14.390 12:33:56 -- lvol/common.sh@41 -- # local offset=0 00:14:14.390 12:33:56 -- lvol/common.sh@42 -- # local size=1048576 00:14:14.390 12:33:56 -- lvol/common.sh@43 -- # local rw=write 00:14:14.390 12:33:56 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:14.390 12:33:56 -- lvol/common.sh@45 -- # local extra_params= 00:14:14.390 12:33:56 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:14.390 12:33:56 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:14.390 12:33:56 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:14.390 12:33:56 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=1048576 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:14.390 12:33:56 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=1048576 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:14.390 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:14.390 fio-3.35 00:14:14.390 Starting 1 process 00:14:14.649 00:14:14.649 fio_test: (groupid=0, jobs=1): err= 0: pid=61326: Tue Oct 1 12:33:56 2024 00:14:14.649 read: IOPS=9481, BW=37.0MiB/s (38.8MB/s)(1024KiB/27msec) 00:14:14.649 clat (usec): min=75, max=382, avg=99.89, stdev=34.68 00:14:14.649 lat (usec): min=75, max=382, avg=100.00, stdev=34.69 00:14:14.649 clat percentiles (usec): 00:14:14.649 | 1.00th=[ 76], 5.00th=[ 77], 10.00th=[ 77], 20.00th=[ 79], 00:14:14.649 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 96], 00:14:14.649 | 70.00th=[ 106], 80.00th=[ 118], 90.00th=[ 133], 95.00th=[ 145], 00:14:14.649 | 99.00th=[ 217], 99.50th=[ 383], 99.90th=[ 383], 99.95th=[ 383], 00:14:14.649 | 99.99th=[ 383] 00:14:14.649 write: IOPS=9481, BW=37.0MiB/s (38.8MB/s)(1024KiB/27msec); 0 zone resets 00:14:14.649 clat (usec): min=70, max=1430, avg=100.73, stdev=85.78 00:14:14.649 lat (usec): min=71, max=1451, avg=101.92, stdev=87.18 00:14:14.649 clat percentiles (usec): 00:14:14.649 | 1.00th=[ 72], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:14:14.649 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 
99], 00:14:14.649 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 133], 00:14:14.649 | 99.00th=[ 157], 99.50th=[ 178], 99.90th=[ 1434], 99.95th=[ 1434], 00:14:14.649 | 99.99th=[ 1434] 00:14:14.649 lat (usec) : 100=62.70%, 250=36.72%, 500=0.39% 00:14:14.649 lat (msec) : 2=0.20% 00:14:14.649 cpu : usr=0.00%, sys=13.46%, ctx=513, majf=0, minf=21 00:14:14.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.649 issued rwts: total=256,256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.649 00:14:14.649 Run status group 0 (all jobs): 00:14:14.649 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=1024KiB (1049kB), run=27-27msec 00:14:14.649 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=1024KiB (1049kB), run=27-27msec 00:14:14.649 00:14:14.649 Disk stats (read/write): 00:14:14.649 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:14.649 12:33:56 -- lvol/snapshot_clone.sh@255 -- # run_fio_test /dev/nbd0 22020096 1048576 write 0xdd 00:14:14.649 12:33:57 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:14.649 12:33:57 -- lvol/common.sh@41 -- # local offset=22020096 00:14:14.649 12:33:57 -- lvol/common.sh@42 -- # local size=1048576 00:14:14.649 12:33:57 -- lvol/common.sh@43 -- # local rw=write 00:14:14.649 12:33:57 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:14.649 12:33:57 -- lvol/common.sh@45 -- # local extra_params= 00:14:14.649 12:33:57 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:14.649 12:33:57 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:14.649 12:33:57 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:14.649 12:33:57 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=22020096 --size=1048576 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:14.649 12:33:57 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=22020096 --size=1048576 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:14.649 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:14.649 fio-3.35 00:14:14.649 Starting 1 process 00:14:14.909 00:14:14.909 fio_test: (groupid=0, jobs=1): err= 0: pid=61329: Tue Oct 1 12:33:57 2024 00:14:14.909 read: IOPS=9846, BW=38.5MiB/s (40.3MB/s)(1024KiB/26msec) 00:14:14.909 clat (usec): min=77, max=570, avg=100.04, stdev=41.41 00:14:14.909 lat (usec): min=77, max=570, avg=100.14, stdev=41.41 00:14:14.909 clat percentiles (usec): 00:14:14.909 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:14:14.909 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 86], 60.00th=[ 95], 00:14:14.909 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 135], 95.00th=[ 141], 00:14:14.909 | 99.00th=[ 174], 99.50th=[ 445], 99.90th=[ 570], 99.95th=[ 570], 00:14:14.909 | 99.99th=[ 570] 00:14:14.909 write: IOPS=9846, BW=38.5MiB/s (40.3MB/s)(1024KiB/26msec); 0 zone resets 00:14:14.909 clat (usec): min=71, max=1298, avg=95.50, stdev=77.46 00:14:14.909 lat (usec): min=72, max=1318, avg=96.79, stdev=78.71 00:14:14.909 clat percentiles (usec): 
00:14:14.909 | 1.00th=[ 73], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 76], 00:14:14.909 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 93], 00:14:14.909 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 126], 00:14:14.909 | 99.00th=[ 143], 99.50th=[ 143], 99.90th=[ 1303], 99.95th=[ 1303], 00:14:14.909 | 99.99th=[ 1303] 00:14:14.909 lat (usec) : 100=72.46%, 250=26.95%, 500=0.20%, 750=0.20% 00:14:14.909 lat (msec) : 2=0.20% 00:14:14.909 cpu : usr=0.00%, sys=11.76%, ctx=514, majf=0, minf=21 00:14:14.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.909 issued rwts: total=256,256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.909 00:14:14.909 Run status group 0 (all jobs): 00:14:14.909 READ: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=1024KiB (1049kB), run=26-26msec 00:14:14.909 WRITE: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=1024KiB (1049kB), run=26-26msec 00:14:14.909 00:14:14.909 Disk stats (read/write): 00:14:14.909 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:14.909 12:33:57 -- lvol/snapshot_clone.sh@256 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:14.909 12:33:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.909 12:33:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.909 12:33:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.909 12:33:57 -- bdev/nbd_common.sh@51 -- # local i 00:14:14.909 12:33:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.909 12:33:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@41 -- # break 00:14:15.168 12:33:57 -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.168 12:33:57 -- lvol/snapshot_clone.sh@259 -- # rpc_cmd bdev_lvol_inflate lvs_test/lvol_test 00:14:15.168 12:33:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.168 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:15.168 12:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.169 12:33:57 -- lvol/snapshot_clone.sh@260 -- # rpc_cmd bdev_get_bdevs -b d9158845-6b6a-460e-9b14-c7ca894f3b8e 00:14:15.169 12:33:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.169 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:15.169 12:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.169 12:33:57 -- lvol/snapshot_clone.sh@260 -- # lvol='[ 00:14:15.169 { 00:14:15.169 "name": "d9158845-6b6a-460e-9b14-c7ca894f3b8e", 00:14:15.169 "aliases": [ 00:14:15.169 "lvs_test/lvol_test" 00:14:15.169 ], 00:14:15.169 "product_name": "Logical Volume", 00:14:15.169 "block_size": 512, 00:14:15.169 "num_blocks": 57344, 00:14:15.169 "uuid": "d9158845-6b6a-460e-9b14-c7ca894f3b8e", 00:14:15.169 
"assigned_rate_limits": { 00:14:15.169 "rw_ios_per_sec": 0, 00:14:15.169 "rw_mbytes_per_sec": 0, 00:14:15.169 "r_mbytes_per_sec": 0, 00:14:15.169 "w_mbytes_per_sec": 0 00:14:15.169 }, 00:14:15.169 "claimed": false, 00:14:15.169 "zoned": false, 00:14:15.169 "supported_io_types": { 00:14:15.169 "read": true, 00:14:15.169 "write": true, 00:14:15.169 "unmap": true, 00:14:15.169 "write_zeroes": true, 00:14:15.169 "flush": false, 00:14:15.169 "reset": true, 00:14:15.169 "compare": false, 00:14:15.169 "compare_and_write": false, 00:14:15.169 "abort": false, 00:14:15.169 "nvme_admin": false, 00:14:15.169 "nvme_io": false 00:14:15.169 }, 00:14:15.169 "memory_domains": [ 00:14:15.169 { 00:14:15.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.169 "dma_device_type": 2 00:14:15.169 } 00:14:15.169 ], 00:14:15.169 "driver_specific": { 00:14:15.169 "lvol": { 00:14:15.169 "lvol_store_uuid": "f098f8e6-0c2a-4366-a7e6-e7656d6e2911", 00:14:15.169 "base_bdev": "Malloc4", 00:14:15.169 "thin_provision": false, 00:14:15.169 "snapshot": false, 00:14:15.169 "clone": false, 00:14:15.169 "esnap_clone": false 00:14:15.169 } 00:14:15.169 } 00:14:15.169 } 00:14:15.169 ]' 00:14:15.169 12:33:57 -- lvol/snapshot_clone.sh@261 -- # jq '.[].driver_specific.lvol.thin_provision' 00:14:15.169 12:33:57 -- lvol/snapshot_clone.sh@261 -- # '[' false = false ']' 00:14:15.169 12:33:57 -- lvol/snapshot_clone.sh@264 -- # rpc_cmd bdev_lvol_delete e5fabf63-0ae5-4fc6-9832-9653cf93877e 00:14:15.169 12:33:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.169 12:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:15.169 12:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.169 12:33:57 -- lvol/snapshot_clone.sh@267 -- # nbd_start_disks /var/tmp/spdk.sock d9158845-6b6a-460e-9b14-c7ca894f3b8e /dev/nbd0 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('d9158845-6b6a-460e-9b14-c7ca894f3b8e') 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@12 -- # local i 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.169 12:33:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk d9158845-6b6a-460e-9b14-c7ca894f3b8e /dev/nbd0 00:14:15.427 /dev/nbd0 00:14:15.427 12:33:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:15.427 12:33:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:15.427 12:33:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:15.427 12:33:57 -- common/autotest_common.sh@857 -- # local i 00:14:15.427 12:33:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:15.427 12:33:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:15.427 12:33:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:15.427 12:33:57 -- common/autotest_common.sh@861 -- # break 00:14:15.427 12:33:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:15.427 12:33:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:15.427 12:33:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:15.427 1+0 records in 00:14:15.427 1+0 records 
out 00:14:15.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371458 s, 11.0 MB/s 00:14:15.427 12:33:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:15.427 12:33:57 -- common/autotest_common.sh@874 -- # size=4096 00:14:15.427 12:33:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:15.427 12:33:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:15.427 12:33:57 -- common/autotest_common.sh@877 -- # return 0 00:14:15.427 12:33:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.427 12:33:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.427 12:33:57 -- lvol/snapshot_clone.sh@268 -- # run_fio_test /dev/nbd0 0 1048576 read 0xdd 00:14:15.427 12:33:57 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:15.428 12:33:57 -- lvol/common.sh@41 -- # local offset=0 00:14:15.428 12:33:57 -- lvol/common.sh@42 -- # local size=1048576 00:14:15.428 12:33:57 -- lvol/common.sh@43 -- # local rw=read 00:14:15.428 12:33:57 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:15.428 12:33:57 -- lvol/common.sh@45 -- # local extra_params= 00:14:15.428 12:33:57 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:15.428 12:33:57 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:15.428 12:33:57 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:15.428 12:33:57 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=1048576 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:15.428 12:33:57 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=1048576 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:15.686 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:15.686 fio-3.35 00:14:15.686 Starting 1 process 00:14:15.686 00:14:15.686 fio_test: (groupid=0, jobs=1): err= 0: pid=61361: Tue Oct 1 12:33:58 2024 00:14:15.686 read: IOPS=9846, BW=38.5MiB/s (40.3MB/s)(1024KiB/26msec) 00:14:15.686 clat (usec): min=69, max=241, avg=98.05, stdev=18.62 00:14:15.686 lat (usec): min=69, max=242, avg=98.20, stdev=18.67 00:14:15.686 clat percentiles (usec): 00:14:15.686 | 1.00th=[ 73], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 00:14:15.686 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 98], 00:14:15.686 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 133], 00:14:15.686 | 99.00th=[ 161], 99.50th=[ 188], 99.90th=[ 243], 99.95th=[ 243], 00:14:15.686 | 99.99th=[ 243] 00:14:15.686 lat (usec) : 100=63.28%, 250=36.72% 00:14:15.686 cpu : usr=0.00%, sys=12.00%, ctx=258, majf=0, minf=9 00:14:15.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.686 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.686 00:14:15.686 Run status group 0 (all jobs): 00:14:15.686 READ: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=1024KiB (1049kB), run=26-26msec 00:14:15.686 00:14:15.686 Disk stats (read/write): 00:14:15.686 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 
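For reference, each run_fio_test call traced here expands to a single fio pattern-verify job against the NBD device that rpc.py attaches for the lvol bdev; the sketch below is a minimal equivalent of the step that just completed, using the offsets and fill patterns shown in this log (the rpc.py path is shortened and <lvol-bdev-uuid> stands in for the UUID above — both are placeholders, not literal values from the trace):
  # attach the inflated clone bdev to a local NBD device over the SPDK RPC socket
  scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk <lvol-bdev-uuid> /dev/nbd0
  # read-verify the first 1 MiB, which was filled with 0xdd before bdev_lvol_inflate and must survive deletion of the snapshot
  fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=1048576 --rw=read \
      --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0
  # detach the device once the pattern checks are done
  scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
The remaining reads in this test repeat the same fio invocation with different --offset/--size/--verify_pattern values, as the following log entries show.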
00:14:15.686 12:33:58 -- lvol/snapshot_clone.sh@269 -- # run_fio_test /dev/nbd0 1048576 20971520 read 0xcc 00:14:15.686 12:33:58 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:15.686 12:33:58 -- lvol/common.sh@41 -- # local offset=1048576 00:14:15.686 12:33:58 -- lvol/common.sh@42 -- # local size=20971520 00:14:15.686 12:33:58 -- lvol/common.sh@43 -- # local rw=read 00:14:15.686 12:33:58 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:15.686 12:33:58 -- lvol/common.sh@45 -- # local extra_params= 00:14:15.686 12:33:58 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:15.686 12:33:58 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:15.686 12:33:58 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:15.686 12:33:58 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=1048576 --size=20971520 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:15.686 12:33:58 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=1048576 --size=20971520 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:15.944 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:15.944 fio-3.35 00:14:15.944 Starting 1 process 00:14:16.510 00:14:16.510 fio_test: (groupid=0, jobs=1): err= 0: pid=61368: Tue Oct 1 12:33:58 2024 00:14:16.510 read: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(20.0MiB/445msec) 00:14:16.510 clat (usec): min=59, max=454, avg=85.44, stdev=18.20 00:14:16.510 lat (usec): min=59, max=454, avg=85.56, stdev=18.21 00:14:16.510 clat percentiles (usec): 00:14:16.510 | 1.00th=[ 62], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 69], 00:14:16.510 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 89], 00:14:16.510 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 117], 00:14:16.510 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 200], 99.95th=[ 281], 00:14:16.511 | 99.99th=[ 453] 00:14:16.511 lat (usec) : 100=84.06%, 250=15.86%, 500=0.08% 00:14:16.511 cpu : usr=4.05%, sys=8.11%, ctx=5120, majf=0, minf=9 00:14:16.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.511 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.511 00:14:16.511 Run status group 0 (all jobs): 00:14:16.511 READ: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=20.0MiB (21.0MB), run=445-445msec 00:14:16.511 00:14:16.511 Disk stats (read/write): 00:14:16.511 nbd0: ios=4401/0, merge=0/0, ticks=346/0, in_queue=346, util=79.67% 00:14:16.511 12:33:58 -- lvol/snapshot_clone.sh@270 -- # run_fio_test /dev/nbd0 22020096 1048576 read 0xdd 00:14:16.511 12:33:58 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:16.511 12:33:58 -- lvol/common.sh@41 -- # local offset=22020096 00:14:16.511 12:33:58 -- lvol/common.sh@42 -- # local size=1048576 00:14:16.511 12:33:58 -- lvol/common.sh@43 -- # local rw=read 00:14:16.511 12:33:58 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:16.511 12:33:58 -- lvol/common.sh@45 -- # local extra_params= 00:14:16.511 12:33:58 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.511 
12:33:58 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:16.511 12:33:58 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:16.511 12:33:58 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=22020096 --size=1048576 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:16.511 12:33:58 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=22020096 --size=1048576 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:16.511 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:16.511 fio-3.35 00:14:16.511 Starting 1 process 00:14:16.772 00:14:16.772 fio_test: (groupid=0, jobs=1): err= 0: pid=61378: Tue Oct 1 12:33:59 2024 00:14:16.772 read: IOPS=9481, BW=37.0MiB/s (38.8MB/s)(1024KiB/27msec) 00:14:16.772 clat (usec): min=66, max=359, avg=100.23, stdev=23.69 00:14:16.772 lat (usec): min=66, max=359, avg=100.52, stdev=23.94 00:14:16.772 clat percentiles (usec): 00:14:16.772 | 1.00th=[ 77], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 86], 00:14:16.772 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 102], 00:14:16.772 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 122], 95.00th=[ 135], 00:14:16.772 | 99.00th=[ 169], 99.50th=[ 196], 99.90th=[ 359], 99.95th=[ 359], 00:14:16.772 | 99.99th=[ 359] 00:14:16.772 lat (usec) : 100=54.69%, 250=44.92%, 500=0.39% 00:14:16.772 cpu : usr=3.85%, sys=7.69%, ctx=291, majf=0, minf=9 00:14:16.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.772 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.772 00:14:16.772 Run status group 0 (all jobs): 00:14:16.772 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=1024KiB (1049kB), run=27-27msec 00:14:16.772 00:14:16.772 Disk stats (read/write): 00:14:16.772 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:16.772 12:33:59 -- lvol/snapshot_clone.sh@271 -- # run_fio_test /dev/nbd0 23068672 6291456 read 0xcc 00:14:16.772 12:33:59 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:16.772 12:33:59 -- lvol/common.sh@41 -- # local offset=23068672 00:14:16.772 12:33:59 -- lvol/common.sh@42 -- # local size=6291456 00:14:16.772 12:33:59 -- lvol/common.sh@43 -- # local rw=read 00:14:16.772 12:33:59 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:16.772 12:33:59 -- lvol/common.sh@45 -- # local extra_params= 00:14:16.772 12:33:59 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.772 12:33:59 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:16.772 12:33:59 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.772 12:33:59 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=23068672 --size=6291456 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.772 12:33:59 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=23068672 --size=6291456 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc 
--verify_state_save=0 00:14:16.772 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:16.772 fio-3.35 00:14:16.772 Starting 1 process 00:14:17.056 00:14:17.056 fio_test: (groupid=0, jobs=1): err= 0: pid=61381: Tue Oct 1 12:33:59 2024 00:14:17.056 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(6144KiB/125msec) 00:14:17.056 clat (usec): min=62, max=326, avg=79.38, stdev=18.84 00:14:17.056 lat (usec): min=62, max=327, avg=79.49, stdev=18.86 00:14:17.056 clat percentiles (usec): 00:14:17.056 | 1.00th=[ 64], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 69], 00:14:17.056 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:14:17.056 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 112], 00:14:17.056 | 99.00th=[ 141], 99.50th=[ 172], 99.90th=[ 302], 99.95th=[ 326], 00:14:17.056 | 99.99th=[ 326] 00:14:17.056 lat (usec) : 100=90.62%, 250=9.18%, 500=0.20% 00:14:17.056 cpu : usr=3.23%, sys=8.06%, ctx=1569, majf=0, minf=10 00:14:17.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.056 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:17.056 00:14:17.056 Run status group 0 (all jobs): 00:14:17.056 READ: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=6144KiB (6291kB), run=125-125msec 00:14:17.056 00:14:17.056 Disk stats (read/write): 00:14:17.056 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:17.056 12:33:59 -- lvol/snapshot_clone.sh@272 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.056 12:33:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.056 12:33:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.056 12:33:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.056 12:33:59 -- bdev/nbd_common.sh@51 -- # local i 00:14:17.056 12:33:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.056 12:33:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@41 -- # break 00:14:17.315 12:33:59 -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.315 12:33:59 -- lvol/snapshot_clone.sh@275 -- # rpc_cmd bdev_lvol_delete d9158845-6b6a-460e-9b14-c7ca894f3b8e 00:14:17.315 12:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.315 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:14:17.315 12:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.315 12:33:59 -- lvol/snapshot_clone.sh@276 -- # rpc_cmd bdev_lvol_delete_lvstore -u f098f8e6-0c2a-4366-a7e6-e7656d6e2911 00:14:17.315 12:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.315 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:14:17.315 12:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.315 
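The cleanup for this test case continues below with the malloc bdev; taken together, the teardown traced here reduces to the rpc.py calls sketched below (UUIDs copied from the log above, rpc.py path shortened, and the jq checks mirroring what check_leftover_devices does — a reference sketch, not the literal helper):
  # drop the inflated clone, then its lvol store, then the malloc bdev backing it
  scripts/rpc.py bdev_lvol_delete d9158845-6b6a-460e-9b14-c7ca894f3b8e
  scripts/rpc.py bdev_lvol_delete_lvstore -u f098f8e6-0c2a-4366-a7e6-e7656d6e2911
  scripts/rpc.py bdev_malloc_delete Malloc4
  # check_leftover_devices then expects both lists to come back empty
  scripts/rpc.py bdev_get_bdevs | jq length          # expected: 0
  scripts/rpc.py bdev_lvol_get_lvstores | jq length  # expected: 0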
12:33:59 -- lvol/snapshot_clone.sh@277 -- # rpc_cmd bdev_malloc_delete Malloc4 00:14:17.315 12:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.315 12:33:59 -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.883 12:34:00 -- lvol/snapshot_clone.sh@278 -- # check_leftover_devices 00:14:17.883 12:34:00 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:17.883 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.883 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.883 12:34:00 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:17.883 12:34:00 -- lvol/common.sh@26 -- # jq length 00:14:17.883 12:34:00 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:17.883 12:34:00 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:17.883 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.883 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.883 12:34:00 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:17.883 12:34:00 -- lvol/common.sh@28 -- # jq length 00:14:17.883 ************************************ 00:14:17.883 END TEST test_clone_inflate 00:14:17.883 ************************************ 00:14:17.883 12:34:00 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:17.883 00:14:17.883 real 0m6.042s 00:14:17.883 user 0m2.326s 00:14:17.883 sys 0m0.639s 00:14:17.883 12:34:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.883 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 12:34:00 -- lvol/snapshot_clone.sh@613 -- # run_test test_clone_decouple_parent test_clone_decouple_parent 00:14:17.883 12:34:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:17.883 12:34:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:17.883 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 ************************************ 00:14:17.883 START TEST test_clone_decouple_parent 00:14:17.883 ************************************ 00:14:17.883 12:34:00 -- common/autotest_common.sh@1104 -- # test_clone_decouple_parent 00:14:17.883 12:34:00 -- lvol/snapshot_clone.sh@285 -- # rpc_cmd bdev_malloc_create 128 512 00:14:17.883 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.883 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.142 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@285 -- # malloc_name=Malloc5 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@286 -- # rpc_cmd bdev_lvol_create_lvstore Malloc5 lvs_test 00:14:18.142 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.142 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.142 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@286 -- # lvs_uuid=52a6beb0-c037-4071-9e5b-0eab045ba89a 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@289 -- # lvol_size_mb=20 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@290 -- # rpc_cmd bdev_lvol_create -u 52a6beb0-c037-4071-9e5b-0eab045ba89a lvol_test 20 -t 00:14:18.142 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.142 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.142 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.142 12:34:00 -- 
lvol/snapshot_clone.sh@290 -- # lvol_uuid=cb505df3-843a-40b5-8934-3ebca7bfcdf7 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@291 -- # rpc_cmd bdev_get_bdevs -b cb505df3-843a-40b5-8934-3ebca7bfcdf7 00:14:18.142 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.142 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.142 12:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@291 -- # lvol='[ 00:14:18.142 { 00:14:18.142 "name": "cb505df3-843a-40b5-8934-3ebca7bfcdf7", 00:14:18.142 "aliases": [ 00:14:18.142 "lvs_test/lvol_test" 00:14:18.142 ], 00:14:18.142 "product_name": "Logical Volume", 00:14:18.142 "block_size": 512, 00:14:18.142 "num_blocks": 40960, 00:14:18.142 "uuid": "cb505df3-843a-40b5-8934-3ebca7bfcdf7", 00:14:18.142 "assigned_rate_limits": { 00:14:18.142 "rw_ios_per_sec": 0, 00:14:18.142 "rw_mbytes_per_sec": 0, 00:14:18.142 "r_mbytes_per_sec": 0, 00:14:18.142 "w_mbytes_per_sec": 0 00:14:18.142 }, 00:14:18.142 "claimed": false, 00:14:18.142 "zoned": false, 00:14:18.142 "supported_io_types": { 00:14:18.142 "read": true, 00:14:18.142 "write": true, 00:14:18.142 "unmap": true, 00:14:18.142 "write_zeroes": true, 00:14:18.142 "flush": false, 00:14:18.142 "reset": true, 00:14:18.142 "compare": false, 00:14:18.142 "compare_and_write": false, 00:14:18.142 "abort": false, 00:14:18.142 "nvme_admin": false, 00:14:18.142 "nvme_io": false 00:14:18.142 }, 00:14:18.142 "memory_domains": [ 00:14:18.142 { 00:14:18.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.142 "dma_device_type": 2 00:14:18.142 } 00:14:18.142 ], 00:14:18.142 "driver_specific": { 00:14:18.142 "lvol": { 00:14:18.142 "lvol_store_uuid": "52a6beb0-c037-4071-9e5b-0eab045ba89a", 00:14:18.142 "base_bdev": "Malloc5", 00:14:18.142 "thin_provision": true, 00:14:18.142 "snapshot": false, 00:14:18.142 "clone": false, 00:14:18.142 "esnap_clone": false 00:14:18.142 } 00:14:18.142 } 00:14:18.142 } 00:14:18.142 ]' 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@294 -- # rpc_cmd bdev_lvol_decouple_parent lvs_test/lvol_test 00:14:18.142 12:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.142 12:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.142 [2024-10-01 12:34:00.474458] blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:14:18.142 [2024-10-01 12:34:00.474523] lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:14:18.142 request: 00:14:18.142 { 00:14:18.142 "name": "lvs_test/lvol_test", 00:14:18.142 "method": "bdev_lvol_decouple_parent", 00:14:18.142 "req_id": 1 00:14:18.142 } 00:14:18.142 Got JSON-RPC error response 00:14:18.142 response: 00:14:18.142 { 00:14:18.142 "code": -32602, 00:14:18.142 "message": "Invalid argument" 00:14:18.142 } 00:14:18.142 12:34:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:18.142 12:34:00 -- lvol/snapshot_clone.sh@297 -- # nbd_start_disks /var/tmp/spdk.sock cb505df3-843a-40b5-8934-3ebca7bfcdf7 /dev/nbd0 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('cb505df3-843a-40b5-8934-3ebca7bfcdf7') 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@12 -- # local i 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.142 12:34:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk cb505df3-843a-40b5-8934-3ebca7bfcdf7 /dev/nbd0 00:14:18.401 /dev/nbd0 00:14:18.401 12:34:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.401 12:34:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.401 12:34:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:18.401 12:34:00 -- common/autotest_common.sh@857 -- # local i 00:14:18.401 12:34:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:18.401 12:34:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:18.401 12:34:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:18.401 12:34:00 -- common/autotest_common.sh@861 -- # break 00:14:18.401 12:34:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:18.401 12:34:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:18.401 12:34:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:18.401 1+0 records in 00:14:18.401 1+0 records out 00:14:18.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300894 s, 13.6 MB/s 00:14:18.401 12:34:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:18.401 12:34:00 -- common/autotest_common.sh@874 -- # size=4096 00:14:18.401 12:34:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:18.401 12:34:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:18.401 12:34:00 -- common/autotest_common.sh@877 -- # return 0 00:14:18.401 12:34:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.401 12:34:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.401 12:34:00 -- lvol/snapshot_clone.sh@298 -- # begin_fill=0 00:14:18.401 12:34:00 -- lvol/snapshot_clone.sh@299 -- # end_fill=16777216 00:14:18.401 12:34:00 -- lvol/snapshot_clone.sh@300 -- # run_fio_test /dev/nbd0 0 16777216 write 0xdd 00:14:18.401 12:34:00 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:18.401 12:34:00 -- lvol/common.sh@41 -- # local offset=0 00:14:18.401 12:34:00 -- lvol/common.sh@42 -- # local size=16777216 00:14:18.401 12:34:00 -- lvol/common.sh@43 -- # local rw=write 
00:14:18.401 12:34:00 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:18.401 12:34:00 -- lvol/common.sh@45 -- # local extra_params= 00:14:18.401 12:34:00 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:18.401 12:34:00 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:18.401 12:34:00 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:18.401 12:34:00 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=16777216 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:18.401 12:34:00 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=16777216 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:18.401 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:18.401 fio-3.35 00:14:18.401 Starting 1 process 00:14:19.335 00:14:19.335 fio_test: (groupid=0, jobs=1): err= 0: pid=61447: Tue Oct 1 12:34:01 2024 00:14:19.335 read: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(16.0MiB/307msec) 00:14:19.335 clat (usec): min=62, max=651, avg=73.57, stdev=17.72 00:14:19.335 lat (usec): min=62, max=651, avg=73.65, stdev=17.73 00:14:19.335 clat percentiles (usec): 00:14:19.335 | 1.00th=[ 64], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 66], 00:14:19.335 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:14:19.335 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 97], 00:14:19.335 | 99.00th=[ 114], 99.50th=[ 128], 99.90th=[ 249], 99.95th=[ 400], 00:14:19.335 | 99.99th=[ 652] 00:14:19.335 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(16.0MiB/356msec); 0 zone resets 00:14:19.335 clat (usec): min=59, max=635, avg=85.23, stdev=18.89 00:14:19.335 lat (usec): min=60, max=656, avg=86.05, stdev=19.18 00:14:19.335 clat percentiles (usec): 00:14:19.335 | 1.00th=[ 61], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 71], 00:14:19.335 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 86], 00:14:19.335 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 117], 00:14:19.335 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 174], 99.95th=[ 233], 00:14:19.335 | 99.99th=[ 635] 00:14:19.335 bw ( KiB/s): min=32768, max=32768, per=71.20%, avg=32768.00, stdev= 0.00, samples=1 00:14:19.335 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:14:19.335 lat (usec) : 100=90.53%, 250=9.41%, 500=0.02%, 750=0.04% 00:14:19.335 cpu : usr=4.38%, sys=7.10%, ctx=8197, majf=0, minf=122 00:14:19.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:19.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.335 issued rwts: total=4096,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.335 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:19.335 00:14:19.335 Run status group 0 (all jobs): 00:14:19.335 READ: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=16.0MiB (16.8MB), run=307-307msec 00:14:19.335 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=16.0MiB (16.8MB), run=356-356msec 00:14:19.335 00:14:19.335 Disk stats (read/write): 00:14:19.335 nbd0: ios=3845/4096, merge=0/0, ticks=262/315, in_queue=576, util=86.52% 00:14:19.335 12:34:01 -- lvol/snapshot_clone.sh@303 -- # rpc_cmd bdev_lvol_snapshot 
lvs_test/lvol_test lvol_snapshot 00:14:19.335 12:34:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.335 12:34:01 -- common/autotest_common.sh@10 -- # set +x 00:14:19.335 12:34:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.335 12:34:01 -- lvol/snapshot_clone.sh@303 -- # snapshot_uuid=cca2a86a-14e4-49d9-b691-25c3431b43cc 00:14:19.335 12:34:01 -- lvol/snapshot_clone.sh@306 -- # start_fill=4194304 00:14:19.335 12:34:01 -- lvol/snapshot_clone.sh@307 -- # fill_range=4194304 00:14:19.335 12:34:01 -- lvol/snapshot_clone.sh@308 -- # run_fio_test /dev/nbd0 4194304 4194304 write 0xcc 00:14:19.335 12:34:01 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:19.335 12:34:01 -- lvol/common.sh@41 -- # local offset=4194304 00:14:19.335 12:34:01 -- lvol/common.sh@42 -- # local size=4194304 00:14:19.335 12:34:01 -- lvol/common.sh@43 -- # local rw=write 00:14:19.335 12:34:01 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:19.336 12:34:01 -- lvol/common.sh@45 -- # local extra_params= 00:14:19.336 12:34:01 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:19.336 12:34:01 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:19.336 12:34:01 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:19.336 12:34:01 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:19.336 12:34:01 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:19.336 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:19.336 fio-3.35 00:14:19.336 Starting 1 process 00:14:19.903 00:14:19.903 fio_test: (groupid=0, jobs=1): err= 0: pid=61462: Tue Oct 1 12:34:02 2024 00:14:19.903 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(4096KiB/98msec) 00:14:19.903 clat (usec): min=64, max=519, avg=93.65, stdev=20.27 00:14:19.903 lat (usec): min=64, max=519, avg=93.76, stdev=20.27 00:14:19.903 clat percentiles (usec): 00:14:19.903 | 1.00th=[ 71], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 84], 00:14:19.903 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 93], 00:14:19.903 | 70.00th=[ 97], 80.00th=[ 102], 90.00th=[ 114], 95.00th=[ 120], 00:14:19.903 | 99.00th=[ 137], 99.50th=[ 147], 99.90th=[ 265], 99.95th=[ 519], 00:14:19.903 | 99.99th=[ 519] 00:14:19.903 write: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(4096KiB/101msec); 0 zone resets 00:14:19.903 clat (usec): min=71, max=1334, avg=95.56, stdev=40.51 00:14:19.903 lat (usec): min=71, max=1355, avg=96.51, stdev=41.18 00:14:19.903 clat percentiles (usec): 00:14:19.903 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 00:14:19.903 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 92], 00:14:19.903 | 70.00th=[ 98], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 118], 00:14:19.903 | 99.00th=[ 133], 99.50th=[ 143], 99.90th=[ 182], 99.95th=[ 1336], 00:14:19.903 | 99.99th=[ 1336] 00:14:19.903 lat (usec) : 100=74.46%, 250=25.39%, 500=0.05%, 750=0.05% 00:14:19.903 lat (msec) : 2=0.05% 00:14:19.903 cpu : usr=1.52%, sys=10.15%, ctx=2050, majf=0, minf=45 00:14:19.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:19.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:14:19.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.903 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:19.903 00:14:19.903 Run status group 0 (all jobs): 00:14:19.903 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=4096KiB (4194kB), run=98-98msec 00:14:19.903 WRITE: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=4096KiB (4194kB), run=101-101msec 00:14:19.903 00:14:19.903 Disk stats (read/write): 00:14:19.903 nbd0: ios=423/1024, merge=0/0, ticks=40/88, in_queue=127, util=58.33% 00:14:19.903 12:34:02 -- lvol/snapshot_clone.sh@309 -- # start_fill=12582912 00:14:19.903 12:34:02 -- lvol/snapshot_clone.sh@310 -- # run_fio_test /dev/nbd0 12582912 4194304 write 0xcc 00:14:19.903 12:34:02 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:19.903 12:34:02 -- lvol/common.sh@41 -- # local offset=12582912 00:14:19.903 12:34:02 -- lvol/common.sh@42 -- # local size=4194304 00:14:19.903 12:34:02 -- lvol/common.sh@43 -- # local rw=write 00:14:19.903 12:34:02 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:19.903 12:34:02 -- lvol/common.sh@45 -- # local extra_params= 00:14:19.903 12:34:02 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:19.903 12:34:02 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:19.903 12:34:02 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:19.903 12:34:02 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:19.903 12:34:02 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:19.903 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:19.903 fio-3.35 00:14:19.903 Starting 1 process 00:14:20.162 00:14:20.162 fio_test: (groupid=0, jobs=1): err= 0: pid=61466: Tue Oct 1 12:34:02 2024 00:14:20.162 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(4096KiB/94msec) 00:14:20.162 clat (usec): min=62, max=477, avg=89.50, stdev=24.99 00:14:20.162 lat (usec): min=62, max=477, avg=89.60, stdev=25.00 00:14:20.162 clat percentiles (usec): 00:14:20.162 | 1.00th=[ 63], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 70], 00:14:20.162 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 91], 00:14:20.162 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 112], 95.00th=[ 119], 00:14:20.162 | 99.00th=[ 139], 99.50th=[ 159], 99.90th=[ 433], 99.95th=[ 478], 00:14:20.162 | 99.99th=[ 478] 00:14:20.162 write: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(4096KiB/101msec); 0 zone resets 00:14:20.162 clat (usec): min=67, max=1293, avg=96.00, stdev=44.22 00:14:20.162 lat (usec): min=68, max=1337, avg=97.04, stdev=45.49 00:14:20.162 clat percentiles (usec): 00:14:20.162 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 82], 00:14:20.162 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 96], 00:14:20.162 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 125], 00:14:20.162 | 99.00th=[ 145], 99.50th=[ 153], 99.90th=[ 668], 99.95th=[ 1287], 00:14:20.162 | 99.99th=[ 1287] 00:14:20.162 lat (usec) : 100=73.44%, 250=26.32%, 500=0.15%, 750=0.05% 00:14:20.162 lat (msec) : 2=0.05% 
00:14:20.162 cpu : usr=3.63%, sys=8.81%, ctx=2049, majf=0, minf=43 00:14:20.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.162 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.162 00:14:20.162 Run status group 0 (all jobs): 00:14:20.162 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=4096KiB (4194kB), run=94-94msec 00:14:20.162 WRITE: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=4096KiB (4194kB), run=101-101msec 00:14:20.162 00:14:20.162 Disk stats (read/write): 00:14:20.162 nbd0: ios=402/1024, merge=0/0, ticks=38/87, in_queue=125, util=57.68% 00:14:20.162 12:34:02 -- lvol/snapshot_clone.sh@313 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot2 00:14:20.162 12:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.162 12:34:02 -- common/autotest_common.sh@10 -- # set +x 00:14:20.162 12:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.162 12:34:02 -- lvol/snapshot_clone.sh@313 -- # snapshot_uuid2=6f86c434-172a-4218-a2ed-6746289cb459 00:14:20.162 12:34:02 -- lvol/snapshot_clone.sh@316 -- # start_fill=4194304 00:14:20.162 12:34:02 -- lvol/snapshot_clone.sh@317 -- # run_fio_test /dev/nbd0 4194304 4194304 write 0xee 00:14:20.162 12:34:02 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:20.162 12:34:02 -- lvol/common.sh@41 -- # local offset=4194304 00:14:20.162 12:34:02 -- lvol/common.sh@42 -- # local size=4194304 00:14:20.162 12:34:02 -- lvol/common.sh@43 -- # local rw=write 00:14:20.162 12:34:02 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:20.162 12:34:02 -- lvol/common.sh@45 -- # local extra_params= 00:14:20.162 12:34:02 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:20.162 12:34:02 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:20.162 12:34:02 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:20.162 12:34:02 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:20.162 12:34:02 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:20.419 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:20.419 fio-3.35 00:14:20.419 Starting 1 process 00:14:20.678 00:14:20.678 fio_test: (groupid=0, jobs=1): err= 0: pid=61474: Tue Oct 1 12:34:03 2024 00:14:20.678 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(4096KiB/94msec) 00:14:20.678 clat (usec): min=69, max=381, avg=90.30, stdev=21.78 00:14:20.678 lat (usec): min=70, max=381, avg=90.39, stdev=21.80 00:14:20.678 clat percentiles (usec): 00:14:20.678 | 1.00th=[ 76], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 77], 00:14:20.678 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 87], 00:14:20.678 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 113], 95.00th=[ 123], 00:14:20.678 | 99.00th=[ 151], 99.50th=[ 178], 99.90th=[ 371], 99.95th=[ 383], 00:14:20.678 | 99.99th=[ 383] 00:14:20.678 write: 
IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(4096KiB/100msec); 0 zone resets 00:14:20.678 clat (usec): min=66, max=2875, avg=95.14, stdev=123.57 00:14:20.678 lat (usec): min=67, max=2876, avg=96.04, stdev=123.97 00:14:20.678 clat percentiles (usec): 00:14:20.678 | 1.00th=[ 72], 5.00th=[ 73], 10.00th=[ 73], 20.00th=[ 74], 00:14:20.678 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 84], 00:14:20.678 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 124], 00:14:20.678 | 99.00th=[ 153], 99.50th=[ 660], 99.90th=[ 1778], 99.95th=[ 2868], 00:14:20.678 | 99.99th=[ 2868] 00:14:20.678 lat (usec) : 100=80.42%, 250=19.09%, 500=0.20%, 750=0.05% 00:14:20.678 lat (msec) : 2=0.20%, 4=0.05% 00:14:20.678 cpu : usr=1.55%, sys=8.81%, ctx=2049, majf=0, minf=43 00:14:20.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.678 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.678 00:14:20.678 Run status group 0 (all jobs): 00:14:20.678 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=4096KiB (4194kB), run=94-94msec 00:14:20.678 WRITE: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=4096KiB (4194kB), run=100-100msec 00:14:20.678 00:14:20.678 Disk stats (read/write): 00:14:20.678 nbd0: ios=427/1024, merge=0/0, ticks=41/89, in_queue=130, util=57.26% 00:14:20.678 12:34:03 -- lvol/snapshot_clone.sh@320 -- # pattern=("0xdd" "0xee" "0xdd" "0xcc" "0x00") 00:14:20.678 12:34:03 -- lvol/snapshot_clone.sh@321 -- # for i in "${!pattern[@]}" 00:14:20.679 12:34:03 -- lvol/snapshot_clone.sh@322 -- # start_fill=0 00:14:20.679 12:34:03 -- lvol/snapshot_clone.sh@323 -- # run_fio_test /dev/nbd0 0 4194304 read 0xdd 00:14:20.679 12:34:03 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:20.679 12:34:03 -- lvol/common.sh@41 -- # local offset=0 00:14:20.679 12:34:03 -- lvol/common.sh@42 -- # local size=4194304 00:14:20.679 12:34:03 -- lvol/common.sh@43 -- # local rw=read 00:14:20.679 12:34:03 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:20.679 12:34:03 -- lvol/common.sh@45 -- # local extra_params= 00:14:20.679 12:34:03 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:20.679 12:34:03 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:20.679 12:34:03 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:20.679 12:34:03 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:20.679 12:34:03 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:20.679 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:20.679 fio-3.35 00:14:20.679 Starting 1 process 00:14:20.937 00:14:20.937 fio_test: (groupid=0, jobs=1): err= 0: pid=61484: Tue Oct 1 12:34:03 2024 00:14:20.937 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(4096KiB/101msec) 00:14:20.937 clat (usec): min=78, max=302, avg=96.15, stdev=17.58 00:14:20.937 lat (usec): min=78, max=304, avg=96.28, stdev=17.60 
00:14:20.937 clat percentiles (usec): 00:14:20.937 | 1.00th=[ 80], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 82], 00:14:20.938 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 98], 00:14:20.938 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 130], 00:14:20.938 | 99.00th=[ 147], 99.50th=[ 159], 99.90th=[ 202], 99.95th=[ 302], 00:14:20.938 | 99.99th=[ 302] 00:14:20.938 lat (usec) : 100=65.23%, 250=34.67%, 500=0.10% 00:14:20.938 cpu : usr=1.00%, sys=10.00%, ctx=1026, majf=0, minf=9 00:14:20.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.938 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.938 00:14:20.938 Run status group 0 (all jobs): 00:14:20.938 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=4096KiB (4194kB), run=101-101msec 00:14:20.938 00:14:20.938 Disk stats (read/write): 00:14:20.938 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:20.938 12:34:03 -- lvol/snapshot_clone.sh@321 -- # for i in "${!pattern[@]}" 00:14:20.938 12:34:03 -- lvol/snapshot_clone.sh@322 -- # start_fill=4194304 00:14:20.938 12:34:03 -- lvol/snapshot_clone.sh@323 -- # run_fio_test /dev/nbd0 4194304 4194304 read 0xee 00:14:20.938 12:34:03 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:20.938 12:34:03 -- lvol/common.sh@41 -- # local offset=4194304 00:14:20.938 12:34:03 -- lvol/common.sh@42 -- # local size=4194304 00:14:20.938 12:34:03 -- lvol/common.sh@43 -- # local rw=read 00:14:20.938 12:34:03 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:20.938 12:34:03 -- lvol/common.sh@45 -- # local extra_params= 00:14:20.938 12:34:03 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:20.938 12:34:03 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:20.938 12:34:03 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:20.938 12:34:03 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:20.938 12:34:03 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:21.197 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:21.197 fio-3.35 00:14:21.197 Starting 1 process 00:14:21.197 00:14:21.197 fio_test: (groupid=0, jobs=1): err= 0: pid=61487: Tue Oct 1 12:34:03 2024 00:14:21.197 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(4096KiB/94msec) 00:14:21.197 clat (usec): min=74, max=298, avg=89.85, stdev=16.88 00:14:21.197 lat (usec): min=74, max=299, avg=89.98, stdev=16.90 00:14:21.197 clat percentiles (usec): 00:14:21.197 | 1.00th=[ 76], 5.00th=[ 77], 10.00th=[ 77], 20.00th=[ 78], 00:14:21.197 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 91], 00:14:21.197 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 111], 95.00th=[ 124], 00:14:21.197 | 99.00th=[ 145], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 297], 00:14:21.197 | 99.99th=[ 297] 00:14:21.197 lat (usec) : 100=78.81%, 250=21.09%, 500=0.10% 00:14:21.197 cpu : usr=2.15%, 
sys=10.75%, ctx=1024, majf=0, minf=9 00:14:21.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.197 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.197 00:14:21.197 Run status group 0 (all jobs): 00:14:21.197 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=4096KiB (4194kB), run=94-94msec 00:14:21.197 00:14:21.197 Disk stats (read/write): 00:14:21.197 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:21.197 12:34:03 -- lvol/snapshot_clone.sh@321 -- # for i in "${!pattern[@]}" 00:14:21.197 12:34:03 -- lvol/snapshot_clone.sh@322 -- # start_fill=8388608 00:14:21.197 12:34:03 -- lvol/snapshot_clone.sh@323 -- # run_fio_test /dev/nbd0 8388608 4194304 read 0xdd 00:14:21.197 12:34:03 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:21.197 12:34:03 -- lvol/common.sh@41 -- # local offset=8388608 00:14:21.197 12:34:03 -- lvol/common.sh@42 -- # local size=4194304 00:14:21.197 12:34:03 -- lvol/common.sh@43 -- # local rw=read 00:14:21.197 12:34:03 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:21.197 12:34:03 -- lvol/common.sh@45 -- # local extra_params= 00:14:21.197 12:34:03 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:21.197 12:34:03 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:21.197 12:34:03 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:21.197 12:34:03 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:21.197 12:34:03 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:21.455 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:21.455 fio-3.35 00:14:21.455 Starting 1 process 00:14:21.714 00:14:21.714 fio_test: (groupid=0, jobs=1): err= 0: pid=61494: Tue Oct 1 12:34:04 2024 00:14:21.714 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(4096KiB/102msec) 00:14:21.714 clat (usec): min=79, max=308, avg=97.56, stdev=19.54 00:14:21.714 lat (usec): min=79, max=309, avg=97.71, stdev=19.57 00:14:21.714 clat percentiles (usec): 00:14:21.714 | 1.00th=[ 80], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 82], 00:14:21.714 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 100], 00:14:21.714 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 125], 95.00th=[ 135], 00:14:21.714 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 190], 99.95th=[ 310], 00:14:21.714 | 99.99th=[ 310] 00:14:21.714 lat (usec) : 100=62.70%, 250=37.21%, 500=0.10% 00:14:21.714 cpu : usr=1.98%, sys=11.88%, ctx=2048, majf=0, minf=9 00:14:21.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.714 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.714 00:14:21.714 
Run status group 0 (all jobs): 00:14:21.714 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=4096KiB (4194kB), run=102-102msec 00:14:21.714 00:14:21.714 Disk stats (read/write): 00:14:21.714 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:21.714 12:34:04 -- lvol/snapshot_clone.sh@321 -- # for i in "${!pattern[@]}" 00:14:21.714 12:34:04 -- lvol/snapshot_clone.sh@322 -- # start_fill=12582912 00:14:21.714 12:34:04 -- lvol/snapshot_clone.sh@323 -- # run_fio_test /dev/nbd0 12582912 4194304 read 0xcc 00:14:21.714 12:34:04 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:21.714 12:34:04 -- lvol/common.sh@41 -- # local offset=12582912 00:14:21.714 12:34:04 -- lvol/common.sh@42 -- # local size=4194304 00:14:21.714 12:34:04 -- lvol/common.sh@43 -- # local rw=read 00:14:21.714 12:34:04 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:21.714 12:34:04 -- lvol/common.sh@45 -- # local extra_params= 00:14:21.714 12:34:04 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:21.714 12:34:04 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:21.714 12:34:04 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:21.714 12:34:04 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:21.714 12:34:04 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:21.714 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:21.714 fio-3.35 00:14:21.714 Starting 1 process 00:14:21.972 00:14:21.972 fio_test: (groupid=0, jobs=1): err= 0: pid=61504: Tue Oct 1 12:34:04 2024 00:14:21.972 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(4096KiB/96msec) 00:14:21.972 clat (usec): min=76, max=270, avg=91.97, stdev=15.37 00:14:21.972 lat (usec): min=76, max=271, avg=92.10, stdev=15.39 00:14:21.972 clat percentiles (usec): 00:14:21.972 | 1.00th=[ 78], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 80], 00:14:21.972 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 94], 00:14:21.972 | 70.00th=[ 98], 80.00th=[ 104], 90.00th=[ 114], 95.00th=[ 119], 00:14:21.972 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 165], 99.95th=[ 269], 00:14:21.972 | 99.99th=[ 269] 00:14:21.972 lat (usec) : 100=73.83%, 250=26.07%, 500=0.10% 00:14:21.972 cpu : usr=4.21%, sys=6.32%, ctx=1024, majf=0, minf=10 00:14:21.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.972 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.972 00:14:21.972 Run status group 0 (all jobs): 00:14:21.972 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=4096KiB (4194kB), run=96-96msec 00:14:21.972 00:14:21.972 Disk stats (read/write): 00:14:21.972 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:21.972 12:34:04 -- lvol/snapshot_clone.sh@321 -- # for i in "${!pattern[@]}" 00:14:21.972 12:34:04 -- lvol/snapshot_clone.sh@322 -- # start_fill=16777216 00:14:21.972 12:34:04 -- 
lvol/snapshot_clone.sh@323 -- # run_fio_test /dev/nbd0 16777216 4194304 read 0x00 00:14:21.972 12:34:04 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:21.972 12:34:04 -- lvol/common.sh@41 -- # local offset=16777216 00:14:21.972 12:34:04 -- lvol/common.sh@42 -- # local size=4194304 00:14:21.972 12:34:04 -- lvol/common.sh@43 -- # local rw=read 00:14:21.972 12:34:04 -- lvol/common.sh@44 -- # local pattern=0x00 00:14:21.972 12:34:04 -- lvol/common.sh@45 -- # local extra_params= 00:14:21.972 12:34:04 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:21.972 12:34:04 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:14:21.972 12:34:04 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:14:21.973 12:34:04 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:14:21.973 12:34:04 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:14:21.973 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:21.973 fio-3.35 00:14:21.973 Starting 1 process 00:14:22.231 00:14:22.231 fio_test: (groupid=0, jobs=1): err= 0: pid=61507: Tue Oct 1 12:34:04 2024 00:14:22.231 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(4096KiB/81msec) 00:14:22.231 clat (usec): min=63, max=218, avg=77.47, stdev=15.78 00:14:22.231 lat (usec): min=63, max=219, avg=77.60, stdev=15.81 00:14:22.231 clat percentiles (usec): 00:14:22.231 | 1.00th=[ 65], 5.00th=[ 65], 10.00th=[ 65], 20.00th=[ 66], 00:14:22.231 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 77], 00:14:22.231 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 111], 00:14:22.231 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 149], 99.95th=[ 219], 00:14:22.231 | 99.99th=[ 219] 00:14:22.231 lat (usec) : 100=90.53%, 250=9.47% 00:14:22.231 cpu : usr=2.50%, sys=12.50%, ctx=2048, majf=0, minf=9 00:14:22.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.231 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.231 00:14:22.231 Run status group 0 (all jobs): 00:14:22.231 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=4096KiB (4194kB), run=81-81msec 00:14:22.231 00:14:22.231 Disk stats (read/write): 00:14:22.231 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:22.231 12:34:04 -- lvol/snapshot_clone.sh@329 -- # rpc_cmd bdev_lvol_decouple_parent lvs_test/lvol_test 00:14:22.231 12:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.231 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:14:22.231 12:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.231 12:34:04 -- lvol/snapshot_clone.sh@330 -- # rpc_cmd bdev_get_bdevs -b cb505df3-843a-40b5-8934-3ebca7bfcdf7 00:14:22.231 12:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.231 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:14:22.231 12:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
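The decouple step traced just above (snapshot_clone.sh@329) can be replayed outside the test harness. The sketch below is only illustrative: it assumes rpc_cmd in the test scripts is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock (the explicit rpc.py invocation later in this log suggests as much) and it reuses the lvol UUID printed by this run.

  # decouple the thin-provisioned clone from its immediate parent, then dump it
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_lvol_decouple_parent lvs_test/lvol_test
  # the JSON dumped in the next log records comes from this call; after one
  # decouple the volume still reports "clone": true with base_snapshot
  # "lvol_snapshot"
  $RPC bdev_get_bdevs -b cb505df3-843a-40b5-8934-3ebca7bfcdf7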
00:14:22.231 12:34:04 -- lvol/snapshot_clone.sh@330 -- # lvol='[ 00:14:22.231 { 00:14:22.231 "name": "cb505df3-843a-40b5-8934-3ebca7bfcdf7", 00:14:22.231 "aliases": [ 00:14:22.231 "lvs_test/lvol_test" 00:14:22.231 ], 00:14:22.231 "product_name": "Logical Volume", 00:14:22.231 "block_size": 512, 00:14:22.231 "num_blocks": 40960, 00:14:22.231 "uuid": "cb505df3-843a-40b5-8934-3ebca7bfcdf7", 00:14:22.231 "assigned_rate_limits": { 00:14:22.231 "rw_ios_per_sec": 0, 00:14:22.231 "rw_mbytes_per_sec": 0, 00:14:22.231 "r_mbytes_per_sec": 0, 00:14:22.231 "w_mbytes_per_sec": 0 00:14:22.231 }, 00:14:22.231 "claimed": false, 00:14:22.231 "zoned": false, 00:14:22.231 "supported_io_types": { 00:14:22.231 "read": true, 00:14:22.231 "write": true, 00:14:22.231 "unmap": true, 00:14:22.231 "write_zeroes": true, 00:14:22.231 "flush": false, 00:14:22.231 "reset": true, 00:14:22.231 "compare": false, 00:14:22.231 "compare_and_write": false, 00:14:22.231 "abort": false, 00:14:22.231 "nvme_admin": false, 00:14:22.231 "nvme_io": false 00:14:22.231 }, 00:14:22.231 "memory_domains": [ 00:14:22.231 { 00:14:22.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.231 "dma_device_type": 2 00:14:22.231 } 00:14:22.231 ], 00:14:22.231 "driver_specific": { 00:14:22.231 "lvol": { 00:14:22.231 "lvol_store_uuid": "52a6beb0-c037-4071-9e5b-0eab045ba89a", 00:14:22.231 "base_bdev": "Malloc5", 00:14:22.231 "thin_provision": true, 00:14:22.231 "snapshot": false, 00:14:22.231 "clone": true, 00:14:22.231 "base_snapshot": "lvol_snapshot", 00:14:22.231 "esnap_clone": false 00:14:22.231 } 00:14:22.231 } 00:14:22.231 } 00:14:22.231 ]' 00:14:22.231 12:34:04 -- lvol/snapshot_clone.sh@331 -- # rpc_cmd bdev_get_bdevs -b cca2a86a-14e4-49d9-b691-25c3431b43cc 00:14:22.231 12:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.231 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:14:22.231 12:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.231 12:34:04 -- lvol/snapshot_clone.sh@331 -- # snapshot='[ 00:14:22.231 { 00:14:22.231 "name": "cca2a86a-14e4-49d9-b691-25c3431b43cc", 00:14:22.231 "aliases": [ 00:14:22.231 "lvs_test/lvol_snapshot" 00:14:22.231 ], 00:14:22.231 "product_name": "Logical Volume", 00:14:22.231 "block_size": 512, 00:14:22.231 "num_blocks": 40960, 00:14:22.231 "uuid": "cca2a86a-14e4-49d9-b691-25c3431b43cc", 00:14:22.231 "assigned_rate_limits": { 00:14:22.232 "rw_ios_per_sec": 0, 00:14:22.232 "rw_mbytes_per_sec": 0, 00:14:22.232 "r_mbytes_per_sec": 0, 00:14:22.232 "w_mbytes_per_sec": 0 00:14:22.232 }, 00:14:22.232 "claimed": false, 00:14:22.232 "zoned": false, 00:14:22.232 "supported_io_types": { 00:14:22.232 "read": true, 00:14:22.232 "write": false, 00:14:22.232 "unmap": false, 00:14:22.232 "write_zeroes": false, 00:14:22.232 "flush": false, 00:14:22.232 "reset": true, 00:14:22.232 "compare": false, 00:14:22.232 "compare_and_write": false, 00:14:22.232 "abort": false, 00:14:22.232 "nvme_admin": false, 00:14:22.232 "nvme_io": false 00:14:22.232 }, 00:14:22.232 "memory_domains": [ 00:14:22.232 { 00:14:22.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.232 "dma_device_type": 2 00:14:22.232 } 00:14:22.232 ], 00:14:22.232 "driver_specific": { 00:14:22.232 "lvol": { 00:14:22.232 "lvol_store_uuid": "52a6beb0-c037-4071-9e5b-0eab045ba89a", 00:14:22.232 "base_bdev": "Malloc5", 00:14:22.232 "thin_provision": true, 00:14:22.232 "snapshot": true, 00:14:22.232 "clone": false, 00:14:22.232 "clones": [ 00:14:22.232 "lvol_snapshot2", 00:14:22.232 "lvol_test" 00:14:22.232 ], 00:14:22.232 
"esnap_clone": false 00:14:22.232 } 00:14:22.232 } 00:14:22.232 } 00:14:22.232 ]' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@332 -- # rpc_cmd bdev_get_bdevs -b 6f86c434-172a-4218-a2ed-6746289cb459 00:14:22.491 12:34:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.491 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:14:22.491 12:34:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@332 -- # snapshot2='[ 00:14:22.491 { 00:14:22.491 "name": "6f86c434-172a-4218-a2ed-6746289cb459", 00:14:22.491 "aliases": [ 00:14:22.491 "lvs_test/lvol_snapshot2" 00:14:22.491 ], 00:14:22.491 "product_name": "Logical Volume", 00:14:22.491 "block_size": 512, 00:14:22.491 "num_blocks": 40960, 00:14:22.491 "uuid": "6f86c434-172a-4218-a2ed-6746289cb459", 00:14:22.491 "assigned_rate_limits": { 00:14:22.491 "rw_ios_per_sec": 0, 00:14:22.491 "rw_mbytes_per_sec": 0, 00:14:22.491 "r_mbytes_per_sec": 0, 00:14:22.491 "w_mbytes_per_sec": 0 00:14:22.491 }, 00:14:22.491 "claimed": false, 00:14:22.491 "zoned": false, 00:14:22.491 "supported_io_types": { 00:14:22.491 "read": true, 00:14:22.491 "write": false, 00:14:22.491 "unmap": false, 00:14:22.491 "write_zeroes": false, 00:14:22.491 "flush": false, 00:14:22.491 "reset": true, 00:14:22.491 "compare": false, 00:14:22.491 "compare_and_write": false, 00:14:22.491 "abort": false, 00:14:22.491 "nvme_admin": false, 00:14:22.491 "nvme_io": false 00:14:22.491 }, 00:14:22.491 "memory_domains": [ 00:14:22.491 { 00:14:22.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.491 "dma_device_type": 2 00:14:22.491 } 00:14:22.491 ], 00:14:22.491 "driver_specific": { 00:14:22.491 "lvol": { 00:14:22.491 "lvol_store_uuid": "52a6beb0-c037-4071-9e5b-0eab045ba89a", 00:14:22.491 "base_bdev": "Malloc5", 00:14:22.491 "thin_provision": true, 00:14:22.491 "snapshot": true, 00:14:22.491 "clone": true, 00:14:22.491 "base_snapshot": "lvol_snapshot", 00:14:22.491 "esnap_clone": false 00:14:22.491 } 00:14:22.491 } 00:14:22.491 } 00:14:22.491 ]' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@333 -- # jq '.[].driver_specific.lvol.thin_provision' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@333 -- # '[' true = true ']' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@334 -- # jq '.[].driver_specific.lvol.clone' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@334 -- # '[' true = true ']' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@335 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@335 -- # '[' false = false ']' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@336 -- # jq '.[].driver_specific.lvol.clone' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@336 -- # '[' false = false ']' 00:14:22.491 12:34:04 -- lvol/snapshot_clone.sh@337 -- # jq '.[].driver_specific.lvol.clone' 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@337 -- # '[' true = true ']' 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@338 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@338 -- # '[' true = true ']' 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@341 -- # rpc_cmd bdev_lvol_delete 6f86c434-172a-4218-a2ed-6746289cb459 00:14:22.750 12:34:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.750 12:34:05 -- common/autotest_common.sh@10 -- # set +x 00:14:22.750 12:34:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@344 -- # for i in "${!pattern[@]}" 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@345 -- # 
start_fill=0 00:14:22.750 12:34:05 -- lvol/snapshot_clone.sh@346 -- # run_fio_test /dev/nbd0 0 4194304 read 0xdd 00:14:22.750 12:34:05 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:22.750 12:34:05 -- lvol/common.sh@41 -- # local offset=0 00:14:22.750 12:34:05 -- lvol/common.sh@42 -- # local size=4194304 00:14:22.750 12:34:05 -- lvol/common.sh@43 -- # local rw=read 00:14:22.750 12:34:05 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:22.750 12:34:05 -- lvol/common.sh@45 -- # local extra_params= 00:14:22.750 12:34:05 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:22.750 12:34:05 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:22.750 12:34:05 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:22.750 12:34:05 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:22.750 12:34:05 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:22.750 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:22.750 fio-3.35 00:14:22.750 Starting 1 process 00:14:23.009 00:14:23.009 fio_test: (groupid=0, jobs=1): err= 0: pid=61536: Tue Oct 1 12:34:05 2024 00:14:23.009 read: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(4096KiB/97msec) 00:14:23.009 clat (usec): min=76, max=274, avg=93.13, stdev=15.79 00:14:23.009 lat (usec): min=76, max=275, avg=93.28, stdev=15.82 00:14:23.009 clat percentiles (usec): 00:14:23.009 | 1.00th=[ 78], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 81], 00:14:23.009 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 93], 00:14:23.009 | 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 122], 00:14:23.009 | 99.00th=[ 143], 99.50th=[ 147], 99.90th=[ 174], 99.95th=[ 277], 00:14:23.009 | 99.99th=[ 277] 00:14:23.009 lat (usec) : 100=71.78%, 250=28.12%, 500=0.10% 00:14:23.009 cpu : usr=5.21%, sys=7.29%, ctx=1024, majf=0, minf=9 00:14:23.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.009 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.009 00:14:23.009 Run status group 0 (all jobs): 00:14:23.009 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=4096KiB (4194kB), run=97-97msec 00:14:23.009 00:14:23.009 Disk stats (read/write): 00:14:23.009 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:23.009 12:34:05 -- lvol/snapshot_clone.sh@344 -- # for i in "${!pattern[@]}" 00:14:23.009 12:34:05 -- lvol/snapshot_clone.sh@345 -- # start_fill=4194304 00:14:23.009 12:34:05 -- lvol/snapshot_clone.sh@346 -- # run_fio_test /dev/nbd0 4194304 4194304 read 0xee 00:14:23.009 12:34:05 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:23.009 12:34:05 -- lvol/common.sh@41 -- # local offset=4194304 00:14:23.009 12:34:05 -- lvol/common.sh@42 -- # local size=4194304 00:14:23.009 12:34:05 -- lvol/common.sh@43 -- # local rw=read 00:14:23.009 12:34:05 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:23.009 12:34:05 -- 
lvol/common.sh@45 -- # local extra_params= 00:14:23.009 12:34:05 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:23.009 12:34:05 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:23.009 12:34:05 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:23.009 12:34:05 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:23.009 12:34:05 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:23.009 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:23.009 fio-3.35 00:14:23.009 Starting 1 process 00:14:23.268 00:14:23.268 fio_test: (groupid=0, jobs=1): err= 0: pid=61539: Tue Oct 1 12:34:05 2024 00:14:23.268 read: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(4096KiB/97msec) 00:14:23.268 clat (usec): min=73, max=257, avg=92.83, stdev=17.10 00:14:23.268 lat (usec): min=73, max=258, avg=92.98, stdev=17.15 00:14:23.268 clat percentiles (usec): 00:14:23.268 | 1.00th=[ 76], 5.00th=[ 77], 10.00th=[ 77], 20.00th=[ 79], 00:14:23.268 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 95], 00:14:23.268 | 70.00th=[ 98], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 127], 00:14:23.268 | 99.00th=[ 143], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 258], 00:14:23.268 | 99.99th=[ 258] 00:14:23.268 lat (usec) : 100=75.39%, 250=24.51%, 500=0.10% 00:14:23.268 cpu : usr=2.08%, sys=9.38%, ctx=1024, majf=0, minf=10 00:14:23.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.268 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.268 00:14:23.268 Run status group 0 (all jobs): 00:14:23.268 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=4096KiB (4194kB), run=97-97msec 00:14:23.268 00:14:23.268 Disk stats (read/write): 00:14:23.268 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:23.268 12:34:05 -- lvol/snapshot_clone.sh@344 -- # for i in "${!pattern[@]}" 00:14:23.268 12:34:05 -- lvol/snapshot_clone.sh@345 -- # start_fill=8388608 00:14:23.268 12:34:05 -- lvol/snapshot_clone.sh@346 -- # run_fio_test /dev/nbd0 8388608 4194304 read 0xdd 00:14:23.268 12:34:05 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:23.268 12:34:05 -- lvol/common.sh@41 -- # local offset=8388608 00:14:23.268 12:34:05 -- lvol/common.sh@42 -- # local size=4194304 00:14:23.268 12:34:05 -- lvol/common.sh@43 -- # local rw=read 00:14:23.268 12:34:05 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:23.268 12:34:05 -- lvol/common.sh@45 -- # local extra_params= 00:14:23.268 12:34:05 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:23.268 12:34:05 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:23.268 12:34:05 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:23.268 12:34:05 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=4194304 
--rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:23.268 12:34:05 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:23.527 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:23.527 fio-3.35 00:14:23.527 Starting 1 process 00:14:23.785 00:14:23.785 fio_test: (groupid=0, jobs=1): err= 0: pid=61543: Tue Oct 1 12:34:06 2024 00:14:23.785 read: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(4096KiB/100msec) 00:14:23.785 clat (usec): min=77, max=285, avg=95.44, stdev=17.18 00:14:23.785 lat (usec): min=78, max=286, avg=95.56, stdev=17.20 00:14:23.785 clat percentiles (usec): 00:14:23.785 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 82], 00:14:23.785 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 96], 00:14:23.785 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 128], 00:14:23.785 | 99.00th=[ 147], 99.50th=[ 163], 99.90th=[ 215], 99.95th=[ 285], 00:14:23.785 | 99.99th=[ 285] 00:14:23.785 lat (usec) : 100=68.16%, 250=31.74%, 500=0.10% 00:14:23.785 cpu : usr=1.01%, sys=10.10%, ctx=1025, majf=0, minf=11 00:14:23.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:23.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.785 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:23.785 00:14:23.785 Run status group 0 (all jobs): 00:14:23.786 READ: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=4096KiB (4194kB), run=100-100msec 00:14:23.786 00:14:23.786 Disk stats (read/write): 00:14:23.786 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:23.786 12:34:06 -- lvol/snapshot_clone.sh@344 -- # for i in "${!pattern[@]}" 00:14:23.786 12:34:06 -- lvol/snapshot_clone.sh@345 -- # start_fill=12582912 00:14:23.786 12:34:06 -- lvol/snapshot_clone.sh@346 -- # run_fio_test /dev/nbd0 12582912 4194304 read 0xcc 00:14:23.786 12:34:06 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:23.786 12:34:06 -- lvol/common.sh@41 -- # local offset=12582912 00:14:23.786 12:34:06 -- lvol/common.sh@42 -- # local size=4194304 00:14:23.786 12:34:06 -- lvol/common.sh@43 -- # local rw=read 00:14:23.786 12:34:06 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:23.786 12:34:06 -- lvol/common.sh@45 -- # local extra_params= 00:14:23.786 12:34:06 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:23.786 12:34:06 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:23.786 12:34:06 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:23.786 12:34:06 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:23.786 12:34:06 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:23.786 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:23.786 fio-3.35 
00:14:23.786 Starting 1 process 00:14:24.044 00:14:24.044 fio_test: (groupid=0, jobs=1): err= 0: pid=61556: Tue Oct 1 12:34:06 2024 00:14:24.044 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(4096KiB/96msec) 00:14:24.044 clat (usec): min=75, max=259, avg=91.77, stdev=15.55 00:14:24.044 lat (usec): min=75, max=260, avg=91.90, stdev=15.57 00:14:24.044 clat percentiles (usec): 00:14:24.044 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 78], 20.00th=[ 80], 00:14:24.044 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 93], 00:14:24.044 | 70.00th=[ 98], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 120], 00:14:24.044 | 99.00th=[ 135], 99.50th=[ 155], 99.90th=[ 204], 99.95th=[ 260], 00:14:24.044 | 99.99th=[ 260] 00:14:24.044 lat (usec) : 100=77.25%, 250=22.66%, 500=0.10% 00:14:24.044 cpu : usr=1.05%, sys=10.53%, ctx=1024, majf=0, minf=9 00:14:24.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:24.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.044 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:24.044 00:14:24.044 Run status group 0 (all jobs): 00:14:24.044 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=4096KiB (4194kB), run=96-96msec 00:14:24.044 00:14:24.044 Disk stats (read/write): 00:14:24.044 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:24.044 12:34:06 -- lvol/snapshot_clone.sh@344 -- # for i in "${!pattern[@]}" 00:14:24.045 12:34:06 -- lvol/snapshot_clone.sh@345 -- # start_fill=16777216 00:14:24.045 12:34:06 -- lvol/snapshot_clone.sh@346 -- # run_fio_test /dev/nbd0 16777216 4194304 read 0x00 00:14:24.045 12:34:06 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:24.045 12:34:06 -- lvol/common.sh@41 -- # local offset=16777216 00:14:24.045 12:34:06 -- lvol/common.sh@42 -- # local size=4194304 00:14:24.045 12:34:06 -- lvol/common.sh@43 -- # local rw=read 00:14:24.045 12:34:06 -- lvol/common.sh@44 -- # local pattern=0x00 00:14:24.045 12:34:06 -- lvol/common.sh@45 -- # local extra_params= 00:14:24.045 12:34:06 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:24.045 12:34:06 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:14:24.045 12:34:06 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:14:24.045 12:34:06 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:14:24.045 12:34:06 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:14:24.045 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:24.045 fio-3.35 00:14:24.045 Starting 1 process 00:14:24.303 00:14:24.303 fio_test: (groupid=0, jobs=1): err= 0: pid=61559: Tue Oct 1 12:34:06 2024 00:14:24.303 read: IOPS=12.8k, BW=50.0MiB/s (52.4MB/s)(4096KiB/80msec) 00:14:24.303 clat (usec): min=62, max=218, avg=75.62, stdev=16.81 00:14:24.303 lat (usec): min=62, max=218, avg=75.75, stdev=16.83 00:14:24.303 clat percentiles (usec): 00:14:24.303 | 1.00th=[ 63], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 64], 00:14:24.303 | 
30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 74], 00:14:24.303 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 98], 95.00th=[ 108], 00:14:24.303 | 99.00th=[ 135], 99.50th=[ 147], 99.90th=[ 196], 99.95th=[ 219], 00:14:24.303 | 99.99th=[ 219] 00:14:24.303 lat (usec) : 100=91.80%, 250=8.20% 00:14:24.303 cpu : usr=5.06%, sys=10.13%, ctx=1251, majf=0, minf=9 00:14:24.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:24.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.303 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:24.303 00:14:24.303 Run status group 0 (all jobs): 00:14:24.303 READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=4096KiB (4194kB), run=80-80msec 00:14:24.303 00:14:24.303 Disk stats (read/write): 00:14:24.303 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:24.303 12:34:06 -- lvol/snapshot_clone.sh@352 -- # rpc_cmd bdev_lvol_decouple_parent lvs_test/lvol_test 00:14:24.303 12:34:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.303 12:34:06 -- common/autotest_common.sh@10 -- # set +x 00:14:24.303 12:34:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.303 12:34:06 -- lvol/snapshot_clone.sh@353 -- # rpc_cmd bdev_get_bdevs -b cb505df3-843a-40b5-8934-3ebca7bfcdf7 00:14:24.303 12:34:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.303 12:34:06 -- common/autotest_common.sh@10 -- # set +x 00:14:24.303 12:34:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.303 12:34:06 -- lvol/snapshot_clone.sh@353 -- # lvol='[ 00:14:24.303 { 00:14:24.303 "name": "cb505df3-843a-40b5-8934-3ebca7bfcdf7", 00:14:24.303 "aliases": [ 00:14:24.303 "lvs_test/lvol_test" 00:14:24.303 ], 00:14:24.303 "product_name": "Logical Volume", 00:14:24.303 "block_size": 512, 00:14:24.303 "num_blocks": 40960, 00:14:24.303 "uuid": "cb505df3-843a-40b5-8934-3ebca7bfcdf7", 00:14:24.303 "assigned_rate_limits": { 00:14:24.303 "rw_ios_per_sec": 0, 00:14:24.303 "rw_mbytes_per_sec": 0, 00:14:24.303 "r_mbytes_per_sec": 0, 00:14:24.303 "w_mbytes_per_sec": 0 00:14:24.303 }, 00:14:24.303 "claimed": false, 00:14:24.303 "zoned": false, 00:14:24.303 "supported_io_types": { 00:14:24.303 "read": true, 00:14:24.303 "write": true, 00:14:24.303 "unmap": true, 00:14:24.303 "write_zeroes": true, 00:14:24.303 "flush": false, 00:14:24.303 "reset": true, 00:14:24.303 "compare": false, 00:14:24.303 "compare_and_write": false, 00:14:24.303 "abort": false, 00:14:24.303 "nvme_admin": false, 00:14:24.303 "nvme_io": false 00:14:24.303 }, 00:14:24.303 "memory_domains": [ 00:14:24.303 { 00:14:24.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.303 "dma_device_type": 2 00:14:24.303 } 00:14:24.303 ], 00:14:24.303 "driver_specific": { 00:14:24.303 "lvol": { 00:14:24.303 "lvol_store_uuid": "52a6beb0-c037-4071-9e5b-0eab045ba89a", 00:14:24.303 "base_bdev": "Malloc5", 00:14:24.303 "thin_provision": true, 00:14:24.303 "snapshot": false, 00:14:24.303 "clone": false, 00:14:24.303 "esnap_clone": false 00:14:24.303 } 00:14:24.303 } 00:14:24.303 } 00:14:24.303 ]' 00:14:24.303 12:34:06 -- lvol/snapshot_clone.sh@354 -- # rpc_cmd bdev_get_bdevs -b cca2a86a-14e4-49d9-b691-25c3431b43cc 00:14:24.303 12:34:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.303 12:34:06 -- common/autotest_common.sh@10 -- # set +x 
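The jq checks at snapshot_clone.sh@355-@358 further below each pull a single flag out of JSON like the block dumped above. A minimal standalone form of the "fully decoupled" assertion could look like this (same rpc.py/socket assumption as the earlier note; UUID taken from this run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  lvol_json=$($RPC bdev_get_bdevs -b cb505df3-843a-40b5-8934-3ebca7bfcdf7)
  # after the second bdev_lvol_decouple_parent the volume must stay thin
  # provisioned and be neither a snapshot nor a clone
  [ "$(jq '.[0].driver_specific.lvol.thin_provision' <<< "$lvol_json")" = "true" ]
  [ "$(jq '.[0].driver_specific.lvol.clone' <<< "$lvol_json")" = "false" ]
  [ "$(jq '.[0].driver_specific.lvol.snapshot' <<< "$lvol_json")" = "false" ]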
00:14:24.303 12:34:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.303 12:34:06 -- lvol/snapshot_clone.sh@354 -- # snapshot='[ 00:14:24.303 { 00:14:24.303 "name": "cca2a86a-14e4-49d9-b691-25c3431b43cc", 00:14:24.303 "aliases": [ 00:14:24.303 "lvs_test/lvol_snapshot" 00:14:24.303 ], 00:14:24.304 "product_name": "Logical Volume", 00:14:24.304 "block_size": 512, 00:14:24.304 "num_blocks": 40960, 00:14:24.304 "uuid": "cca2a86a-14e4-49d9-b691-25c3431b43cc", 00:14:24.304 "assigned_rate_limits": { 00:14:24.304 "rw_ios_per_sec": 0, 00:14:24.304 "rw_mbytes_per_sec": 0, 00:14:24.304 "r_mbytes_per_sec": 0, 00:14:24.304 "w_mbytes_per_sec": 0 00:14:24.304 }, 00:14:24.304 "claimed": false, 00:14:24.304 "zoned": false, 00:14:24.304 "supported_io_types": { 00:14:24.304 "read": true, 00:14:24.304 "write": false, 00:14:24.304 "unmap": false, 00:14:24.304 "write_zeroes": false, 00:14:24.304 "flush": false, 00:14:24.304 "reset": true, 00:14:24.304 "compare": false, 00:14:24.304 "compare_and_write": false, 00:14:24.304 "abort": false, 00:14:24.304 "nvme_admin": false, 00:14:24.304 "nvme_io": false 00:14:24.304 }, 00:14:24.304 "memory_domains": [ 00:14:24.304 { 00:14:24.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.304 "dma_device_type": 2 00:14:24.304 } 00:14:24.304 ], 00:14:24.304 "driver_specific": { 00:14:24.304 "lvol": { 00:14:24.304 "lvol_store_uuid": "52a6beb0-c037-4071-9e5b-0eab045ba89a", 00:14:24.304 "base_bdev": "Malloc5", 00:14:24.304 "thin_provision": true, 00:14:24.304 "snapshot": true, 00:14:24.304 "clone": false, 00:14:24.304 "esnap_clone": false 00:14:24.304 } 00:14:24.304 } 00:14:24.304 } 00:14:24.304 ]' 00:14:24.304 12:34:06 -- lvol/snapshot_clone.sh@355 -- # jq '.[].driver_specific.lvol.thin_provision' 00:14:24.563 12:34:06 -- lvol/snapshot_clone.sh@355 -- # '[' true = true ']' 00:14:24.563 12:34:06 -- lvol/snapshot_clone.sh@356 -- # jq '.[].driver_specific.lvol.clone' 00:14:24.563 12:34:06 -- lvol/snapshot_clone.sh@356 -- # '[' false = false ']' 00:14:24.563 12:34:06 -- lvol/snapshot_clone.sh@357 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:24.563 12:34:06 -- lvol/snapshot_clone.sh@357 -- # '[' false = false ']' 00:14:24.563 12:34:06 -- lvol/snapshot_clone.sh@358 -- # jq '.[].driver_specific.lvol.clone' 00:14:24.563 12:34:07 -- lvol/snapshot_clone.sh@358 -- # '[' false = false ']' 00:14:24.563 12:34:07 -- lvol/snapshot_clone.sh@361 -- # rpc_cmd bdev_lvol_delete cca2a86a-14e4-49d9-b691-25c3431b43cc 00:14:24.563 12:34:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.563 12:34:07 -- common/autotest_common.sh@10 -- # set +x 00:14:24.563 12:34:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.563 12:34:07 -- lvol/snapshot_clone.sh@364 -- # for i in "${!pattern[@]}" 00:14:24.563 12:34:07 -- lvol/snapshot_clone.sh@365 -- # start_fill=0 00:14:24.563 12:34:07 -- lvol/snapshot_clone.sh@366 -- # run_fio_test /dev/nbd0 0 4194304 read 0xdd 00:14:24.563 12:34:07 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:24.563 12:34:07 -- lvol/common.sh@41 -- # local offset=0 00:14:24.563 12:34:07 -- lvol/common.sh@42 -- # local size=4194304 00:14:24.563 12:34:07 -- lvol/common.sh@43 -- # local rw=read 00:14:24.563 12:34:07 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:24.563 12:34:07 -- lvol/common.sh@45 -- # local extra_params= 00:14:24.563 12:34:07 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:24.563 12:34:07 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:24.563 12:34:07 -- lvol/common.sh@49 -- # 
pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:24.563 12:34:07 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:24.563 12:34:07 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:24.821 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:24.821 fio-3.35 00:14:24.821 Starting 1 process 00:14:25.112 00:14:25.112 fio_test: (groupid=0, jobs=1): err= 0: pid=61582: Tue Oct 1 12:34:07 2024 00:14:25.112 read: IOPS=10.8k, BW=42.1MiB/s (44.1MB/s)(4096KiB/95msec) 00:14:25.112 clat (usec): min=74, max=331, avg=90.73, stdev=18.03 00:14:25.112 lat (usec): min=74, max=332, avg=90.86, stdev=18.05 00:14:25.112 clat percentiles (usec): 00:14:25.112 | 1.00th=[ 76], 5.00th=[ 77], 10.00th=[ 77], 20.00th=[ 78], 00:14:25.112 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 91], 00:14:25.112 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 115], 95.00th=[ 124], 00:14:25.112 | 99.00th=[ 147], 99.50th=[ 151], 99.90th=[ 174], 99.95th=[ 330], 00:14:25.112 | 99.99th=[ 330] 00:14:25.112 lat (usec) : 100=76.66%, 250=23.24%, 500=0.10% 00:14:25.112 cpu : usr=1.06%, sys=11.70%, ctx=1024, majf=0, minf=10 00:14:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.112 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.112 00:14:25.112 Run status group 0 (all jobs): 00:14:25.112 READ: bw=42.1MiB/s (44.1MB/s), 42.1MiB/s-42.1MiB/s (44.1MB/s-44.1MB/s), io=4096KiB (4194kB), run=95-95msec 00:14:25.112 00:14:25.112 Disk stats (read/write): 00:14:25.112 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:25.112 12:34:07 -- lvol/snapshot_clone.sh@364 -- # for i in "${!pattern[@]}" 00:14:25.112 12:34:07 -- lvol/snapshot_clone.sh@365 -- # start_fill=4194304 00:14:25.112 12:34:07 -- lvol/snapshot_clone.sh@366 -- # run_fio_test /dev/nbd0 4194304 4194304 read 0xee 00:14:25.112 12:34:07 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:25.112 12:34:07 -- lvol/common.sh@41 -- # local offset=4194304 00:14:25.112 12:34:07 -- lvol/common.sh@42 -- # local size=4194304 00:14:25.112 12:34:07 -- lvol/common.sh@43 -- # local rw=read 00:14:25.112 12:34:07 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:25.112 12:34:07 -- lvol/common.sh@45 -- # local extra_params= 00:14:25.112 12:34:07 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:25.112 12:34:07 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:25.112 12:34:07 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:25.112 12:34:07 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:25.112 12:34:07 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=4194304 --size=4194304 --rw=read --direct=1 
--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:25.112 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:25.112 fio-3.35 00:14:25.112 Starting 1 process 00:14:25.371 00:14:25.371 fio_test: (groupid=0, jobs=1): err= 0: pid=61586: Tue Oct 1 12:34:07 2024 00:14:25.371 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(4096KiB/94msec) 00:14:25.371 clat (usec): min=74, max=928, avg=90.16, stdev=30.48 00:14:25.371 lat (usec): min=75, max=928, avg=90.28, stdev=30.48 00:14:25.371 clat percentiles (usec): 00:14:25.371 | 1.00th=[ 76], 5.00th=[ 77], 10.00th=[ 77], 20.00th=[ 78], 00:14:25.371 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 89], 00:14:25.371 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 117], 00:14:25.371 | 99.00th=[ 133], 99.50th=[ 145], 99.90th=[ 285], 99.95th=[ 930], 00:14:25.371 | 99.99th=[ 930] 00:14:25.371 lat (usec) : 100=80.08%, 250=19.73%, 500=0.10%, 1000=0.10% 00:14:25.371 cpu : usr=2.15%, sys=8.60%, ctx=1025, majf=0, minf=10 00:14:25.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.371 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.371 00:14:25.371 Run status group 0 (all jobs): 00:14:25.371 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=4096KiB (4194kB), run=94-94msec 00:14:25.371 00:14:25.371 Disk stats (read/write): 00:14:25.371 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:25.371 12:34:07 -- lvol/snapshot_clone.sh@364 -- # for i in "${!pattern[@]}" 00:14:25.371 12:34:07 -- lvol/snapshot_clone.sh@365 -- # start_fill=8388608 00:14:25.371 12:34:07 -- lvol/snapshot_clone.sh@366 -- # run_fio_test /dev/nbd0 8388608 4194304 read 0xdd 00:14:25.371 12:34:07 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:25.371 12:34:07 -- lvol/common.sh@41 -- # local offset=8388608 00:14:25.371 12:34:07 -- lvol/common.sh@42 -- # local size=4194304 00:14:25.371 12:34:07 -- lvol/common.sh@43 -- # local rw=read 00:14:25.371 12:34:07 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:25.371 12:34:07 -- lvol/common.sh@45 -- # local extra_params= 00:14:25.371 12:34:07 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:25.371 12:34:07 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:25.371 12:34:07 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:25.371 12:34:07 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:25.371 12:34:07 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:25.371 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:25.371 fio-3.35 00:14:25.371 Starting 1 process 00:14:25.630 00:14:25.630 fio_test: (groupid=0, jobs=1): err= 0: pid=61589: Tue Oct 1 12:34:08 2024 00:14:25.630 read: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(4096KiB/97msec) 00:14:25.630 clat (usec): min=76, 
max=268, avg=92.49, stdev=18.80 00:14:25.630 lat (usec): min=76, max=269, avg=92.62, stdev=18.83 00:14:25.630 clat percentiles (usec): 00:14:25.630 | 1.00th=[ 77], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 78], 00:14:25.630 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 93], 00:14:25.630 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 127], 00:14:25.630 | 99.00th=[ 151], 99.50th=[ 163], 99.90th=[ 247], 99.95th=[ 269], 00:14:25.630 | 99.99th=[ 269] 00:14:25.630 lat (usec) : 100=71.97%, 250=27.93%, 500=0.10% 00:14:25.630 cpu : usr=4.17%, sys=6.25%, ctx=1025, majf=0, minf=11 00:14:25.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.630 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.630 00:14:25.630 Run status group 0 (all jobs): 00:14:25.630 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=4096KiB (4194kB), run=97-97msec 00:14:25.630 00:14:25.630 Disk stats (read/write): 00:14:25.630 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:25.630 12:34:08 -- lvol/snapshot_clone.sh@364 -- # for i in "${!pattern[@]}" 00:14:25.630 12:34:08 -- lvol/snapshot_clone.sh@365 -- # start_fill=12582912 00:14:25.630 12:34:08 -- lvol/snapshot_clone.sh@366 -- # run_fio_test /dev/nbd0 12582912 4194304 read 0xcc 00:14:25.630 12:34:08 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:25.630 12:34:08 -- lvol/common.sh@41 -- # local offset=12582912 00:14:25.630 12:34:08 -- lvol/common.sh@42 -- # local size=4194304 00:14:25.630 12:34:08 -- lvol/common.sh@43 -- # local rw=read 00:14:25.630 12:34:08 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:25.630 12:34:08 -- lvol/common.sh@45 -- # local extra_params= 00:14:25.630 12:34:08 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:25.630 12:34:08 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:25.630 12:34:08 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:25.630 12:34:08 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:25.630 12:34:08 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:25.630 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:25.630 fio-3.35 00:14:25.630 Starting 1 process 00:14:25.889 00:14:25.889 fio_test: (groupid=0, jobs=1): err= 0: pid=61598: Tue Oct 1 12:34:08 2024 00:14:25.889 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(4096KiB/96msec) 00:14:25.889 clat (usec): min=77, max=239, avg=91.92, stdev=15.64 00:14:25.889 lat (usec): min=77, max=240, avg=92.06, stdev=15.68 00:14:25.889 clat percentiles (usec): 00:14:25.889 | 1.00th=[ 79], 5.00th=[ 79], 10.00th=[ 79], 20.00th=[ 80], 00:14:25.889 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 90], 00:14:25.889 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 113], 95.00th=[ 122], 00:14:25.889 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 225], 99.95th=[ 239], 00:14:25.889 | 99.99th=[ 
239] 00:14:25.889 lat (usec) : 100=78.42%, 250=21.58% 00:14:25.889 cpu : usr=3.16%, sys=8.42%, ctx=1024, majf=0, minf=10 00:14:25.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.889 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.889 00:14:25.889 Run status group 0 (all jobs): 00:14:25.889 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=4096KiB (4194kB), run=96-96msec 00:14:25.889 00:14:25.889 Disk stats (read/write): 00:14:25.889 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:25.889 12:34:08 -- lvol/snapshot_clone.sh@364 -- # for i in "${!pattern[@]}" 00:14:25.889 12:34:08 -- lvol/snapshot_clone.sh@365 -- # start_fill=16777216 00:14:25.889 12:34:08 -- lvol/snapshot_clone.sh@366 -- # run_fio_test /dev/nbd0 16777216 4194304 read 0x00 00:14:25.889 12:34:08 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:25.889 12:34:08 -- lvol/common.sh@41 -- # local offset=16777216 00:14:25.889 12:34:08 -- lvol/common.sh@42 -- # local size=4194304 00:14:25.889 12:34:08 -- lvol/common.sh@43 -- # local rw=read 00:14:25.889 12:34:08 -- lvol/common.sh@44 -- # local pattern=0x00 00:14:25.889 12:34:08 -- lvol/common.sh@45 -- # local extra_params= 00:14:25.889 12:34:08 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:25.889 12:34:08 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:14:25.889 12:34:08 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:14:25.889 12:34:08 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:14:25.889 12:34:08 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=4194304 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:14:26.148 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:26.148 fio-3.35 00:14:26.148 Starting 1 process 00:14:26.407 00:14:26.407 fio_test: (groupid=0, jobs=1): err= 0: pid=61606: Tue Oct 1 12:34:08 2024 00:14:26.407 read: IOPS=12.5k, BW=48.8MiB/s (51.1MB/s)(4096KiB/82msec) 00:14:26.407 clat (usec): min=61, max=221, avg=77.99, stdev=17.24 00:14:26.407 lat (usec): min=62, max=222, avg=78.18, stdev=17.31 00:14:26.407 clat percentiles (usec): 00:14:26.407 | 1.00th=[ 63], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 65], 00:14:26.407 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 80], 00:14:26.407 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 112], 00:14:26.407 | 99.00th=[ 131], 99.50th=[ 139], 99.90th=[ 190], 99.95th=[ 223], 00:14:26.407 | 99.99th=[ 223] 00:14:26.407 lat (usec) : 100=89.06%, 250=10.94% 00:14:26.407 cpu : usr=4.94%, sys=11.11%, ctx=2048, majf=0, minf=11 00:14:26.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.407 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.407 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:14:26.407 00:14:26.407 Run status group 0 (all jobs): 00:14:26.407 READ: bw=48.8MiB/s (51.1MB/s), 48.8MiB/s-48.8MiB/s (51.1MB/s-51.1MB/s), io=4096KiB (4194kB), run=82-82msec 00:14:26.407 00:14:26.407 Disk stats (read/write): 00:14:26.407 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:26.407 12:34:08 -- lvol/snapshot_clone.sh@370 -- # rpc_cmd bdev_lvol_delete cb505df3-843a-40b5-8934-3ebca7bfcdf7 00:14:26.407 12:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.407 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:26.407 12:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.407 12:34:08 -- lvol/snapshot_clone.sh@371 -- # rpc_cmd bdev_lvol_delete_lvstore -u 52a6beb0-c037-4071-9e5b-0eab045ba89a 00:14:26.407 12:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.407 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:26.407 12:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.407 12:34:08 -- lvol/snapshot_clone.sh@372 -- # rpc_cmd bdev_malloc_delete Malloc5 00:14:26.407 12:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.407 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:26.665 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.665 12:34:09 -- lvol/snapshot_clone.sh@373 -- # check_leftover_devices 00:14:26.665 12:34:09 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:26.665 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.665 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.665 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.665 12:34:09 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:26.665 12:34:09 -- lvol/common.sh@26 -- # jq length 00:14:26.665 12:34:09 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:26.665 12:34:09 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:26.665 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.665 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.665 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.665 12:34:09 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:26.665 12:34:09 -- lvol/common.sh@28 -- # jq length 00:14:26.665 ************************************ 00:14:26.665 END TEST test_clone_decouple_parent 00:14:26.665 ************************************ 00:14:26.665 12:34:09 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:26.665 00:14:26.665 real 0m8.857s 00:14:26.665 user 0m2.741s 00:14:26.665 sys 0m0.956s 00:14:26.665 12:34:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.665 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@614 -- # run_test test_lvol_bdev_readonly test_lvol_bdev_readonly 00:14:26.923 12:34:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:26.923 12:34:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.923 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.923 ************************************ 00:14:26.923 START TEST test_lvol_bdev_readonly 00:14:26.923 ************************************ 00:14:26.923 12:34:09 -- common/autotest_common.sh@1104 -- # test_lvol_bdev_readonly 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@378 -- # rpc_cmd bdev_malloc_create 128 512 00:14:26.923 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.923 12:34:09 -- common/autotest_common.sh@10 -- # set +x 
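For reference, the check_leftover_devices step that closed TEST test_clone_decouple_parent just above boils down to two jq length checks against an empty target. A standalone sketch, under the same rpc.py/socket assumption as the earlier notes:

  # assert the SPDK target is clean again after a test: no bdevs, no lvstores
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  [ "$($RPC bdev_get_bdevs | jq length)" -eq 0 ]
  [ "$($RPC bdev_lvol_get_lvstores | jq length)" -eq 0 ]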
00:14:26.923 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@378 -- # malloc_name=Malloc6 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@379 -- # rpc_cmd bdev_lvol_create_lvstore Malloc6 lvs_test 00:14:26.923 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.923 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.923 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@379 -- # lvs_uuid=1ce89174-4654-4c03-942b-8b2024e9e69b 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@382 -- # round_down 62 00:14:26.923 12:34:09 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:26.923 12:34:09 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:26.923 12:34:09 -- lvol/common.sh@36 -- # echo 60 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@382 -- # lvol_size_mb=60 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@384 -- # rpc_cmd bdev_lvol_create -u 1ce89174-4654-4c03-942b-8b2024e9e69b lvol_test 60 00:14:26.923 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.923 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.923 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@384 -- # lvol_uuid=97762974-963d-439f-95de-83277d1bac1a 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@385 -- # rpc_cmd bdev_get_bdevs -b 97762974-963d-439f-95de-83277d1bac1a 00:14:26.923 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.923 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.923 12:34:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.923 12:34:09 -- lvol/snapshot_clone.sh@385 -- # lvol='[ 00:14:26.923 { 00:14:26.923 "name": "97762974-963d-439f-95de-83277d1bac1a", 00:14:26.923 "aliases": [ 00:14:26.923 "lvs_test/lvol_test" 00:14:26.923 ], 00:14:26.923 "product_name": "Logical Volume", 00:14:26.923 "block_size": 512, 00:14:26.923 "num_blocks": 122880, 00:14:26.923 "uuid": "97762974-963d-439f-95de-83277d1bac1a", 00:14:26.923 "assigned_rate_limits": { 00:14:26.923 "rw_ios_per_sec": 0, 00:14:26.923 "rw_mbytes_per_sec": 0, 00:14:26.923 "r_mbytes_per_sec": 0, 00:14:26.924 "w_mbytes_per_sec": 0 00:14:26.924 }, 00:14:26.924 "claimed": false, 00:14:26.924 "zoned": false, 00:14:26.924 "supported_io_types": { 00:14:26.924 "read": true, 00:14:26.924 "write": true, 00:14:26.924 "unmap": true, 00:14:26.924 "write_zeroes": true, 00:14:26.924 "flush": false, 00:14:26.924 "reset": true, 00:14:26.924 "compare": false, 00:14:26.924 "compare_and_write": false, 00:14:26.924 "abort": false, 00:14:26.924 "nvme_admin": false, 00:14:26.924 "nvme_io": false 00:14:26.924 }, 00:14:26.924 "memory_domains": [ 00:14:26.924 { 00:14:26.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.924 "dma_device_type": 2 00:14:26.924 } 00:14:26.924 ], 00:14:26.924 "driver_specific": { 00:14:26.924 "lvol": { 00:14:26.924 "lvol_store_uuid": "1ce89174-4654-4c03-942b-8b2024e9e69b", 00:14:26.924 "base_bdev": "Malloc6", 00:14:26.924 "thin_provision": false, 00:14:26.924 "snapshot": false, 00:14:26.924 "clone": false, 00:14:26.924 "esnap_clone": false 00:14:26.924 } 00:14:26.924 } 00:14:26.924 } 00:14:26.924 ]' 00:14:26.924 12:34:09 -- lvol/snapshot_clone.sh@388 -- # rpc_cmd bdev_lvol_set_read_only lvs_test/lvol_test 00:14:26.924 12:34:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.924 12:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.924 12:34:09 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.924 12:34:09 -- lvol/snapshot_clone.sh@391 -- # nbd_start_disks /var/tmp/spdk.sock 97762974-963d-439f-95de-83277d1bac1a /dev/nbd0 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('97762974-963d-439f-95de-83277d1bac1a') 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@12 -- # local i 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:26.924 12:34:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 97762974-963d-439f-95de-83277d1bac1a /dev/nbd0 00:14:27.183 /dev/nbd0 00:14:27.183 12:34:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:27.183 12:34:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:27.183 12:34:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:27.183 12:34:09 -- common/autotest_common.sh@857 -- # local i 00:14:27.183 12:34:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:27.183 12:34:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:27.183 12:34:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:27.183 12:34:09 -- common/autotest_common.sh@861 -- # break 00:14:27.183 12:34:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:27.183 12:34:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:27.183 12:34:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:27.183 1+0 records in 00:14:27.183 1+0 records out 00:14:27.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536696 s, 7.6 MB/s 00:14:27.183 12:34:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:27.443 12:34:09 -- common/autotest_common.sh@874 -- # size=4096 00:14:27.443 12:34:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:27.443 12:34:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:27.443 12:34:09 -- common/autotest_common.sh@877 -- # return 0 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:27.443 12:34:09 -- lvol/snapshot_clone.sh@392 -- # run_fio_test /dev/nbd0 0 20971520 write 0xcc 00:14:27.443 12:34:09 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:27.443 12:34:09 -- lvol/common.sh@41 -- # local offset=0 00:14:27.443 12:34:09 -- lvol/common.sh@42 -- # local size=20971520 00:14:27.443 12:34:09 -- lvol/common.sh@43 -- # local rw=write 00:14:27.443 12:34:09 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:27.443 12:34:09 -- lvol/common.sh@45 -- # local extra_params= 00:14:27.443 12:34:09 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:27.443 12:34:09 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:27.443 12:34:09 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:27.443 12:34:09 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=20971520 --rw=write --direct=1 --do_verify=1 --verify=pattern 
--verify_pattern=0xcc --verify_state_save=0' 00:14:27.443 12:34:09 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=20971520 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:27.443 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:27.443 fio-3.35 00:14:27.443 Starting 1 process 00:14:27.443 fio: first I/O failed. If /dev/nbd0 is a zoned block device, consider --zonemode=zbd 00:14:27.443 fio: pid=61658, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:27.443 fio: io_u error on file /dev/nbd0: Input/output error: write offset=0, buflen=4096 00:14:27.443 00:14:27.443 fio_test: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=61658: Tue Oct 1 12:34:09 2024 00:14:27.443 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=20 00:14:27.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.443 complete : 0=50.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.443 issued rwts: total=0,1,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.443 00:14:27.443 Run status group 0 (all jobs): 00:14:27.443 00:14:27.443 Disk stats (read/write): 00:14:27.443 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:27.443 12:34:09 -- lvol/snapshot_clone.sh@393 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@51 -- # local i 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.443 12:34:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@41 -- # break 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.011 12:34:10 -- lvol/snapshot_clone.sh@396 -- # rpc_cmd bdev_lvol_clone lvs_test/lvol_test clone_test 00:14:28.011 12:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.011 12:34:10 -- common/autotest_common.sh@10 -- # set +x 00:14:28.011 12:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.011 12:34:10 -- lvol/snapshot_clone.sh@396 -- # clone_uuid=f08e3dc3-ddae-4b1a-ac7c-d4264d7cd551 00:14:28.011 12:34:10 -- lvol/snapshot_clone.sh@399 -- # nbd_start_disks /var/tmp/spdk.sock f08e3dc3-ddae-4b1a-ac7c-d4264d7cd551 /dev/nbd0 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('f08e3dc3-ddae-4b1a-ac7c-d4264d7cd551') 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.011 12:34:10 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@12 -- # local i 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.011 12:34:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk f08e3dc3-ddae-4b1a-ac7c-d4264d7cd551 /dev/nbd0 00:14:28.011 /dev/nbd0 00:14:28.269 12:34:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.269 12:34:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.269 12:34:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:28.269 12:34:10 -- common/autotest_common.sh@857 -- # local i 00:14:28.269 12:34:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:28.269 12:34:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:28.269 12:34:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:28.269 12:34:10 -- common/autotest_common.sh@861 -- # break 00:14:28.269 12:34:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:28.269 12:34:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:28.269 12:34:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:28.269 1+0 records in 00:14:28.269 1+0 records out 00:14:28.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268732 s, 15.2 MB/s 00:14:28.269 12:34:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:28.269 12:34:10 -- common/autotest_common.sh@874 -- # size=4096 00:14:28.269 12:34:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:28.269 12:34:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:28.269 12:34:10 -- common/autotest_common.sh@877 -- # return 0 00:14:28.269 12:34:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.269 12:34:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.269 12:34:10 -- lvol/snapshot_clone.sh@400 -- # run_fio_test /dev/nbd0 0 20971520 write 0xcc 00:14:28.269 12:34:10 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:28.269 12:34:10 -- lvol/common.sh@41 -- # local offset=0 00:14:28.269 12:34:10 -- lvol/common.sh@42 -- # local size=20971520 00:14:28.269 12:34:10 -- lvol/common.sh@43 -- # local rw=write 00:14:28.269 12:34:10 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:28.269 12:34:10 -- lvol/common.sh@45 -- # local extra_params= 00:14:28.269 12:34:10 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:28.269 12:34:10 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:28.269 12:34:10 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:28.269 12:34:10 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=20971520 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:28.269 12:34:10 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=20971520 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:28.269 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:28.269 fio-3.35 00:14:28.269 Starting 1 process 00:14:29.204 00:14:29.204 fio_test: 
(groupid=0, jobs=1): err= 0: pid=61687: Tue Oct 1 12:34:11 2024 00:14:29.204 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(20.0MiB/422msec) 00:14:29.204 clat (usec): min=58, max=705, avg=81.05, stdev=24.96 00:14:29.204 lat (usec): min=58, max=705, avg=81.14, stdev=24.97 00:14:29.204 clat percentiles (usec): 00:14:29.204 | 1.00th=[ 60], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 63], 00:14:29.204 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 84], 00:14:29.204 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 122], 00:14:29.204 | 99.00th=[ 147], 99.50th=[ 172], 99.90th=[ 235], 99.95th=[ 457], 00:14:29.204 | 99.99th=[ 709] 00:14:29.204 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(20.0MiB/446msec); 0 zone resets 00:14:29.204 clat (usec): min=56, max=1580, avg=85.01, stdev=44.73 00:14:29.204 lat (usec): min=57, max=1581, avg=85.96, stdev=44.95 00:14:29.204 clat percentiles (usec): 00:14:29.204 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 71], 00:14:29.204 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 79], 60.00th=[ 82], 00:14:29.204 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 124], 00:14:29.204 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 1020], 99.95th=[ 1205], 00:14:29.204 | 99.99th=[ 1582] 00:14:29.204 bw ( KiB/s): min=40960, max=40960, per=89.20%, avg=40960.00, stdev= 0.00, samples=1 00:14:29.204 iops : min=10240, max=10240, avg=10240.00, stdev= 0.00, samples=1 00:14:29.204 lat (usec) : 100=83.77%, 250=16.11%, 500=0.04%, 750=0.02% 00:14:29.204 lat (msec) : 2=0.06% 00:14:29.204 cpu : usr=2.42%, sys=8.66%, ctx=14227, majf=0, minf=149 00:14:29.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:29.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.204 issued rwts: total=5120,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:29.204 00:14:29.204 Run status group 0 (all jobs): 00:14:29.205 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=20.0MiB (21.0MB), run=422-422msec 00:14:29.205 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=20.0MiB (21.0MB), run=446-446msec 00:14:29.205 00:14:29.205 Disk stats (read/write): 00:14:29.205 nbd0: ios=2446/5120, merge=0/0, ticks=186/391, in_queue=578, util=86.66% 00:14:29.205 12:34:11 -- lvol/snapshot_clone.sh@401 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:29.205 12:34:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.205 12:34:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:29.205 12:34:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.205 12:34:11 -- bdev/nbd_common.sh@51 -- # local i 00:14:29.205 12:34:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.205 12:34:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.464 12:34:11 -- bdev/nbd_common.sh@41 -- # break 00:14:29.464 12:34:11 -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:29.464 12:34:11 -- lvol/snapshot_clone.sh@404 -- # rpc_cmd bdev_lvol_delete f08e3dc3-ddae-4b1a-ac7c-d4264d7cd551 00:14:29.464 12:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.464 12:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.464 12:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.464 12:34:11 -- lvol/snapshot_clone.sh@405 -- # rpc_cmd bdev_lvol_delete 97762974-963d-439f-95de-83277d1bac1a 00:14:29.464 12:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.464 12:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.464 12:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.464 12:34:11 -- lvol/snapshot_clone.sh@406 -- # rpc_cmd bdev_lvol_delete_lvstore -u 1ce89174-4654-4c03-942b-8b2024e9e69b 00:14:29.464 12:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.464 12:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.464 12:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.464 12:34:11 -- lvol/snapshot_clone.sh@407 -- # rpc_cmd bdev_malloc_delete Malloc6 00:14:29.464 12:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.464 12:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.031 12:34:12 -- lvol/snapshot_clone.sh@408 -- # check_leftover_devices 00:14:30.031 12:34:12 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:30.031 12:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.031 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.031 12:34:12 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:30.031 12:34:12 -- lvol/common.sh@26 -- # jq length 00:14:30.031 12:34:12 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:30.031 12:34:12 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:30.031 12:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.031 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.031 12:34:12 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:30.031 12:34:12 -- lvol/common.sh@28 -- # jq length 00:14:30.031 ************************************ 00:14:30.031 END TEST test_lvol_bdev_readonly 00:14:30.031 ************************************ 00:14:30.031 12:34:12 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:30.031 00:14:30.031 real 0m3.178s 00:14:30.031 user 0m1.262s 00:14:30.031 sys 0m0.353s 00:14:30.031 12:34:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.031 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 12:34:12 -- lvol/snapshot_clone.sh@615 -- # run_test test_delete_snapshot_with_clone test_delete_snapshot_with_clone 00:14:30.031 12:34:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:30.031 12:34:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:30.031 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 ************************************ 00:14:30.031 START TEST test_delete_snapshot_with_clone 00:14:30.031 ************************************ 00:14:30.031 12:34:12 -- common/autotest_common.sh@1104 -- # test_delete_snapshot_with_clone 00:14:30.031 12:34:12 -- lvol/snapshot_clone.sh@413 -- # rpc_cmd bdev_malloc_create 128 512 00:14:30.031 12:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.031 12:34:12 
-- common/autotest_common.sh@10 -- # set +x 00:14:30.031 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.031 12:34:12 -- lvol/snapshot_clone.sh@413 -- # malloc_name=Malloc7 00:14:30.031 12:34:12 -- lvol/snapshot_clone.sh@414 -- # rpc_cmd bdev_lvol_create_lvstore Malloc7 lvs_test 00:14:30.031 12:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.031 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.289 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@414 -- # lvs_uuid=76bdf23a-5576-40d9-8145-b85987bf9aa4 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@417 -- # round_down 62 00:14:30.289 12:34:12 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:30.289 12:34:12 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:30.289 12:34:12 -- lvol/common.sh@36 -- # echo 60 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@417 -- # lvol_size_mb=60 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@418 -- # lvol_size=62914560 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@420 -- # rpc_cmd bdev_lvol_create -u 76bdf23a-5576-40d9-8145-b85987bf9aa4 lvol_test 60 00:14:30.289 12:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.289 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.289 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@420 -- # lvol_uuid=1f698358-54aa-47af-a701-9a4607cb277c 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@421 -- # rpc_cmd bdev_get_bdevs -b 1f698358-54aa-47af-a701-9a4607cb277c 00:14:30.289 12:34:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.289 12:34:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.289 12:34:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.289 12:34:12 -- lvol/snapshot_clone.sh@421 -- # lvol='[ 00:14:30.289 { 00:14:30.289 "name": "1f698358-54aa-47af-a701-9a4607cb277c", 00:14:30.289 "aliases": [ 00:14:30.289 "lvs_test/lvol_test" 00:14:30.289 ], 00:14:30.289 "product_name": "Logical Volume", 00:14:30.289 "block_size": 512, 00:14:30.289 "num_blocks": 122880, 00:14:30.289 "uuid": "1f698358-54aa-47af-a701-9a4607cb277c", 00:14:30.289 "assigned_rate_limits": { 00:14:30.289 "rw_ios_per_sec": 0, 00:14:30.289 "rw_mbytes_per_sec": 0, 00:14:30.289 "r_mbytes_per_sec": 0, 00:14:30.289 "w_mbytes_per_sec": 0 00:14:30.289 }, 00:14:30.289 "claimed": false, 00:14:30.289 "zoned": false, 00:14:30.289 "supported_io_types": { 00:14:30.289 "read": true, 00:14:30.289 "write": true, 00:14:30.289 "unmap": true, 00:14:30.289 "write_zeroes": true, 00:14:30.289 "flush": false, 00:14:30.289 "reset": true, 00:14:30.289 "compare": false, 00:14:30.289 "compare_and_write": false, 00:14:30.289 "abort": false, 00:14:30.289 "nvme_admin": false, 00:14:30.289 "nvme_io": false 00:14:30.289 }, 00:14:30.289 "memory_domains": [ 00:14:30.289 { 00:14:30.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.290 "dma_device_type": 2 00:14:30.290 } 00:14:30.290 ], 00:14:30.290 "driver_specific": { 00:14:30.290 "lvol": { 00:14:30.290 "lvol_store_uuid": "76bdf23a-5576-40d9-8145-b85987bf9aa4", 00:14:30.290 "base_bdev": "Malloc7", 00:14:30.290 "thin_provision": false, 00:14:30.290 "snapshot": false, 00:14:30.290 "clone": false, 00:14:30.290 "esnap_clone": false 00:14:30.290 } 00:14:30.290 } 00:14:30.290 } 00:14:30.290 ]' 00:14:30.290 12:34:12 -- lvol/snapshot_clone.sh@424 -- # nbd_start_disks /var/tmp/spdk.sock 1f698358-54aa-47af-a701-9a4607cb277c /dev/nbd0 00:14:30.290 12:34:12 -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('1f698358-54aa-47af-a701-9a4607cb277c') 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@12 -- # local i 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.290 12:34:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 1f698358-54aa-47af-a701-9a4607cb277c /dev/nbd0 00:14:30.549 /dev/nbd0 00:14:30.549 12:34:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:30.549 12:34:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:30.549 12:34:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:30.549 12:34:12 -- common/autotest_common.sh@857 -- # local i 00:14:30.549 12:34:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:30.549 12:34:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:30.549 12:34:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:30.549 12:34:12 -- common/autotest_common.sh@861 -- # break 00:14:30.549 12:34:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:30.549 12:34:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:30.549 12:34:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:30.549 1+0 records in 00:14:30.549 1+0 records out 00:14:30.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329447 s, 12.4 MB/s 00:14:30.549 12:34:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:30.549 12:34:12 -- common/autotest_common.sh@874 -- # size=4096 00:14:30.549 12:34:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:30.549 12:34:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:30.549 12:34:12 -- common/autotest_common.sh@877 -- # return 0 00:14:30.549 12:34:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.549 12:34:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.549 12:34:12 -- lvol/snapshot_clone.sh@425 -- # run_fio_test /dev/nbd0 0 62914560 write 0xcc 00:14:30.549 12:34:12 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:30.549 12:34:12 -- lvol/common.sh@41 -- # local offset=0 00:14:30.549 12:34:12 -- lvol/common.sh@42 -- # local size=62914560 00:14:30.549 12:34:12 -- lvol/common.sh@43 -- # local rw=write 00:14:30.549 12:34:12 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:30.549 12:34:12 -- lvol/common.sh@45 -- # local extra_params= 00:14:30.549 12:34:12 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:30.549 12:34:12 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:30.549 12:34:12 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:30.549 12:34:12 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:30.549 12:34:12 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern 
--verify_pattern=0xcc --verify_state_save=0 00:14:30.549 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:30.549 fio-3.35 00:14:30.549 Starting 1 process 00:14:33.838 00:14:33.838 fio_test: (groupid=0, jobs=1): err= 0: pid=61760: Tue Oct 1 12:34:15 2024 00:14:33.838 read: IOPS=12.9k, BW=50.3MiB/s (52.7MB/s)(60.0MiB/1194msec) 00:14:33.838 clat (usec): min=59, max=533, avg=76.40, stdev=15.83 00:14:33.838 lat (usec): min=59, max=533, avg=76.49, stdev=15.84 00:14:33.838 clat percentiles (usec): 00:14:33.838 | 1.00th=[ 62], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 65], 00:14:33.839 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 76], 00:14:33.839 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 97], 95.00th=[ 105], 00:14:33.839 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 178], 99.95th=[ 253], 00:14:33.839 | 99.99th=[ 400] 00:14:33.839 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(60.0MiB/1302msec); 0 zone resets 00:14:33.839 clat (usec): min=59, max=328, avg=82.94, stdev=15.71 00:14:33.839 lat (usec): min=59, max=328, avg=83.89, stdev=15.94 00:14:33.839 clat percentiles (usec): 00:14:33.839 | 1.00th=[ 62], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 67], 00:14:33.839 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 86], 00:14:33.839 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 111], 00:14:33.839 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 163], 99.95th=[ 180], 00:14:33.839 | 99.99th=[ 219] 00:14:33.839 bw ( KiB/s): min=31920, max=46256, per=86.80%, avg=40960.00, stdev=7867.23, samples=3 00:14:33.839 iops : min= 7980, max=11564, avg=10240.00, stdev=1966.81, samples=3 00:14:33.839 lat (usec) : 100=89.74%, 250=10.23%, 500=0.03%, 750=0.01% 00:14:33.839 cpu : usr=3.61%, sys=7.86%, ctx=30723, majf=0, minf=392 00:14:33.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:33.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.839 issued rwts: total=15360,15360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:33.839 00:14:33.839 Run status group 0 (all jobs): 00:14:33.839 READ: bw=50.3MiB/s (52.7MB/s), 50.3MiB/s-50.3MiB/s (52.7MB/s-52.7MB/s), io=60.0MiB (62.9MB), run=1194-1194msec 00:14:33.839 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=60.0MiB (62.9MB), run=1302-1302msec 00:14:33.839 00:14:33.839 Disk stats (read/write): 00:14:33.839 nbd0: ios=13934/15360, merge=0/0, ticks=982/1156, in_queue=2138, util=96.02% 00:14:33.839 12:34:15 -- lvol/snapshot_clone.sh@428 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot 00:14:33.839 12:34:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.839 12:34:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.839 12:34:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.839 12:34:15 -- lvol/snapshot_clone.sh@428 -- # snapshot_uuid=fdc51fe1-0f52-42c4-9c2d-0c693dcf8061 00:14:33.839 12:34:15 -- lvol/snapshot_clone.sh@431 -- # half_size=31457279 00:14:33.839 12:34:15 -- lvol/snapshot_clone.sh@432 -- # run_fio_test /dev/nbd0 0 31457279 write 0xee 00:14:33.839 12:34:15 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:33.839 12:34:15 -- lvol/common.sh@41 -- # local offset=0 00:14:33.839 12:34:15 -- lvol/common.sh@42 -- # local size=31457279 00:14:33.839 12:34:15 -- lvol/common.sh@43 -- # local rw=write 00:14:33.839 
12:34:15 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:33.839 12:34:15 -- lvol/common.sh@45 -- # local extra_params= 00:14:33.839 12:34:15 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:33.839 12:34:15 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:33.839 12:34:15 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:33.839 12:34:15 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=31457279 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:33.839 12:34:15 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=31457279 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:33.839 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:33.839 fio-3.35 00:14:33.839 Starting 1 process 00:14:34.776 00:14:34.776 fio_test: (groupid=0, jobs=1): err= 0: pid=61786: Tue Oct 1 12:34:17 2024 00:14:34.776 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(30.0MiB/577msec) 00:14:34.776 clat (usec): min=56, max=289, avg=73.79, stdev=17.46 00:14:34.776 lat (usec): min=56, max=289, avg=73.89, stdev=17.48 00:14:34.776 clat percentiles (usec): 00:14:34.776 | 1.00th=[ 59], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:14:34.776 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 72], 00:14:34.776 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:14:34.776 | 99.00th=[ 130], 99.50th=[ 145], 99.90th=[ 227], 99.95th=[ 260], 00:14:34.776 | 99.99th=[ 289] 00:14:34.776 write: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(30.0MiB/610msec); 0 zone resets 00:14:34.776 clat (usec): min=56, max=1645, avg=77.43, stdev=40.56 00:14:34.776 lat (usec): min=57, max=1666, avg=78.37, stdev=40.86 00:14:34.776 clat percentiles (usec): 00:14:34.776 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 63], 00:14:34.776 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 78], 00:14:34.776 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 100], 95.00th=[ 111], 00:14:34.776 | 99.00th=[ 130], 99.50th=[ 139], 99.90th=[ 1029], 99.95th=[ 1090], 00:14:34.776 | 99.99th=[ 1647] 00:14:34.776 bw ( KiB/s): min=11544, max=49896, per=61.00%, avg=30720.00, stdev=27118.96, samples=2 00:14:34.776 iops : min= 2886, max=12474, avg=7680.00, stdev=6779.74, samples=2 00:14:34.776 lat (usec) : 100=90.97%, 250=8.93%, 500=0.05%, 1000=0.01% 00:14:34.776 lat (msec) : 2=0.05% 00:14:34.776 cpu : usr=3.04%, sys=7.93%, ctx=15884, majf=0, minf=208 00:14:34.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:34.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.776 issued rwts: total=7680,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:34.776 00:14:34.776 Run status group 0 (all jobs): 00:14:34.776 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=30.0MiB (31.5MB), run=577-577msec 00:14:34.776 WRITE: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=30.0MiB (31.5MB), run=610-610msec 00:14:34.776 00:14:34.776 Disk stats (read/write): 00:14:34.776 nbd0: ios=7141/7680, merge=0/0, ticks=487/545, in_queue=1032, util=92.18% 00:14:34.776 12:34:17 -- lvol/snapshot_clone.sh@435 -- # 
nbd_start_disks /var/tmp/spdk.sock fdc51fe1-0f52-42c4-9c2d-0c693dcf8061 /dev/nbd1 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('fdc51fe1-0f52-42c4-9c2d-0c693dcf8061') 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@12 -- # local i 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.776 12:34:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk fdc51fe1-0f52-42c4-9c2d-0c693dcf8061 /dev/nbd1 00:14:35.035 /dev/nbd1 00:14:35.035 12:34:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.035 12:34:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.035 12:34:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:35.035 12:34:17 -- common/autotest_common.sh@857 -- # local i 00:14:35.035 12:34:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:35.035 12:34:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:35.035 12:34:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:35.035 12:34:17 -- common/autotest_common.sh@861 -- # break 00:14:35.035 12:34:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:35.035 12:34:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:35.035 12:34:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:35.035 1+0 records in 00:14:35.035 1+0 records out 00:14:35.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030017 s, 13.6 MB/s 00:14:35.035 12:34:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:35.035 12:34:17 -- common/autotest_common.sh@874 -- # size=4096 00:14:35.035 12:34:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:35.035 12:34:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:35.035 12:34:17 -- common/autotest_common.sh@877 -- # return 0 00:14:35.035 12:34:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.035 12:34:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.035 12:34:17 -- lvol/snapshot_clone.sh@436 -- # run_fio_test /dev/nbd1 0 31457279 read 0xcc 00:14:35.036 12:34:17 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:14:35.036 12:34:17 -- lvol/common.sh@41 -- # local offset=0 00:14:35.036 12:34:17 -- lvol/common.sh@42 -- # local size=31457279 00:14:35.036 12:34:17 -- lvol/common.sh@43 -- # local rw=read 00:14:35.036 12:34:17 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:35.036 12:34:17 -- lvol/common.sh@45 -- # local extra_params= 00:14:35.036 12:34:17 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:35.036 12:34:17 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:35.036 12:34:17 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:35.036 12:34:17 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:35.036 12:34:17 -- lvol/common.sh@53 -- # fio 
--name=fio_test --filename=/dev/nbd1 --offset=0 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:35.036 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:35.036 fio-3.35 00:14:35.036 Starting 1 process 00:14:35.973 00:14:35.973 fio_test: (groupid=0, jobs=1): err= 0: pid=61820: Tue Oct 1 12:34:18 2024 00:14:35.973 read: IOPS=9331, BW=36.5MiB/s (38.2MB/s)(30.0MiB/823msec) 00:14:35.973 clat (usec): min=86, max=517, avg=105.65, stdev=16.78 00:14:35.973 lat (usec): min=86, max=517, avg=105.78, stdev=16.79 00:14:35.973 clat percentiles (usec): 00:14:35.973 | 1.00th=[ 89], 5.00th=[ 90], 10.00th=[ 91], 20.00th=[ 93], 00:14:35.973 | 30.00th=[ 94], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 106], 00:14:35.973 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 127], 95.00th=[ 137], 00:14:35.973 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 215], 99.95th=[ 227], 00:14:35.973 | 99.99th=[ 519] 00:14:35.973 bw ( KiB/s): min=37504, max=37504, per=100.00%, avg=37504.00, stdev= 0.00, samples=1 00:14:35.973 iops : min= 9376, max= 9376, avg=9376.00, stdev= 0.00, samples=1 00:14:35.973 lat (usec) : 100=45.22%, 250=54.75%, 500=0.01%, 750=0.01% 00:14:35.973 cpu : usr=2.80%, sys=5.72%, ctx=7684, majf=0, minf=10 00:14:35.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.973 issued rwts: total=7680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.973 00:14:35.973 Run status group 0 (all jobs): 00:14:35.973 READ: bw=36.5MiB/s (38.2MB/s), 36.5MiB/s-36.5MiB/s (38.2MB/s-38.2MB/s), io=30.0MiB (31.5MB), run=823-823msec 00:14:35.973 00:14:35.973 Disk stats (read/write): 00:14:35.973 nbd1: ios=6029/0, merge=0/0, ticks=592/0, in_queue=592, util=86.61% 00:14:35.973 12:34:18 -- lvol/snapshot_clone.sh@439 -- # run_fio_test /dev/nbd0 0 31457279 read 0xee 00:14:35.973 12:34:18 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:35.973 12:34:18 -- lvol/common.sh@41 -- # local offset=0 00:14:35.973 12:34:18 -- lvol/common.sh@42 -- # local size=31457279 00:14:35.973 12:34:18 -- lvol/common.sh@43 -- # local rw=read 00:14:35.973 12:34:18 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:35.973 12:34:18 -- lvol/common.sh@45 -- # local extra_params= 00:14:35.973 12:34:18 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:35.973 12:34:18 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:35.973 12:34:18 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:35.973 12:34:18 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:35.973 12:34:18 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:36.233 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:36.233 fio-3.35 00:14:36.233 Starting 1 process 00:14:37.171 00:14:37.171 fio_test: (groupid=0, jobs=1): err= 0: pid=61834: Tue Oct 1 12:34:19 2024 00:14:37.171 
read: IOPS=9758, BW=38.1MiB/s (40.0MB/s)(30.0MiB/787msec) 00:14:37.171 clat (usec): min=74, max=604, avg=100.96, stdev=24.54 00:14:37.171 lat (usec): min=74, max=604, avg=101.07, stdev=24.56 00:14:37.171 clat percentiles (usec): 00:14:37.171 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 78], 20.00th=[ 80], 00:14:37.171 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 94], 60.00th=[ 103], 00:14:37.171 | 70.00th=[ 115], 80.00th=[ 122], 90.00th=[ 133], 95.00th=[ 145], 00:14:37.171 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 247], 99.95th=[ 281], 00:14:37.171 | 99.99th=[ 603] 00:14:37.171 bw ( KiB/s): min=36920, max=36920, per=94.58%, avg=36920.00, stdev= 0.00, samples=1 00:14:37.171 iops : min= 9230, max= 9230, avg=9230.00, stdev= 0.00, samples=1 00:14:37.171 lat (usec) : 100=57.90%, 250=42.01%, 500=0.07%, 750=0.03% 00:14:37.171 cpu : usr=3.31%, sys=6.11%, ctx=7682, majf=0, minf=10 00:14:37.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.171 issued rwts: total=7680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.171 00:14:37.171 Run status group 0 (all jobs): 00:14:37.171 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=30.0MiB (31.5MB), run=787-787msec 00:14:37.171 00:14:37.171 Disk stats (read/write): 00:14:37.171 nbd0: ios=6121/0, merge=0/0, ticks=585/0, in_queue=584, util=86.54% 00:14:37.171 12:34:19 -- lvol/snapshot_clone.sh@440 -- # rpc_cmd bdev_get_bdevs -b 1f698358-54aa-47af-a701-9a4607cb277c 00:14:37.171 12:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.171 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 12:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.171 12:34:19 -- lvol/snapshot_clone.sh@440 -- # lvol='[ 00:14:37.171 { 00:14:37.171 "name": "1f698358-54aa-47af-a701-9a4607cb277c", 00:14:37.171 "aliases": [ 00:14:37.171 "lvs_test/lvol_test" 00:14:37.171 ], 00:14:37.171 "product_name": "Logical Volume", 00:14:37.171 "block_size": 512, 00:14:37.171 "num_blocks": 122880, 00:14:37.171 "uuid": "1f698358-54aa-47af-a701-9a4607cb277c", 00:14:37.171 "assigned_rate_limits": { 00:14:37.171 "rw_ios_per_sec": 0, 00:14:37.171 "rw_mbytes_per_sec": 0, 00:14:37.171 "r_mbytes_per_sec": 0, 00:14:37.171 "w_mbytes_per_sec": 0 00:14:37.171 }, 00:14:37.171 "claimed": false, 00:14:37.171 "zoned": false, 00:14:37.171 "supported_io_types": { 00:14:37.171 "read": true, 00:14:37.171 "write": true, 00:14:37.171 "unmap": true, 00:14:37.171 "write_zeroes": true, 00:14:37.171 "flush": false, 00:14:37.171 "reset": true, 00:14:37.171 "compare": false, 00:14:37.171 "compare_and_write": false, 00:14:37.171 "abort": false, 00:14:37.171 "nvme_admin": false, 00:14:37.171 "nvme_io": false 00:14:37.171 }, 00:14:37.171 "memory_domains": [ 00:14:37.171 { 00:14:37.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.171 "dma_device_type": 2 00:14:37.171 } 00:14:37.171 ], 00:14:37.171 "driver_specific": { 00:14:37.171 "lvol": { 00:14:37.171 "lvol_store_uuid": "76bdf23a-5576-40d9-8145-b85987bf9aa4", 00:14:37.171 "base_bdev": "Malloc7", 00:14:37.171 "thin_provision": true, 00:14:37.171 "snapshot": false, 00:14:37.171 "clone": true, 00:14:37.171 "base_snapshot": "lvol_snapshot", 00:14:37.171 "esnap_clone": false 00:14:37.171 } 00:14:37.171 } 00:14:37.171 } 00:14:37.171 ]' 00:14:37.171 
12:34:19 -- lvol/snapshot_clone.sh@441 -- # jq '.[].driver_specific.lvol.clone' 00:14:37.171 12:34:19 -- lvol/snapshot_clone.sh@441 -- # '[' true = true ']' 00:14:37.171 12:34:19 -- lvol/snapshot_clone.sh@444 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:37.171 12:34:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.171 12:34:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:37.171 12:34:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.171 12:34:19 -- bdev/nbd_common.sh@51 -- # local i 00:14:37.171 12:34:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.171 12:34:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@41 -- # break 00:14:37.430 12:34:19 -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.430 12:34:19 -- lvol/snapshot_clone.sh@445 -- # rpc_cmd bdev_lvol_delete fdc51fe1-0f52-42c4-9c2d-0c693dcf8061 00:14:37.430 12:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.430 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.430 12:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.430 12:34:19 -- lvol/snapshot_clone.sh@448 -- # rpc_cmd bdev_get_bdevs -b 1f698358-54aa-47af-a701-9a4607cb277c 00:14:37.430 12:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.430 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.430 12:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.430 12:34:19 -- lvol/snapshot_clone.sh@448 -- # lvol='[ 00:14:37.430 { 00:14:37.430 "name": "1f698358-54aa-47af-a701-9a4607cb277c", 00:14:37.430 "aliases": [ 00:14:37.430 "lvs_test/lvol_test" 00:14:37.430 ], 00:14:37.430 "product_name": "Logical Volume", 00:14:37.430 "block_size": 512, 00:14:37.430 "num_blocks": 122880, 00:14:37.430 "uuid": "1f698358-54aa-47af-a701-9a4607cb277c", 00:14:37.430 "assigned_rate_limits": { 00:14:37.430 "rw_ios_per_sec": 0, 00:14:37.430 "rw_mbytes_per_sec": 0, 00:14:37.430 "r_mbytes_per_sec": 0, 00:14:37.430 "w_mbytes_per_sec": 0 00:14:37.430 }, 00:14:37.430 "claimed": false, 00:14:37.430 "zoned": false, 00:14:37.430 "supported_io_types": { 00:14:37.430 "read": true, 00:14:37.430 "write": true, 00:14:37.430 "unmap": true, 00:14:37.430 "write_zeroes": true, 00:14:37.430 "flush": false, 00:14:37.430 "reset": true, 00:14:37.430 "compare": false, 00:14:37.430 "compare_and_write": false, 00:14:37.430 "abort": false, 00:14:37.430 "nvme_admin": false, 00:14:37.430 "nvme_io": false 00:14:37.430 }, 00:14:37.430 "memory_domains": [ 00:14:37.430 { 00:14:37.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.430 "dma_device_type": 2 00:14:37.430 } 00:14:37.430 ], 00:14:37.430 "driver_specific": { 00:14:37.430 "lvol": { 00:14:37.430 "lvol_store_uuid": "76bdf23a-5576-40d9-8145-b85987bf9aa4", 00:14:37.430 "base_bdev": "Malloc7", 00:14:37.430 "thin_provision": true, 00:14:37.430 "snapshot": false, 00:14:37.430 "clone": false, 00:14:37.430 "esnap_clone": false 00:14:37.430 } 00:14:37.430 } 00:14:37.430 } 00:14:37.430 ]' 
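Note: the two bdev_get_bdevs dumps around this point carry the substance of the delete-snapshot-with-clone case: before the snapshot is removed, the lvol reports "clone": true with "base_snapshot": "lvol_snapshot"; after bdev_lvol_delete of the snapshot it reports "clone": false while staying thin-provisioned, and the trace then re-reads /dev/nbd0 to confirm that both the 0xee first half and the 0xcc second half (data that only ever lived in the snapshot) are still intact. A minimal, hedged sketch of the same check -- the UUID placeholders are illustrative, the jq filter is the one used in the trace:

  # before: the lvol is still a clone of its snapshot
  rpc.py bdev_get_bdevs -b <lvol_uuid> | jq '.[].driver_specific.lvol.clone'
  # -> true
  rpc.py bdev_lvol_delete <snapshot_uuid>
  # after: the snapshot is gone and its blocks now belong to the former clone
  rpc.py bdev_get_bdevs -b <lvol_uuid> | jq '.[].driver_specific.lvol.clone'
  # -> false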
00:14:37.430 12:34:19 -- lvol/snapshot_clone.sh@449 -- # jq '.[].driver_specific.lvol.clone' 00:14:37.430 12:34:19 -- lvol/snapshot_clone.sh@449 -- # '[' false = false ']' 00:14:37.430 12:34:19 -- lvol/snapshot_clone.sh@450 -- # run_fio_test /dev/nbd0 0 31457279 read 0xee 00:14:37.430 12:34:19 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:37.430 12:34:19 -- lvol/common.sh@41 -- # local offset=0 00:14:37.431 12:34:19 -- lvol/common.sh@42 -- # local size=31457279 00:14:37.431 12:34:19 -- lvol/common.sh@43 -- # local rw=read 00:14:37.431 12:34:19 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:37.431 12:34:19 -- lvol/common.sh@45 -- # local extra_params= 00:14:37.431 12:34:19 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:37.431 12:34:19 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:37.431 12:34:19 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:37.431 12:34:19 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:37.431 12:34:19 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:37.689 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:37.689 fio-3.35 00:14:37.689 Starting 1 process 00:14:38.258 00:14:38.258 fio_test: (groupid=0, jobs=1): err= 0: pid=61859: Tue Oct 1 12:34:20 2024 00:14:38.258 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(30.0MiB/623msec) 00:14:38.258 clat (usec): min=59, max=312, avg=79.66, stdev=16.17 00:14:38.258 lat (usec): min=59, max=313, avg=79.78, stdev=16.19 00:14:38.258 clat percentiles (usec): 00:14:38.258 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 67], 00:14:38.258 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 84], 00:14:38.258 | 70.00th=[ 86], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 110], 00:14:38.258 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 182], 99.95th=[ 215], 00:14:38.258 | 99.99th=[ 314] 00:14:38.258 bw ( KiB/s): min=48536, max=48536, per=98.43%, avg=48536.00, stdev= 0.00, samples=1 00:14:38.258 iops : min=12134, max=12134, avg=12134.00, stdev= 0.00, samples=1 00:14:38.258 lat (usec) : 100=89.91%, 250=10.07%, 500=0.03% 00:14:38.258 cpu : usr=3.22%, sys=8.20%, ctx=7680, majf=0, minf=9 00:14:38.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.258 issued rwts: total=7680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.258 00:14:38.258 Run status group 0 (all jobs): 00:14:38.258 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=30.0MiB (31.5MB), run=623-623msec 00:14:38.258 00:14:38.258 Disk stats (read/write): 00:14:38.258 nbd0: ios=4546/0, merge=0/0, ticks=349/0, in_queue=349, util=79.63% 00:14:38.258 12:34:20 -- lvol/snapshot_clone.sh@451 -- # run_fio_test /dev/nbd0 31457280 31457279 read 0xcc 00:14:38.258 12:34:20 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:38.258 12:34:20 -- lvol/common.sh@41 -- # local offset=31457280 00:14:38.258 12:34:20 -- lvol/common.sh@42 
-- # local size=31457279 00:14:38.258 12:34:20 -- lvol/common.sh@43 -- # local rw=read 00:14:38.258 12:34:20 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:38.258 12:34:20 -- lvol/common.sh@45 -- # local extra_params= 00:14:38.258 12:34:20 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:38.258 12:34:20 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:38.258 12:34:20 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:38.258 12:34:20 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=31457280 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:38.258 12:34:20 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=31457280 --size=31457279 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:38.516 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:38.517 fio-3.35 00:14:38.517 Starting 1 process 00:14:39.454 00:14:39.454 fio_test: (groupid=0, jobs=1): err= 0: pid=61873: Tue Oct 1 12:34:21 2024 00:14:39.454 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(30.0MiB/647msec) 00:14:39.454 clat (usec): min=59, max=719, avg=82.79, stdev=20.45 00:14:39.454 lat (usec): min=59, max=719, avg=82.91, stdev=20.48 00:14:39.454 clat percentiles (usec): 00:14:39.454 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 67], 00:14:39.454 | 30.00th=[ 70], 40.00th=[ 76], 50.00th=[ 82], 60.00th=[ 84], 00:14:39.454 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 118], 00:14:39.454 | 99.00th=[ 139], 99.50th=[ 151], 99.90th=[ 198], 99.95th=[ 289], 00:14:39.454 | 99.99th=[ 717] 00:14:39.454 bw ( KiB/s): min=45720, max=45720, per=96.29%, avg=45720.00, stdev= 0.00, samples=1 00:14:39.454 iops : min=11430, max=11430, avg=11430.00, stdev= 0.00, samples=1 00:14:39.454 lat (usec) : 100=84.82%, 250=15.12%, 500=0.04%, 750=0.03% 00:14:39.454 cpu : usr=5.26%, sys=7.43%, ctx=7682, majf=0, minf=10 00:14:39.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.454 issued rwts: total=7680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.454 00:14:39.454 Run status group 0 (all jobs): 00:14:39.454 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=30.0MiB (31.5MB), run=647-647msec 00:14:39.454 00:14:39.454 Disk stats (read/write): 00:14:39.454 nbd0: ios=7601/0, merge=0/0, ticks=564/0, in_queue=563, util=86.50% 00:14:39.454 12:34:21 -- lvol/snapshot_clone.sh@454 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@51 -- # local i 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.454 12:34:21 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.454 12:34:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.455 12:34:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.455 12:34:21 -- bdev/nbd_common.sh@41 -- # break 00:14:39.455 12:34:21 -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.455 12:34:21 -- lvol/snapshot_clone.sh@455 -- # rpc_cmd bdev_lvol_delete 1f698358-54aa-47af-a701-9a4607cb277c 00:14:39.455 12:34:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.455 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:14:39.455 12:34:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.455 12:34:21 -- lvol/snapshot_clone.sh@456 -- # rpc_cmd bdev_lvol_delete_lvstore -u 76bdf23a-5576-40d9-8145-b85987bf9aa4 00:14:39.455 12:34:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.455 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:14:39.455 12:34:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.455 12:34:21 -- lvol/snapshot_clone.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc7 00:14:39.455 12:34:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.455 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:14:39.714 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.714 12:34:22 -- lvol/snapshot_clone.sh@458 -- # check_leftover_devices 00:14:39.714 12:34:22 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:39.714 12:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.974 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:39.974 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.974 12:34:22 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:39.974 12:34:22 -- lvol/common.sh@26 -- # jq length 00:14:39.974 12:34:22 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:39.974 12:34:22 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:39.974 12:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.974 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:39.974 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.974 12:34:22 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:39.974 12:34:22 -- lvol/common.sh@28 -- # jq length 00:14:39.974 ************************************ 00:14:39.974 END TEST test_delete_snapshot_with_clone 00:14:39.974 ************************************ 00:14:39.974 12:34:22 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:39.974 00:14:39.974 real 0m9.927s 00:14:39.974 user 0m1.993s 00:14:39.974 sys 0m0.857s 00:14:39.974 12:34:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.974 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:39.974 12:34:22 -- lvol/snapshot_clone.sh@616 -- # run_test test_delete_snapshot_with_snapshot test_delete_snapshot_with_snapshot 00:14:39.974 12:34:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:39.974 12:34:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:39.974 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:39.974 ************************************ 00:14:39.974 START TEST test_delete_snapshot_with_snapshot 00:14:39.974 ************************************ 00:14:39.974 12:34:22 -- common/autotest_common.sh@1104 -- # test_delete_snapshot_with_snapshot 00:14:39.974 12:34:22 -- lvol/snapshot_clone.sh@463 -- # rpc_cmd bdev_malloc_create 
128 512 00:14:39.974 12:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.974 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:40.234 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@463 -- # malloc_name=Malloc8 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@464 -- # rpc_cmd bdev_lvol_create_lvstore Malloc8 lvs_test 00:14:40.234 12:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.234 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:40.234 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@464 -- # lvs_uuid=6d29695e-3603-4a47-acbd-6aa3fdd9efdd 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@467 -- # round_down 24 00:14:40.234 12:34:22 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:40.234 12:34:22 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:40.234 12:34:22 -- lvol/common.sh@36 -- # echo 24 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@467 -- # lvol_size_mb=24 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@468 -- # lvol_size=25165824 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@470 -- # rpc_cmd bdev_lvol_create -u 6d29695e-3603-4a47-acbd-6aa3fdd9efdd lvol_test 24 00:14:40.234 12:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.234 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:40.234 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@470 -- # lvol_uuid=30308baa-e8b0-4afd-acb9-f4afe5e70928 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@471 -- # rpc_cmd bdev_get_bdevs -b 30308baa-e8b0-4afd-acb9-f4afe5e70928 00:14:40.234 12:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.234 12:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:40.234 12:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@471 -- # lvol='[ 00:14:40.234 { 00:14:40.234 "name": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:40.234 "aliases": [ 00:14:40.234 "lvs_test/lvol_test" 00:14:40.234 ], 00:14:40.234 "product_name": "Logical Volume", 00:14:40.234 "block_size": 512, 00:14:40.234 "num_blocks": 49152, 00:14:40.234 "uuid": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:40.234 "assigned_rate_limits": { 00:14:40.234 "rw_ios_per_sec": 0, 00:14:40.234 "rw_mbytes_per_sec": 0, 00:14:40.234 "r_mbytes_per_sec": 0, 00:14:40.234 "w_mbytes_per_sec": 0 00:14:40.234 }, 00:14:40.234 "claimed": false, 00:14:40.234 "zoned": false, 00:14:40.234 "supported_io_types": { 00:14:40.234 "read": true, 00:14:40.234 "write": true, 00:14:40.234 "unmap": true, 00:14:40.234 "write_zeroes": true, 00:14:40.234 "flush": false, 00:14:40.234 "reset": true, 00:14:40.234 "compare": false, 00:14:40.234 "compare_and_write": false, 00:14:40.234 "abort": false, 00:14:40.234 "nvme_admin": false, 00:14:40.234 "nvme_io": false 00:14:40.234 }, 00:14:40.234 "memory_domains": [ 00:14:40.234 { 00:14:40.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.234 "dma_device_type": 2 00:14:40.234 } 00:14:40.234 ], 00:14:40.234 "driver_specific": { 00:14:40.234 "lvol": { 00:14:40.234 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:40.234 "base_bdev": "Malloc8", 00:14:40.234 "thin_provision": false, 00:14:40.234 "snapshot": false, 00:14:40.234 "clone": false, 00:14:40.234 "esnap_clone": false 00:14:40.234 } 00:14:40.234 } 00:14:40.234 } 00:14:40.234 ]' 00:14:40.234 12:34:22 -- lvol/snapshot_clone.sh@474 -- # 
nbd_start_disks /var/tmp/spdk.sock 30308baa-e8b0-4afd-acb9-f4afe5e70928 /dev/nbd0 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('30308baa-e8b0-4afd-acb9-f4afe5e70928') 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@12 -- # local i 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.234 12:34:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 30308baa-e8b0-4afd-acb9-f4afe5e70928 /dev/nbd0 00:14:40.494 /dev/nbd0 00:14:40.494 12:34:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.494 12:34:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.494 12:34:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:40.494 12:34:22 -- common/autotest_common.sh@857 -- # local i 00:14:40.494 12:34:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:40.494 12:34:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:40.494 12:34:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:40.494 12:34:22 -- common/autotest_common.sh@861 -- # break 00:14:40.494 12:34:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:40.494 12:34:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:40.494 12:34:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:40.494 1+0 records in 00:14:40.494 1+0 records out 00:14:40.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295213 s, 13.9 MB/s 00:14:40.494 12:34:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:40.494 12:34:22 -- common/autotest_common.sh@874 -- # size=4096 00:14:40.494 12:34:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:40.494 12:34:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:40.494 12:34:22 -- common/autotest_common.sh@877 -- # return 0 00:14:40.494 12:34:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.494 12:34:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.494 12:34:22 -- lvol/snapshot_clone.sh@475 -- # run_fio_test /dev/nbd0 0 25165824 write 0xcc 00:14:40.494 12:34:22 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:40.494 12:34:22 -- lvol/common.sh@41 -- # local offset=0 00:14:40.494 12:34:22 -- lvol/common.sh@42 -- # local size=25165824 00:14:40.494 12:34:22 -- lvol/common.sh@43 -- # local rw=write 00:14:40.494 12:34:22 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:40.494 12:34:22 -- lvol/common.sh@45 -- # local extra_params= 00:14:40.494 12:34:22 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:40.494 12:34:22 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:40.494 12:34:22 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:40.494 12:34:22 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=25165824 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:40.494 12:34:22 -- lvol/common.sh@53 -- # fio 
--name=fio_test --filename=/dev/nbd0 --offset=0 --size=25165824 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:40.494 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:40.494 fio-3.35 00:14:40.494 Starting 1 process 00:14:41.872 00:14:41.872 fio_test: (groupid=0, jobs=1): err= 0: pid=61940: Tue Oct 1 12:34:24 2024 00:14:41.872 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(24.0MiB/555msec) 00:14:41.872 clat (usec): min=71, max=456, avg=88.91, stdev=15.88 00:14:41.872 lat (usec): min=71, max=456, avg=89.01, stdev=15.89 00:14:41.872 clat percentiles (usec): 00:14:41.872 | 1.00th=[ 74], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 77], 00:14:41.872 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 88], 00:14:41.872 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 118], 00:14:41.872 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 169], 99.95th=[ 198], 00:14:41.872 | 99.99th=[ 457] 00:14:41.872 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(24.0MiB/533msec); 0 zone resets 00:14:41.872 clat (usec): min=63, max=1393, avg=84.82, stdev=22.65 00:14:41.872 lat (usec): min=67, max=1394, avg=85.73, stdev=22.87 00:14:41.872 clat percentiles (usec): 00:14:41.872 | 1.00th=[ 70], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:14:41.872 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 86], 00:14:41.872 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 117], 00:14:41.872 | 99.00th=[ 133], 99.50th=[ 139], 99.90th=[ 165], 99.95th=[ 192], 00:14:41.872 | 99.99th=[ 1401] 00:14:41.872 bw ( KiB/s): min= 2992, max=46160, per=53.30%, avg=24576.00, stdev=30524.39, samples=2 00:14:41.872 iops : min= 748, max=11540, avg=6144.00, stdev=7631.10, samples=2 00:14:41.872 lat (usec) : 100=82.63%, 250=17.34%, 500=0.02% 00:14:41.872 lat (msec) : 2=0.01% 00:14:41.872 cpu : usr=4.51%, sys=7.45%, ctx=12293, majf=0, minf=170 00:14:41.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:41.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.872 issued rwts: total=6144,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:41.872 00:14:41.872 Run status group 0 (all jobs): 00:14:41.872 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=24.0MiB (25.2MB), run=555-555msec 00:14:41.872 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=24.0MiB (25.2MB), run=533-533msec 00:14:41.872 00:14:41.872 Disk stats (read/write): 00:14:41.872 nbd0: ios=3969/6144, merge=0/0, ticks=330/468, in_queue=798, util=90.10% 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@478 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot 00:14:41.872 12:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.872 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 12:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@478 -- # snapshot_uuid=b9e47511-09bf-45c6-b358-06b268f40668 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@479 -- # rpc_cmd bdev_get_bdevs -b 30308baa-e8b0-4afd-acb9-f4afe5e70928 00:14:41.872 12:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.872 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 12:34:24 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@479 -- # lvol='[ 00:14:41.872 { 00:14:41.872 "name": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:41.872 "aliases": [ 00:14:41.872 "lvs_test/lvol_test" 00:14:41.872 ], 00:14:41.872 "product_name": "Logical Volume", 00:14:41.872 "block_size": 512, 00:14:41.872 "num_blocks": 49152, 00:14:41.872 "uuid": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:41.872 "assigned_rate_limits": { 00:14:41.872 "rw_ios_per_sec": 0, 00:14:41.872 "rw_mbytes_per_sec": 0, 00:14:41.872 "r_mbytes_per_sec": 0, 00:14:41.872 "w_mbytes_per_sec": 0 00:14:41.872 }, 00:14:41.872 "claimed": false, 00:14:41.872 "zoned": false, 00:14:41.872 "supported_io_types": { 00:14:41.872 "read": true, 00:14:41.872 "write": true, 00:14:41.872 "unmap": true, 00:14:41.872 "write_zeroes": true, 00:14:41.872 "flush": false, 00:14:41.872 "reset": true, 00:14:41.872 "compare": false, 00:14:41.872 "compare_and_write": false, 00:14:41.872 "abort": false, 00:14:41.872 "nvme_admin": false, 00:14:41.872 "nvme_io": false 00:14:41.872 }, 00:14:41.872 "memory_domains": [ 00:14:41.872 { 00:14:41.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.872 "dma_device_type": 2 00:14:41.872 } 00:14:41.872 ], 00:14:41.872 "driver_specific": { 00:14:41.872 "lvol": { 00:14:41.872 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:41.872 "base_bdev": "Malloc8", 00:14:41.872 "thin_provision": true, 00:14:41.872 "snapshot": false, 00:14:41.872 "clone": true, 00:14:41.872 "base_snapshot": "lvol_snapshot", 00:14:41.872 "esnap_clone": false 00:14:41.872 } 00:14:41.872 } 00:14:41.872 } 00:14:41.872 ]' 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@480 -- # jq '.[].driver_specific.lvol.base_snapshot' 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@480 -- # '[' '"lvol_snapshot"' = '"lvol_snapshot"' ']' 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@483 -- # first_part=8388608 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@484 -- # second_part=16777216 00:14:41.872 12:34:24 -- lvol/snapshot_clone.sh@485 -- # run_fio_test /dev/nbd0 8388608 8388608 write 0xee 00:14:41.872 12:34:24 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:41.872 12:34:24 -- lvol/common.sh@41 -- # local offset=8388608 00:14:41.872 12:34:24 -- lvol/common.sh@42 -- # local size=8388608 00:14:41.872 12:34:24 -- lvol/common.sh@43 -- # local rw=write 00:14:41.872 12:34:24 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:41.872 12:34:24 -- lvol/common.sh@45 -- # local extra_params= 00:14:41.872 12:34:24 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:41.872 12:34:24 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:41.872 12:34:24 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:41.872 12:34:24 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=8388608 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:41.872 12:34:24 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=8388608 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:41.872 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:41.872 fio-3.35 00:14:41.872 Starting 1 process 00:14:42.444 00:14:42.444 fio_test: (groupid=0, jobs=1): err= 0: pid=61964: Tue Oct 1 12:34:24 2024 00:14:42.444 read: IOPS=13.0k, BW=51.0MiB/s 
(53.4MB/s)(8192KiB/157msec) 00:14:42.444 clat (usec): min=56, max=615, avg=75.05, stdev=20.30 00:14:42.444 lat (usec): min=56, max=615, avg=75.14, stdev=20.30 00:14:42.444 clat percentiles (usec): 00:14:42.444 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 64], 00:14:42.444 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 74], 00:14:42.444 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 109], 00:14:42.444 | 99.00th=[ 130], 99.50th=[ 143], 99.90th=[ 174], 99.95th=[ 297], 00:14:42.444 | 99.99th=[ 619] 00:14:42.444 write: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(8192KiB/180msec); 0 zone resets 00:14:42.444 clat (usec): min=61, max=1487, avg=85.72, stdev=50.72 00:14:42.444 lat (usec): min=62, max=1487, avg=86.58, stdev=51.07 00:14:42.444 clat percentiles (usec): 00:14:42.444 | 1.00th=[ 71], 5.00th=[ 71], 10.00th=[ 71], 20.00th=[ 72], 00:14:42.444 | 30.00th=[ 73], 40.00th=[ 76], 50.00th=[ 80], 60.00th=[ 83], 00:14:42.444 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 118], 00:14:42.444 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 1188], 99.95th=[ 1352], 00:14:42.444 | 99.99th=[ 1483] 00:14:42.444 lat (usec) : 100=88.72%, 250=11.16%, 500=0.02%, 750=0.02% 00:14:42.444 lat (msec) : 2=0.07% 00:14:42.444 cpu : usr=2.99%, sys=8.36%, ctx=4115, majf=0, minf=71 00:14:42.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.444 issued rwts: total=2048,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.444 00:14:42.444 Run status group 0 (all jobs): 00:14:42.444 READ: bw=51.0MiB/s (53.4MB/s), 51.0MiB/s-51.0MiB/s (53.4MB/s-53.4MB/s), io=8192KiB (8389kB), run=157-157msec 00:14:42.444 WRITE: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=8192KiB (8389kB), run=180-180msec 00:14:42.444 00:14:42.444 Disk stats (read/write): 00:14:42.444 nbd0: ios=0/1589, merge=0/0, ticks=0/122, in_queue=122, util=57.92% 00:14:42.444 12:34:24 -- lvol/snapshot_clone.sh@488 -- # nbd_start_disks /var/tmp/spdk.sock b9e47511-09bf-45c6-b358-06b268f40668 /dev/nbd1 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('b9e47511-09bf-45c6-b358-06b268f40668') 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@12 -- # local i 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.444 12:34:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk b9e47511-09bf-45c6-b358-06b268f40668 /dev/nbd1 00:14:42.712 /dev/nbd1 00:14:42.712 12:34:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:42.712 12:34:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:42.712 12:34:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:42.712 12:34:25 -- common/autotest_common.sh@857 -- # local i 00:14:42.712 12:34:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:42.712 12:34:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:42.712 12:34:25 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:42.712 12:34:25 -- common/autotest_common.sh@861 -- # break 00:14:42.712 12:34:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:42.712 12:34:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:42.712 12:34:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:42.712 1+0 records in 00:14:42.712 1+0 records out 00:14:42.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351857 s, 11.6 MB/s 00:14:42.712 12:34:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:42.712 12:34:25 -- common/autotest_common.sh@874 -- # size=4096 00:14:42.712 12:34:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:42.712 12:34:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:42.712 12:34:25 -- common/autotest_common.sh@877 -- # return 0 00:14:42.712 12:34:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.712 12:34:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.712 12:34:25 -- lvol/snapshot_clone.sh@489 -- # run_fio_test /dev/nbd1 0 25165824 read 0xcc 00:14:42.712 12:34:25 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:14:42.712 12:34:25 -- lvol/common.sh@41 -- # local offset=0 00:14:42.712 12:34:25 -- lvol/common.sh@42 -- # local size=25165824 00:14:42.712 12:34:25 -- lvol/common.sh@43 -- # local rw=read 00:14:42.712 12:34:25 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:42.712 12:34:25 -- lvol/common.sh@45 -- # local extra_params= 00:14:42.712 12:34:25 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:42.712 12:34:25 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:42.712 12:34:25 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:42.712 12:34:25 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=25165824 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:42.712 12:34:25 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=25165824 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:42.970 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:42.970 fio-3.35 00:14:42.970 Starting 1 process 00:14:43.537 00:14:43.537 fio_test: (groupid=0, jobs=1): err= 0: pid=61987: Tue Oct 1 12:34:26 2024 00:14:43.537 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(24.0MiB/595msec) 00:14:43.537 clat (usec): min=74, max=368, avg=95.43, stdev=19.63 00:14:43.537 lat (usec): min=74, max=369, avg=95.54, stdev=19.65 00:14:43.537 clat percentiles (usec): 00:14:43.537 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 80], 00:14:43.537 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 94], 00:14:43.537 | 70.00th=[ 99], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 135], 00:14:43.537 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 190], 99.95th=[ 212], 00:14:43.537 | 99.99th=[ 367] 00:14:43.537 bw ( KiB/s): min=41000, max=41000, per=99.26%, avg=41000.00, stdev= 0.00, samples=1 00:14:43.537 iops : min=10250, max=10250, avg=10250.00, stdev= 0.00, samples=1 00:14:43.537 lat (usec) : 100=70.91%, 250=29.05%, 500=0.03% 00:14:43.537 cpu : usr=2.69%, sys=5.72%, ctx=6148, majf=0, minf=10 00:14:43.537 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:43.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.537 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:43.537 00:14:43.537 Run status group 0 (all jobs): 00:14:43.537 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=24.0MiB (25.2MB), run=595-595msec 00:14:43.537 00:14:43.537 Disk stats (read/write): 00:14:43.537 nbd1: ios=4013/0, merge=0/0, ticks=364/0, in_queue=364, util=79.68% 00:14:43.537 12:34:26 -- lvol/snapshot_clone.sh@493 -- # rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot2 00:14:43.537 12:34:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.537 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:43.537 12:34:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.537 12:34:26 -- lvol/snapshot_clone.sh@493 -- # snapshot_uuid2=883b155f-e488-48c0-b88e-fae51c0acdde 00:14:43.537 12:34:26 -- lvol/snapshot_clone.sh@494 -- # rpc_cmd bdev_get_bdevs -b 30308baa-e8b0-4afd-acb9-f4afe5e70928 00:14:43.537 12:34:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.537 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:43.537 12:34:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.537 12:34:26 -- lvol/snapshot_clone.sh@494 -- # lvol='[ 00:14:43.537 { 00:14:43.537 "name": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:43.537 "aliases": [ 00:14:43.537 "lvs_test/lvol_test" 00:14:43.537 ], 00:14:43.537 "product_name": "Logical Volume", 00:14:43.537 "block_size": 512, 00:14:43.537 "num_blocks": 49152, 00:14:43.537 "uuid": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:43.537 "assigned_rate_limits": { 00:14:43.537 "rw_ios_per_sec": 0, 00:14:43.537 "rw_mbytes_per_sec": 0, 00:14:43.537 "r_mbytes_per_sec": 0, 00:14:43.537 "w_mbytes_per_sec": 0 00:14:43.537 }, 00:14:43.537 "claimed": false, 00:14:43.537 "zoned": false, 00:14:43.537 "supported_io_types": { 00:14:43.537 "read": true, 00:14:43.537 "write": true, 00:14:43.537 "unmap": true, 00:14:43.537 "write_zeroes": true, 00:14:43.537 "flush": false, 00:14:43.537 "reset": true, 00:14:43.537 "compare": false, 00:14:43.537 "compare_and_write": false, 00:14:43.537 "abort": false, 00:14:43.537 "nvme_admin": false, 00:14:43.537 "nvme_io": false 00:14:43.537 }, 00:14:43.537 "memory_domains": [ 00:14:43.537 { 00:14:43.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.537 "dma_device_type": 2 00:14:43.537 } 00:14:43.537 ], 00:14:43.537 "driver_specific": { 00:14:43.537 "lvol": { 00:14:43.537 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:43.537 "base_bdev": "Malloc8", 00:14:43.537 "thin_provision": true, 00:14:43.537 "snapshot": false, 00:14:43.537 "clone": true, 00:14:43.537 "base_snapshot": "lvol_snapshot2", 00:14:43.537 "esnap_clone": false 00:14:43.537 } 00:14:43.537 } 00:14:43.537 } 00:14:43.537 ]' 00:14:43.537 12:34:26 -- lvol/snapshot_clone.sh@495 -- # rpc_cmd bdev_get_bdevs -b b9e47511-09bf-45c6-b358-06b268f40668 00:14:43.537 12:34:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.537 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:43.795 12:34:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.795 12:34:26 -- lvol/snapshot_clone.sh@495 -- # snapshot='[ 00:14:43.795 { 00:14:43.795 "name": 
"b9e47511-09bf-45c6-b358-06b268f40668", 00:14:43.795 "aliases": [ 00:14:43.795 "lvs_test/lvol_snapshot" 00:14:43.795 ], 00:14:43.795 "product_name": "Logical Volume", 00:14:43.795 "block_size": 512, 00:14:43.795 "num_blocks": 49152, 00:14:43.796 "uuid": "b9e47511-09bf-45c6-b358-06b268f40668", 00:14:43.796 "assigned_rate_limits": { 00:14:43.796 "rw_ios_per_sec": 0, 00:14:43.796 "rw_mbytes_per_sec": 0, 00:14:43.796 "r_mbytes_per_sec": 0, 00:14:43.796 "w_mbytes_per_sec": 0 00:14:43.796 }, 00:14:43.796 "claimed": false, 00:14:43.796 "zoned": false, 00:14:43.796 "supported_io_types": { 00:14:43.796 "read": true, 00:14:43.796 "write": false, 00:14:43.796 "unmap": false, 00:14:43.796 "write_zeroes": false, 00:14:43.796 "flush": false, 00:14:43.796 "reset": true, 00:14:43.796 "compare": false, 00:14:43.796 "compare_and_write": false, 00:14:43.796 "abort": false, 00:14:43.796 "nvme_admin": false, 00:14:43.796 "nvme_io": false 00:14:43.796 }, 00:14:43.796 "memory_domains": [ 00:14:43.796 { 00:14:43.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.796 "dma_device_type": 2 00:14:43.796 } 00:14:43.796 ], 00:14:43.796 "driver_specific": { 00:14:43.796 "lvol": { 00:14:43.796 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:43.796 "base_bdev": "Malloc8", 00:14:43.796 "thin_provision": false, 00:14:43.796 "snapshot": true, 00:14:43.796 "clone": false, 00:14:43.796 "clones": [ 00:14:43.796 "lvol_snapshot2" 00:14:43.796 ], 00:14:43.796 "esnap_clone": false 00:14:43.796 } 00:14:43.796 } 00:14:43.796 } 00:14:43.796 ]' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@496 -- # rpc_cmd bdev_get_bdevs -b 883b155f-e488-48c0-b88e-fae51c0acdde 00:14:43.796 12:34:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.796 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:43.796 12:34:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@496 -- # snapshot2='[ 00:14:43.796 { 00:14:43.796 "name": "883b155f-e488-48c0-b88e-fae51c0acdde", 00:14:43.796 "aliases": [ 00:14:43.796 "lvs_test/lvol_snapshot2" 00:14:43.796 ], 00:14:43.796 "product_name": "Logical Volume", 00:14:43.796 "block_size": 512, 00:14:43.796 "num_blocks": 49152, 00:14:43.796 "uuid": "883b155f-e488-48c0-b88e-fae51c0acdde", 00:14:43.796 "assigned_rate_limits": { 00:14:43.796 "rw_ios_per_sec": 0, 00:14:43.796 "rw_mbytes_per_sec": 0, 00:14:43.796 "r_mbytes_per_sec": 0, 00:14:43.796 "w_mbytes_per_sec": 0 00:14:43.796 }, 00:14:43.796 "claimed": false, 00:14:43.796 "zoned": false, 00:14:43.796 "supported_io_types": { 00:14:43.796 "read": true, 00:14:43.796 "write": false, 00:14:43.796 "unmap": false, 00:14:43.796 "write_zeroes": false, 00:14:43.796 "flush": false, 00:14:43.796 "reset": true, 00:14:43.796 "compare": false, 00:14:43.796 "compare_and_write": false, 00:14:43.796 "abort": false, 00:14:43.796 "nvme_admin": false, 00:14:43.796 "nvme_io": false 00:14:43.796 }, 00:14:43.796 "memory_domains": [ 00:14:43.796 { 00:14:43.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.796 "dma_device_type": 2 00:14:43.796 } 00:14:43.796 ], 00:14:43.796 "driver_specific": { 00:14:43.796 "lvol": { 00:14:43.796 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:43.796 "base_bdev": "Malloc8", 00:14:43.796 "thin_provision": true, 00:14:43.796 "snapshot": true, 00:14:43.796 "clone": true, 00:14:43.796 "base_snapshot": "lvol_snapshot", 00:14:43.796 "clones": [ 00:14:43.796 "lvol_test" 00:14:43.796 ], 00:14:43.796 "esnap_clone": false 00:14:43.796 } 00:14:43.796 } 00:14:43.796 } 
00:14:43.796 ]' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@497 -- # jq '.[].driver_specific.lvol.base_snapshot' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@497 -- # '[' '"lvol_snapshot"' = '"lvol_snapshot"' ']' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@498 -- # jq '.[].driver_specific.lvol.clones|sort' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@498 -- # jq '.|sort' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@498 -- # '[' '[ 00:14:43.796 "lvol_test" 00:14:43.796 ]' = '[ 00:14:43.796 "lvol_test" 00:14:43.796 ]' ']' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@499 -- # jq '.[].driver_specific.lvol.clone' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@499 -- # '[' true = true ']' 00:14:43.796 12:34:26 -- lvol/snapshot_clone.sh@500 -- # jq '.[].driver_specific.lvol.snapshot' 00:14:44.054 12:34:26 -- lvol/snapshot_clone.sh@500 -- # '[' true = true ']' 00:14:44.054 12:34:26 -- lvol/snapshot_clone.sh@501 -- # jq '.[].driver_specific.lvol.clones|sort' 00:14:44.054 12:34:26 -- lvol/snapshot_clone.sh@501 -- # jq '.|sort' 00:14:44.054 12:34:26 -- lvol/snapshot_clone.sh@501 -- # '[' '[ 00:14:44.054 "lvol_snapshot2" 00:14:44.054 ]' = '[ 00:14:44.054 "lvol_snapshot2" 00:14:44.054 ]' ']' 00:14:44.054 12:34:26 -- lvol/snapshot_clone.sh@504 -- # run_fio_test /dev/nbd1 0 4096 read 0xcc 00:14:44.054 12:34:26 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:14:44.054 12:34:26 -- lvol/common.sh@41 -- # local offset=0 00:14:44.054 12:34:26 -- lvol/common.sh@42 -- # local size=4096 00:14:44.054 12:34:26 -- lvol/common.sh@43 -- # local rw=read 00:14:44.054 12:34:26 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:44.054 12:34:26 -- lvol/common.sh@45 -- # local extra_params= 00:14:44.054 12:34:26 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:44.054 12:34:26 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:44.054 12:34:26 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:44.054 12:34:26 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=4096 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:44.054 12:34:26 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=4096 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:44.054 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:44.054 fio-3.35 00:14:44.054 Starting 1 process 00:14:44.312 00:14:44.312 fio_test: (groupid=0, jobs=1): err= 0: pid=62019: Tue Oct 1 12:34:26 2024 00:14:44.312 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096B/1msec) 00:14:44.312 clat (nsec): min=359160, max=359160, avg=359160.00, stdev= 0.00 00:14:44.312 lat (nsec): min=360038, max=360038, avg=360038.00, stdev= 0.00 00:14:44.312 clat percentiles (usec): 00:14:44.312 | 1.00th=[ 359], 5.00th=[ 359], 10.00th=[ 359], 20.00th=[ 359], 00:14:44.312 | 30.00th=[ 359], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 359], 00:14:44.312 | 70.00th=[ 359], 80.00th=[ 359], 90.00th=[ 359], 95.00th=[ 359], 00:14:44.312 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:14:44.312 | 99.99th=[ 359] 00:14:44.312 lat (usec) : 500=100.00% 00:14:44.312 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=10 00:14:44.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.312 issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.312 00:14:44.312 Run status group 0 (all jobs): 00:14:44.312 READ: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), io=4096B (4096B), run=1-1msec 00:14:44.312 00:14:44.312 Disk stats (read/write): 00:14:44.312 nbd1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:44.312 12:34:26 -- lvol/snapshot_clone.sh@505 -- # nbd_start_disks /var/tmp/spdk.sock 883b155f-e488-48c0-b88e-fae51c0acdde /dev/nbd2 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('883b155f-e488-48c0-b88e-fae51c0acdde') 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd2') 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@12 -- # local i 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.312 12:34:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 883b155f-e488-48c0-b88e-fae51c0acdde /dev/nbd2 00:14:44.571 /dev/nbd2 00:14:44.571 12:34:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:14:44.571 12:34:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:14:44.571 12:34:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:14:44.571 12:34:26 -- common/autotest_common.sh@857 -- # local i 00:14:44.571 12:34:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:44.571 12:34:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:44.571 12:34:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:14:44.571 12:34:26 -- common/autotest_common.sh@861 -- # break 00:14:44.571 12:34:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:44.571 12:34:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:44.571 12:34:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:14:44.571 1+0 records in 00:14:44.571 1+0 records out 00:14:44.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314517 s, 13.0 MB/s 00:14:44.571 12:34:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:44.571 12:34:26 -- common/autotest_common.sh@874 -- # size=4096 00:14:44.571 12:34:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:14:44.571 12:34:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:44.571 12:34:26 -- common/autotest_common.sh@877 -- # return 0 00:14:44.571 12:34:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.571 12:34:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.571 12:34:26 -- lvol/snapshot_clone.sh@506 -- # run_fio_test /dev/nbd2 0 8388607 read 0xcc 00:14:44.571 12:34:26 -- lvol/common.sh@40 -- # local file=/dev/nbd2 00:14:44.571 12:34:26 -- lvol/common.sh@41 -- # local offset=0 00:14:44.571 12:34:26 -- lvol/common.sh@42 -- # local size=8388607 00:14:44.571 12:34:26 -- lvol/common.sh@43 -- # local rw=read 00:14:44.571 12:34:26 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:44.571 12:34:26 -- lvol/common.sh@45 
-- # local extra_params= 00:14:44.571 12:34:26 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:44.571 12:34:26 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:44.571 12:34:26 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:44.571 12:34:26 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd2 --offset=0 --size=8388607 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:44.571 12:34:26 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd2 --offset=0 --size=8388607 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:44.571 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:44.571 fio-3.35 00:14:44.571 Starting 1 process 00:14:45.138 00:14:45.139 fio_test: (groupid=0, jobs=1): err= 0: pid=62036: Tue Oct 1 12:34:27 2024 00:14:45.139 read: IOPS=6942, BW=27.1MiB/s (28.4MB/s)(8192KiB/295msec) 00:14:45.139 clat (usec): min=92, max=653, avg=142.02, stdev=33.66 00:14:45.139 lat (usec): min=92, max=653, avg=142.14, stdev=33.68 00:14:45.139 clat percentiles (usec): 00:14:45.139 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 106], 00:14:45.139 | 30.00th=[ 115], 40.00th=[ 133], 50.00th=[ 147], 60.00th=[ 157], 00:14:45.139 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 190], 00:14:45.139 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 281], 99.95th=[ 367], 00:14:45.139 | 99.99th=[ 652] 00:14:45.139 lat (usec) : 100=7.13%, 250=92.63%, 500=0.20%, 750=0.05% 00:14:45.139 cpu : usr=1.70%, sys=5.10%, ctx=2051, majf=0, minf=10 00:14:45.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.139 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.139 00:14:45.139 Run status group 0 (all jobs): 00:14:45.139 READ: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=8192KiB (8389kB), run=295-295msec 00:14:45.139 00:14:45.139 Disk stats (read/write): 00:14:45.139 nbd2: ios=886/0, merge=0/0, ticks=133/0, in_queue=133, util=58.61% 00:14:45.139 12:34:27 -- lvol/snapshot_clone.sh@507 -- # run_fio_test /dev/nbd2 8388608 8388608 read 0xee 00:14:45.139 12:34:27 -- lvol/common.sh@40 -- # local file=/dev/nbd2 00:14:45.139 12:34:27 -- lvol/common.sh@41 -- # local offset=8388608 00:14:45.139 12:34:27 -- lvol/common.sh@42 -- # local size=8388608 00:14:45.139 12:34:27 -- lvol/common.sh@43 -- # local rw=read 00:14:45.139 12:34:27 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:45.139 12:34:27 -- lvol/common.sh@45 -- # local extra_params= 00:14:45.139 12:34:27 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:45.139 12:34:27 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:45.139 12:34:27 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:45.139 12:34:27 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd2 --offset=8388608 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:45.139 12:34:27 -- lvol/common.sh@53 -- # fio 
--name=fio_test --filename=/dev/nbd2 --offset=8388608 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:45.139 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:45.139 fio-3.35 00:14:45.139 Starting 1 process 00:14:45.706 00:14:45.706 fio_test: (groupid=0, jobs=1): err= 0: pid=62045: Tue Oct 1 12:34:28 2024 00:14:45.706 read: IOPS=7501, BW=29.3MiB/s (30.7MB/s)(8192KiB/273msec) 00:14:45.706 clat (usec): min=87, max=513, avg=131.46, stdev=32.80 00:14:45.706 lat (usec): min=87, max=514, avg=131.57, stdev=32.82 00:14:45.706 clat percentiles (usec): 00:14:45.706 | 1.00th=[ 90], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 95], 00:14:45.706 | 30.00th=[ 105], 40.00th=[ 117], 50.00th=[ 139], 60.00th=[ 141], 00:14:45.706 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 186], 00:14:45.706 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 223], 99.95th=[ 351], 00:14:45.706 | 99.99th=[ 515] 00:14:45.707 lat (usec) : 100=26.90%, 250=73.00%, 500=0.05%, 750=0.05% 00:14:45.707 cpu : usr=1.47%, sys=5.51%, ctx=2224, majf=0, minf=9 00:14:45.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.707 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.707 00:14:45.707 Run status group 0 (all jobs): 00:14:45.707 READ: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=8192KiB (8389kB), run=273-273msec 00:14:45.707 00:14:45.707 Disk stats (read/write): 00:14:45.707 nbd2: ios=910/0, merge=0/0, ticks=134/0, in_queue=134, util=58.78% 00:14:45.707 12:34:28 -- lvol/snapshot_clone.sh@508 -- # run_fio_test /dev/nbd2 16777216 8388608 read 0xcc 00:14:45.707 12:34:28 -- lvol/common.sh@40 -- # local file=/dev/nbd2 00:14:45.707 12:34:28 -- lvol/common.sh@41 -- # local offset=16777216 00:14:45.707 12:34:28 -- lvol/common.sh@42 -- # local size=8388608 00:14:45.707 12:34:28 -- lvol/common.sh@43 -- # local rw=read 00:14:45.707 12:34:28 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:45.707 12:34:28 -- lvol/common.sh@45 -- # local extra_params= 00:14:45.707 12:34:28 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:45.707 12:34:28 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:45.707 12:34:28 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:45.707 12:34:28 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd2 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:45.707 12:34:28 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd2 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:45.707 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:45.707 fio-3.35 00:14:45.707 Starting 1 process 00:14:46.272 00:14:46.272 fio_test: (groupid=0, jobs=1): err= 0: pid=62053: Tue Oct 1 12:34:28 2024 00:14:46.272 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(8192KiB/251msec) 00:14:46.272 clat (usec): min=91, max=530, avg=120.93, stdev=28.53 00:14:46.272 lat 
(usec): min=91, max=531, avg=121.05, stdev=28.56 00:14:46.272 clat percentiles (usec): 00:14:46.272 | 1.00th=[ 94], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:14:46.272 | 30.00th=[ 101], 40.00th=[ 106], 50.00th=[ 113], 60.00th=[ 118], 00:14:46.272 | 70.00th=[ 128], 80.00th=[ 145], 90.00th=[ 163], 95.00th=[ 174], 00:14:46.272 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 293], 99.95th=[ 330], 00:14:46.272 | 99.99th=[ 529] 00:14:46.272 lat (usec) : 100=27.54%, 250=72.22%, 500=0.20%, 750=0.05% 00:14:46.272 cpu : usr=0.80%, sys=6.00%, ctx=2048, majf=0, minf=9 00:14:46.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.272 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.272 00:14:46.272 Run status group 0 (all jobs): 00:14:46.272 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=8192KiB (8389kB), run=251-251msec 00:14:46.272 00:14:46.272 Disk stats (read/write): 00:14:46.272 nbd2: ios=1132/0, merge=0/0, ticks=133/0, in_queue=133, util=59.02% 00:14:46.272 12:34:28 -- lvol/snapshot_clone.sh@511 -- # run_fio_test /dev/nbd0 8388608 8388608 read 0xee 00:14:46.272 12:34:28 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:46.272 12:34:28 -- lvol/common.sh@41 -- # local offset=8388608 00:14:46.272 12:34:28 -- lvol/common.sh@42 -- # local size=8388608 00:14:46.272 12:34:28 -- lvol/common.sh@43 -- # local rw=read 00:14:46.272 12:34:28 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:46.272 12:34:28 -- lvol/common.sh@45 -- # local extra_params= 00:14:46.272 12:34:28 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:46.272 12:34:28 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:46.272 12:34:28 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:46.272 12:34:28 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:46.272 12:34:28 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:46.272 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:46.272 fio-3.35 00:14:46.272 Starting 1 process 00:14:46.530 00:14:46.530 fio_test: (groupid=0, jobs=1): err= 0: pid=62062: Tue Oct 1 12:34:29 2024 00:14:46.530 read: IOPS=6671, BW=26.1MiB/s (27.3MB/s)(8192KiB/307msec) 00:14:46.530 clat (usec): min=91, max=414, avg=148.25, stdev=29.99 00:14:46.530 lat (usec): min=91, max=415, avg=148.37, stdev=30.01 00:14:46.530 clat percentiles (usec): 00:14:46.530 | 1.00th=[ 94], 5.00th=[ 100], 10.00th=[ 105], 20.00th=[ 113], 00:14:46.530 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 161], 00:14:46.530 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 190], 00:14:46.530 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 249], 99.95th=[ 269], 00:14:46.530 | 99.99th=[ 416] 00:14:46.530 lat (usec) : 100=5.22%, 250=94.68%, 500=0.10% 00:14:46.530 cpu : usr=2.94%, sys=3.92%, ctx=2049, majf=0, minf=10 00:14:46.530 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.530 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.530 00:14:46.530 Run status group 0 (all jobs): 00:14:46.530 READ: bw=26.1MiB/s (27.3MB/s), 26.1MiB/s-26.1MiB/s (27.3MB/s-27.3MB/s), io=8192KiB (8389kB), run=307-307msec 00:14:46.530 00:14:46.530 Disk stats (read/write): 00:14:46.530 nbd0: ios=858/0, merge=0/0, ticks=131/0, in_queue=132, util=58.26% 00:14:46.788 12:34:29 -- lvol/snapshot_clone.sh@512 -- # run_fio_test /dev/nbd0 16777216 8388608 read 0xcc 00:14:46.788 12:34:29 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:46.788 12:34:29 -- lvol/common.sh@41 -- # local offset=16777216 00:14:46.788 12:34:29 -- lvol/common.sh@42 -- # local size=8388608 00:14:46.788 12:34:29 -- lvol/common.sh@43 -- # local rw=read 00:14:46.788 12:34:29 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:46.788 12:34:29 -- lvol/common.sh@45 -- # local extra_params= 00:14:46.788 12:34:29 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:46.788 12:34:29 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:46.788 12:34:29 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:46.788 12:34:29 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:46.788 12:34:29 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:46.788 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:46.788 fio-3.35 00:14:46.788 Starting 1 process 00:14:47.356 00:14:47.356 fio_test: (groupid=0, jobs=1): err= 0: pid=62069: Tue Oct 1 12:34:29 2024 00:14:47.356 read: IOPS=6803, BW=26.6MiB/s (27.9MB/s)(8192KiB/301msec) 00:14:47.356 clat (usec): min=96, max=643, avg=144.90, stdev=33.39 00:14:47.356 lat (usec): min=96, max=643, avg=145.04, stdev=33.41 00:14:47.356 clat percentiles (usec): 00:14:47.356 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 109], 00:14:47.356 | 30.00th=[ 116], 40.00th=[ 130], 50.00th=[ 157], 60.00th=[ 163], 00:14:47.356 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 190], 00:14:47.356 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 310], 99.95th=[ 355], 00:14:47.356 | 99.99th=[ 644] 00:14:47.356 lat (usec) : 100=3.22%, 250=96.53%, 500=0.20%, 750=0.05% 00:14:47.356 cpu : usr=1.33%, sys=5.33%, ctx=2049, majf=0, minf=10 00:14:47.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.356 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.356 00:14:47.356 Run status group 0 (all jobs): 00:14:47.356 READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=8192KiB (8389kB), run=301-301msec 00:14:47.356 00:14:47.356 Disk stats (read/write): 
00:14:47.356 nbd0: ios=897/0, merge=0/0, ticks=131/0, in_queue=131, util=58.68% 00:14:47.356 12:34:29 -- lvol/snapshot_clone.sh@513 -- # jq '.[].driver_specific.lvol.clone' 00:14:47.356 12:34:29 -- lvol/snapshot_clone.sh@513 -- # '[' true = true ']' 00:14:47.356 12:34:29 -- lvol/snapshot_clone.sh@514 -- # jq '.[].driver_specific.lvol.base_snapshot' 00:14:47.356 12:34:29 -- lvol/snapshot_clone.sh@514 -- # '[' '"lvol_snapshot2"' = '"lvol_snapshot2"' ']' 00:14:47.356 12:34:29 -- lvol/snapshot_clone.sh@517 -- # run_fio_test /dev/nbd0 16777216 8388608 write 0xdd 00:14:47.356 12:34:29 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:47.356 12:34:29 -- lvol/common.sh@41 -- # local offset=16777216 00:14:47.356 12:34:29 -- lvol/common.sh@42 -- # local size=8388608 00:14:47.356 12:34:29 -- lvol/common.sh@43 -- # local rw=write 00:14:47.356 12:34:29 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:47.356 12:34:29 -- lvol/common.sh@45 -- # local extra_params= 00:14:47.356 12:34:29 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:47.356 12:34:29 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:47.356 12:34:29 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:47.356 12:34:29 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:47.356 12:34:29 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:47.356 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:47.356 fio-3.35 00:14:47.356 Starting 1 process 00:14:48.293 00:14:48.293 fio_test: (groupid=0, jobs=1): err= 0: pid=62083: Tue Oct 1 12:34:30 2024 00:14:48.293 read: IOPS=7420, BW=29.0MiB/s (30.4MB/s)(8192KiB/276msec) 00:14:48.293 clat (usec): min=88, max=530, avg=132.79, stdev=36.73 00:14:48.293 lat (usec): min=88, max=530, avg=132.88, stdev=36.75 00:14:48.293 clat percentiles (usec): 00:14:48.293 | 1.00th=[ 89], 5.00th=[ 90], 10.00th=[ 91], 20.00th=[ 94], 00:14:48.293 | 30.00th=[ 102], 40.00th=[ 112], 50.00th=[ 139], 60.00th=[ 147], 00:14:48.293 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 186], 00:14:48.293 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 359], 99.95th=[ 457], 00:14:48.293 | 99.99th=[ 529] 00:14:48.293 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(8192KiB/336msec); 0 zone resets 00:14:48.293 clat (usec): min=90, max=1634, avg=161.54, stdev=50.75 00:14:48.293 lat (usec): min=90, max=1653, avg=162.57, stdev=51.03 00:14:48.293 clat percentiles (usec): 00:14:48.293 | 1.00th=[ 113], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 143], 00:14:48.293 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 163], 00:14:48.293 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 196], 00:14:48.293 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 1123], 99.95th=[ 1336], 00:14:48.293 | 99.99th=[ 1631] 00:14:48.293 bw ( KiB/s): min=16384, max=16384, per=67.20%, avg=16384.00, stdev= 0.00, samples=1 00:14:48.293 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:48.293 lat (usec) : 100=14.23%, 250=85.45%, 500=0.22%, 750=0.02% 00:14:48.293 lat (msec) : 2=0.07% 00:14:48.293 cpu : usr=3.61%, sys=4.43%, ctx=4100, majf=0, minf=70 00:14:48.293 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.293 issued rwts: total=2048,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.293 00:14:48.293 Run status group 0 (all jobs): 00:14:48.293 READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=8192KiB (8389kB), run=276-276msec 00:14:48.293 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=8192KiB (8389kB), run=336-336msec 00:14:48.293 00:14:48.293 Disk stats (read/write): 00:14:48.293 nbd0: ios=375/2048, merge=0/0, ticks=62/308, in_queue=371, util=80.24% 00:14:48.293 12:34:30 -- lvol/snapshot_clone.sh@520 -- # run_fio_test /dev/nbd1 0 4096 read 0xcc 00:14:48.293 12:34:30 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:14:48.293 12:34:30 -- lvol/common.sh@41 -- # local offset=0 00:14:48.293 12:34:30 -- lvol/common.sh@42 -- # local size=4096 00:14:48.293 12:34:30 -- lvol/common.sh@43 -- # local rw=read 00:14:48.293 12:34:30 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:48.293 12:34:30 -- lvol/common.sh@45 -- # local extra_params= 00:14:48.293 12:34:30 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:48.293 12:34:30 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:48.293 12:34:30 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:48.293 12:34:30 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=4096 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:48.293 12:34:30 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=4096 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:48.293 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:48.293 fio-3.35 00:14:48.293 Starting 1 process 00:14:48.293 00:14:48.294 fio_test: (groupid=0, jobs=1): err= 0: pid=62097: Tue Oct 1 12:34:30 2024 00:14:48.294 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096B/1msec) 00:14:48.294 clat (nsec): min=350078, max=350078, avg=350078.00, stdev= 0.00 00:14:48.294 lat (nsec): min=351008, max=351008, avg=351008.00, stdev= 0.00 00:14:48.294 clat percentiles (usec): 00:14:48.294 | 1.00th=[ 351], 5.00th=[ 351], 10.00th=[ 351], 20.00th=[ 351], 00:14:48.294 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 351], 60.00th=[ 351], 00:14:48.294 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 351], 95.00th=[ 351], 00:14:48.294 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:14:48.294 | 99.99th=[ 351] 00:14:48.294 lat (usec) : 500=100.00% 00:14:48.294 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=9 00:14:48.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.294 issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.294 00:14:48.294 Run status group 0 (all jobs): 00:14:48.294 READ: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), 
io=4096B (4096B), run=1-1msec 00:14:48.294 00:14:48.294 Disk stats (read/write): 00:14:48.294 nbd1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:14:48.294 12:34:30 -- lvol/snapshot_clone.sh@521 -- # run_fio_test /dev/nbd0 16777216 8388608 read 0xdd 00:14:48.294 12:34:30 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:48.294 12:34:30 -- lvol/common.sh@41 -- # local offset=16777216 00:14:48.294 12:34:30 -- lvol/common.sh@42 -- # local size=8388608 00:14:48.294 12:34:30 -- lvol/common.sh@43 -- # local rw=read 00:14:48.294 12:34:30 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:48.294 12:34:30 -- lvol/common.sh@45 -- # local extra_params= 00:14:48.294 12:34:30 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:48.294 12:34:30 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:48.294 12:34:30 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:48.294 12:34:30 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:48.294 12:34:30 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:48.553 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:48.553 fio-3.35 00:14:48.553 Starting 1 process 00:14:48.812 00:14:48.812 fio_test: (groupid=0, jobs=1): err= 0: pid=62100: Tue Oct 1 12:34:31 2024 00:14:48.812 read: IOPS=9022, BW=35.2MiB/s (37.0MB/s)(8192KiB/227msec) 00:14:48.812 clat (usec): min=87, max=366, avg=109.18, stdev=21.41 00:14:48.812 lat (usec): min=87, max=367, avg=109.30, stdev=21.43 00:14:48.812 clat percentiles (usec): 00:14:48.812 | 1.00th=[ 89], 5.00th=[ 90], 10.00th=[ 91], 20.00th=[ 92], 00:14:48.812 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 109], 00:14:48.812 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 137], 95.00th=[ 155], 00:14:48.812 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 219], 99.95th=[ 219], 00:14:48.812 | 99.99th=[ 367] 00:14:48.812 lat (usec) : 100=39.06%, 250=60.89%, 500=0.05% 00:14:48.812 cpu : usr=3.10%, sys=3.54%, ctx=2051, majf=0, minf=9 00:14:48.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.812 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.812 00:14:48.812 Run status group 0 (all jobs): 00:14:48.812 READ: bw=35.2MiB/s (37.0MB/s), 35.2MiB/s-35.2MiB/s (37.0MB/s-37.0MB/s), io=8192KiB (8389kB), run=227-227msec 00:14:48.812 00:14:48.812 Disk stats (read/write): 00:14:48.812 nbd0: ios=1256/0, merge=0/0, ticks=132/0, in_queue=132, util=58.26% 00:14:48.812 12:34:31 -- lvol/snapshot_clone.sh@522 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd2 00:14:48.812 12:34:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.812 12:34:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd2') 00:14:48.812 12:34:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.812 12:34:31 -- bdev/nbd_common.sh@51 -- # local i 00:14:48.812 12:34:31 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:48.812 12:34:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd2 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@41 -- # break 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.071 12:34:31 -- lvol/snapshot_clone.sh@523 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@51 -- # local i 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.071 12:34:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@41 -- # break 00:14:49.331 12:34:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.331 12:34:31 -- lvol/snapshot_clone.sh@526 -- # rpc_cmd bdev_lvol_delete 883b155f-e488-48c0-b88e-fae51c0acdde 00:14:49.331 12:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.331 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:14:49.331 12:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.331 12:34:31 -- lvol/snapshot_clone.sh@529 -- # rpc_cmd bdev_get_bdevs -b 30308baa-e8b0-4afd-acb9-f4afe5e70928 00:14:49.331 12:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.331 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:14:49.331 12:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.331 12:34:31 -- lvol/snapshot_clone.sh@529 -- # lvol='[ 00:14:49.331 { 00:14:49.331 "name": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:49.331 "aliases": [ 00:14:49.331 "lvs_test/lvol_test" 00:14:49.331 ], 00:14:49.331 "product_name": "Logical Volume", 00:14:49.331 "block_size": 512, 00:14:49.331 "num_blocks": 49152, 00:14:49.331 "uuid": "30308baa-e8b0-4afd-acb9-f4afe5e70928", 00:14:49.331 "assigned_rate_limits": { 00:14:49.331 "rw_ios_per_sec": 0, 00:14:49.331 "rw_mbytes_per_sec": 0, 00:14:49.331 "r_mbytes_per_sec": 0, 00:14:49.331 "w_mbytes_per_sec": 0 00:14:49.331 }, 00:14:49.331 "claimed": false, 00:14:49.331 "zoned": false, 00:14:49.331 "supported_io_types": { 00:14:49.331 "read": true, 00:14:49.331 "write": true, 00:14:49.331 "unmap": true, 00:14:49.331 "write_zeroes": true, 00:14:49.331 "flush": false, 00:14:49.331 "reset": true, 00:14:49.331 "compare": false, 00:14:49.331 "compare_and_write": false, 00:14:49.331 "abort": false, 00:14:49.331 "nvme_admin": false, 00:14:49.331 
"nvme_io": false 00:14:49.331 }, 00:14:49.331 "memory_domains": [ 00:14:49.331 { 00:14:49.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.331 "dma_device_type": 2 00:14:49.331 } 00:14:49.331 ], 00:14:49.331 "driver_specific": { 00:14:49.331 "lvol": { 00:14:49.331 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:49.331 "base_bdev": "Malloc8", 00:14:49.331 "thin_provision": true, 00:14:49.331 "snapshot": false, 00:14:49.331 "clone": true, 00:14:49.331 "base_snapshot": "lvol_snapshot", 00:14:49.331 "esnap_clone": false 00:14:49.331 } 00:14:49.331 } 00:14:49.331 } 00:14:49.331 ]' 00:14:49.331 12:34:31 -- lvol/snapshot_clone.sh@530 -- # rpc_cmd bdev_get_bdevs -b b9e47511-09bf-45c6-b358-06b268f40668 00:14:49.331 12:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.331 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:14:49.591 12:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@530 -- # snapshot='[ 00:14:49.591 { 00:14:49.591 "name": "b9e47511-09bf-45c6-b358-06b268f40668", 00:14:49.591 "aliases": [ 00:14:49.591 "lvs_test/lvol_snapshot" 00:14:49.591 ], 00:14:49.591 "product_name": "Logical Volume", 00:14:49.591 "block_size": 512, 00:14:49.591 "num_blocks": 49152, 00:14:49.591 "uuid": "b9e47511-09bf-45c6-b358-06b268f40668", 00:14:49.591 "assigned_rate_limits": { 00:14:49.591 "rw_ios_per_sec": 0, 00:14:49.591 "rw_mbytes_per_sec": 0, 00:14:49.591 "r_mbytes_per_sec": 0, 00:14:49.591 "w_mbytes_per_sec": 0 00:14:49.591 }, 00:14:49.591 "claimed": false, 00:14:49.591 "zoned": false, 00:14:49.591 "supported_io_types": { 00:14:49.591 "read": true, 00:14:49.591 "write": false, 00:14:49.591 "unmap": false, 00:14:49.591 "write_zeroes": false, 00:14:49.591 "flush": false, 00:14:49.591 "reset": true, 00:14:49.591 "compare": false, 00:14:49.591 "compare_and_write": false, 00:14:49.591 "abort": false, 00:14:49.591 "nvme_admin": false, 00:14:49.591 "nvme_io": false 00:14:49.591 }, 00:14:49.591 "memory_domains": [ 00:14:49.591 { 00:14:49.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.591 "dma_device_type": 2 00:14:49.591 } 00:14:49.591 ], 00:14:49.591 "driver_specific": { 00:14:49.591 "lvol": { 00:14:49.591 "lvol_store_uuid": "6d29695e-3603-4a47-acbd-6aa3fdd9efdd", 00:14:49.591 "base_bdev": "Malloc8", 00:14:49.591 "thin_provision": false, 00:14:49.591 "snapshot": true, 00:14:49.591 "clone": false, 00:14:49.591 "clones": [ 00:14:49.591 "lvol_test" 00:14:49.591 ], 00:14:49.591 "esnap_clone": false 00:14:49.591 } 00:14:49.591 } 00:14:49.591 } 00:14:49.591 ]' 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@531 -- # jq '.[].driver_specific.lvol.clone' 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@531 -- # '[' true = true ']' 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@532 -- # jq '.[].driver_specific.lvol.base_snapshot' 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@532 -- # '[' '"lvol_snapshot"' = '"lvol_snapshot"' ']' 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@533 -- # jq '.[].driver_specific.lvol.clones|sort' 00:14:49.591 12:34:31 -- lvol/snapshot_clone.sh@533 -- # jq '.|sort' 00:14:49.591 12:34:32 -- lvol/snapshot_clone.sh@533 -- # '[' '[ 00:14:49.591 "lvol_test" 00:14:49.591 ]' = '[ 00:14:49.591 "lvol_test" 00:14:49.591 ]' ']' 00:14:49.591 12:34:32 -- lvol/snapshot_clone.sh@534 -- # run_fio_test /dev/nbd0 8388608 8388608 read 0xee 00:14:49.591 12:34:32 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:49.591 12:34:32 -- lvol/common.sh@41 -- # local offset=8388608 00:14:49.591 12:34:32 -- 
lvol/common.sh@42 -- # local size=8388608 00:14:49.591 12:34:32 -- lvol/common.sh@43 -- # local rw=read 00:14:49.591 12:34:32 -- lvol/common.sh@44 -- # local pattern=0xee 00:14:49.591 12:34:32 -- lvol/common.sh@45 -- # local extra_params= 00:14:49.591 12:34:32 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:49.591 12:34:32 -- lvol/common.sh@48 -- # [[ -n 0xee ]] 00:14:49.591 12:34:32 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:49.591 12:34:32 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0' 00:14:49.591 12:34:32 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=8388608 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xee --verify_state_save=0 00:14:49.850 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:49.850 fio-3.35 00:14:49.850 Starting 1 process 00:14:50.108 00:14:50.108 fio_test: (groupid=0, jobs=1): err= 0: pid=62134: Tue Oct 1 12:34:32 2024 00:14:50.108 read: IOPS=12.1k, BW=47.3MiB/s (49.6MB/s)(8192KiB/169msec) 00:14:50.108 clat (usec): min=55, max=281, avg=80.73, stdev=21.51 00:14:50.108 lat (usec): min=55, max=282, avg=80.89, stdev=21.56 00:14:50.108 clat percentiles (usec): 00:14:50.108 | 1.00th=[ 59], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 61], 00:14:50.108 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 86], 00:14:50.108 | 70.00th=[ 92], 80.00th=[ 98], 90.00th=[ 109], 95.00th=[ 116], 00:14:50.108 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 221], 99.95th=[ 269], 00:14:50.108 | 99.99th=[ 281] 00:14:50.108 lat (usec) : 100=82.28%, 250=17.63%, 500=0.10% 00:14:50.108 cpu : usr=3.57%, sys=7.14%, ctx=2050, majf=0, minf=9 00:14:50.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:50.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.108 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:50.108 00:14:50.108 Run status group 0 (all jobs): 00:14:50.108 READ: bw=47.3MiB/s (49.6MB/s), 47.3MiB/s-47.3MiB/s (49.6MB/s-49.6MB/s), io=8192KiB (8389kB), run=169-169msec 00:14:50.108 00:14:50.108 Disk stats (read/write): 00:14:50.108 nbd0: ios=1758/0, merge=0/0, ticks=126/0, in_queue=126, util=58.68% 00:14:50.108 12:34:32 -- lvol/snapshot_clone.sh@535 -- # run_fio_test /dev/nbd0 16777216 8388608 read 0xdd 00:14:50.108 12:34:32 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:14:50.108 12:34:32 -- lvol/common.sh@41 -- # local offset=16777216 00:14:50.108 12:34:32 -- lvol/common.sh@42 -- # local size=8388608 00:14:50.108 12:34:32 -- lvol/common.sh@43 -- # local rw=read 00:14:50.108 12:34:32 -- lvol/common.sh@44 -- # local pattern=0xdd 00:14:50.109 12:34:32 -- lvol/common.sh@45 -- # local extra_params= 00:14:50.109 12:34:32 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:50.109 12:34:32 -- lvol/common.sh@48 -- # [[ -n 0xdd ]] 00:14:50.109 12:34:32 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:50.109 12:34:32 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test 
--filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0' 00:14:50.109 12:34:32 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=16777216 --size=8388608 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xdd --verify_state_save=0 00:14:50.109 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:50.109 fio-3.35 00:14:50.109 Starting 1 process 00:14:50.392 00:14:50.392 fio_test: (groupid=0, jobs=1): err= 0: pid=62148: Tue Oct 1 12:34:32 2024 00:14:50.392 read: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(8192KiB/200msec) 00:14:50.392 clat (usec): min=63, max=321, avg=95.85, stdev=16.32 00:14:50.392 lat (usec): min=63, max=321, avg=95.97, stdev=16.33 00:14:50.392 clat percentiles (usec): 00:14:50.392 | 1.00th=[ 65], 5.00th=[ 78], 10.00th=[ 83], 20.00th=[ 86], 00:14:50.392 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 94], 00:14:50.392 | 70.00th=[ 100], 80.00th=[ 105], 90.00th=[ 115], 95.00th=[ 124], 00:14:50.392 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 229], 99.95th=[ 285], 00:14:50.392 | 99.99th=[ 322] 00:14:50.392 lat (usec) : 100=70.80%, 250=29.10%, 500=0.10% 00:14:50.392 cpu : usr=5.03%, sys=6.53%, ctx=2050, majf=0, minf=9 00:14:50.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:50.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.392 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:50.392 00:14:50.392 Run status group 0 (all jobs): 00:14:50.392 READ: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=8192KiB (8389kB), run=200-200msec 00:14:50.392 00:14:50.392 Disk stats (read/write): 00:14:50.392 nbd0: ios=1418/0, merge=0/0, ticks=126/0, in_queue=125, util=58.68% 00:14:50.392 12:34:32 -- lvol/snapshot_clone.sh@538 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:50.392 12:34:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.392 12:34:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:50.392 12:34:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.392 12:34:32 -- bdev/nbd_common.sh@51 -- # local i 00:14:50.392 12:34:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.656 12:34:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@41 -- # break 00:14:50.656 12:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.656 12:34:33 -- lvol/snapshot_clone.sh@539 -- # rpc_cmd bdev_lvol_delete b9e47511-09bf-45c6-b358-06b268f40668 00:14:50.656 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.656 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:50.656 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
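The traces above show the pattern the suite uses for data verification: each region is read back with fio's built-in pattern verifier rather than being compared by hand. A minimal sketch of such a helper, using exactly the fio flags visible in the constructed command line (the function name and argument order here are illustrative, not the literal lvol/common.sh source):

    # Sketch: re-read a region and let fio verify it against a repeating byte pattern.
    # Arguments mirror the values visible in the trace (device, offset, size, pattern).
    run_fio_verify_read() {
        local file=$1 offset=$2 size=$3 pattern=$4
        fio --name=fio_test --filename="$file" --offset="$offset" --size="$size" \
            --rw=read --direct=1 \
            --do_verify=1 --verify=pattern --verify_pattern="$pattern" \
            --verify_state_save=0
    }

    # e.g. check that bytes 16MiB..24MiB of the clone still carry the 0xdd data:
    # run_fio_verify_read /dev/nbd0 16777216 8388608 0xdd

With --do_verify=1 and --verify=pattern, fio fails the job if any block differs from the expected byte, which is why the checks above need no separate cmp/md5 step.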
00:14:50.656 12:34:33 -- lvol/snapshot_clone.sh@540 -- # rpc_cmd bdev_lvol_delete 30308baa-e8b0-4afd-acb9-f4afe5e70928 00:14:50.656 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.656 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:50.656 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.656 12:34:33 -- lvol/snapshot_clone.sh@541 -- # rpc_cmd bdev_lvol_delete_lvstore -u 6d29695e-3603-4a47-acbd-6aa3fdd9efdd 00:14:50.656 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.656 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:50.656 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.656 12:34:33 -- lvol/snapshot_clone.sh@542 -- # rpc_cmd bdev_malloc_delete Malloc8 00:14:50.656 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.656 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@543 -- # check_leftover_devices 00:14:51.224 12:34:33 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:51.224 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.224 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.224 12:34:33 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:51.224 12:34:33 -- lvol/common.sh@26 -- # jq length 00:14:51.224 12:34:33 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:51.224 12:34:33 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:51.224 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.224 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.224 12:34:33 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:51.224 12:34:33 -- lvol/common.sh@28 -- # jq length 00:14:51.224 ************************************ 00:14:51.224 END TEST test_delete_snapshot_with_snapshot 00:14:51.224 ************************************ 00:14:51.224 12:34:33 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:51.224 00:14:51.224 real 0m11.162s 00:14:51.224 user 0m3.633s 00:14:51.224 sys 0m1.057s 00:14:51.224 12:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.224 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@617 -- # run_test test_bdev_lvol_delete_ordering test_bdev_lvol_delete_ordering 00:14:51.224 12:34:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:51.224 12:34:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.224 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 ************************************ 00:14:51.224 START TEST test_bdev_lvol_delete_ordering 00:14:51.224 ************************************ 00:14:51.224 12:34:33 -- common/autotest_common.sh@1104 -- # test_bdev_lvol_delete_ordering 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@548 -- # local snapshot_name=snapshot snapshot_uuid 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@549 -- # local clone_name=clone clone_uuid 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@551 -- # local bdev_uuid 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@552 -- # local lbd_name=lbd_test 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@553 -- # local lvstore_uuid lvstore_name=lvs_name 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@554 -- # local malloc_dev 00:14:51.224 12:34:33 -- 
lvol/snapshot_clone.sh@555 -- # local size 00:14:51.224 12:34:33 -- lvol/snapshot_clone.sh@557 -- # rpc_cmd bdev_malloc_create 256 512 00:14:51.224 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.224 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@557 -- # malloc_dev=Malloc9 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@558 -- # rpc_cmd bdev_lvol_create_lvstore Malloc9 lvs_name 00:14:51.483 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.483 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@558 -- # lvstore_uuid=58028566-03f5-4be8-af00-2355167c8002 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@560 -- # get_lvs_jq bdev_lvol_get_lvstores -u 58028566-03f5-4be8-af00-2355167c8002 00:14:51.483 12:34:33 -- lvol/common.sh@21 -- # rpc_cmd_simple_data_json lvs bdev_lvol_get_lvstores -u 58028566-03f5-4be8-af00-2355167c8002 00:14:51.483 12:34:33 -- common/autotest_common.sh@584 -- # local 'elems=lvs[@]' elem 00:14:51.483 12:34:33 -- common/autotest_common.sh@585 -- # jq_out=() 00:14:51.483 12:34:33 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:14:51.483 12:34:33 -- common/autotest_common.sh@586 -- # local jq val 00:14:51.483 12:34:33 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:14:51.483 12:34:33 -- common/autotest_common.sh@596 -- # local lvs 00:14:51.483 12:34:33 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:14:51.483 12:34:33 -- common/autotest_common.sh@611 -- # local bdev 00:14:51.483 12:34:33 -- common/autotest_common.sh@613 -- # [[ -v lvs[@] ]] 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid' 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name' 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev' 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters' 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters' 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," 
",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size' 00:14:51.483 12:34:33 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.483 12:34:33 -- common/autotest_common.sh@616 -- # jq='"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size' 00:14:51.483 12:34:33 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:14:51.483 12:34:33 -- common/autotest_common.sh@620 -- # shift 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_lvol_get_lvstores -u 58028566-03f5-4be8-af00-2355167c8002 00:14:51.483 12:34:33 -- common/autotest_common.sh@582 -- # jq -jr '"uuid"," ",.[0].uuid,"\n","name"," ",.[0].name,"\n","base_bdev"," ",.[0].base_bdev,"\n","total_data_clusters"," ",.[0].total_data_clusters,"\n","free_clusters"," ",.[0].free_clusters,"\n","block_size"," ",.[0].block_size,"\n","cluster_size"," ",.[0].cluster_size,"\n"' 00:14:51.483 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.483 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 12:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=58028566-03f5-4be8-af00-2355167c8002 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=Malloc9 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=63 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=4194304 00:14:51.483 12:34:33 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.483 12:34:33 -- common/autotest_common.sh@624 -- # (( 7 > 0 )) 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@561 -- # [[ 58028566-03f5-4be8-af00-2355167c8002 == \5\8\0\2\8\5\6\6\-\0\3\f\5\-\4\b\e\8\-\a\f\0\0\-\2\3\5\5\1\6\7\c\8\0\0\2 ]] 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@562 -- # [[ lvs_name == \l\v\s\_\n\a\m\e ]] 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@563 -- # [[ Malloc9 == \M\a\l\l\o\c\9 ]] 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@565 -- # size=63 00:14:51.483 12:34:33 -- lvol/snapshot_clone.sh@567 -- # rpc_cmd bdev_lvol_create -t -u 58028566-03f5-4be8-af00-2355167c8002 lbd_test 63 00:14:51.483 12:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.483 12:34:33 -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.484 12:34:34 -- lvol/snapshot_clone.sh@567 -- # 
bdev_uuid=bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.484 12:34:34 -- lvol/snapshot_clone.sh@569 -- # get_bdev_jq bdev_get_bdevs -b bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.484 12:34:34 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.484 12:34:34 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:14:51.484 12:34:34 -- common/autotest_common.sh@585 -- # jq_out=() 00:14:51.743 12:34:34 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:14:51.743 12:34:34 -- common/autotest_common.sh@586 -- # local jq val 00:14:51.743 12:34:34 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:14:51.743 12:34:34 -- common/autotest_common.sh@596 -- # local lvs 00:14:51.743 12:34:34 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:14:51.743 12:34:34 -- common/autotest_common.sh@611 -- # local bdev 00:14:51.743 12:34:34 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:14:51.743 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.743 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:14:51.743 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.743 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:14:51.743 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.743 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," 
",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:14:51.744 12:34:34 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:14:51.744 12:34:34 -- common/autotest_common.sh@620 -- # shift 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.744 12:34:34 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," 
",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:14:51.744 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.744 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/lbd_test 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:14:51.744 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.744 12:34:34 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:14:51.744 12:34:34 -- lvol/snapshot_clone.sh@571 -- # rpc_cmd bdev_lvol_snapshot bb56d3d0-81b9-4b40-a427-ac70ba125c62 snapshot 00:14:51.744 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.744 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.744 12:34:34 -- lvol/snapshot_clone.sh@571 -- # snapshot_uuid=a10c765c-ad2f-4eb3-b177-488a322cfa53 00:14:51.744 12:34:34 -- lvol/snapshot_clone.sh@573 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/snapshot 00:14:51.744 12:34:34 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/snapshot 00:14:51.744 12:34:34 -- common/autotest_common.sh@584 -- # local 
'elems=bdev[@]' elem 00:14:51.744 12:34:34 -- common/autotest_common.sh@585 -- # jq_out=() 00:14:51.744 12:34:34 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:14:51.744 12:34:34 -- common/autotest_common.sh@586 -- # local jq val 00:14:51.744 12:34:34 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:14:51.744 12:34:34 -- common/autotest_common.sh@596 -- # local lvs 00:14:51.744 12:34:34 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:14:51.744 12:34:34 -- common/autotest_common.sh@611 -- # local bdev 00:14:51.744 12:34:34 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:14:51.744 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.744 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- 
common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:14:51.745 12:34:34 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:14:51.745 12:34:34 -- common/autotest_common.sh@620 -- # shift 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/snapshot 00:14:51.745 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.745 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.745 12:34:34 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:14:51.745 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=a10c765c-ad2f-4eb3-b177-488a322cfa53 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/snapshot 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=a10c765c-ad2f-4eb3-b177-488a322cfa53 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:14:51.745 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.745 12:34:34 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:14:51.745 12:34:34 -- lvol/snapshot_clone.sh@574 -- # [[ a10c765c-ad2f-4eb3-b177-488a322cfa53 == \a\1\0\c\7\6\5\c\-\a\d\2\f\-\4\e\b\3\-\b\1\7\7\-\4\8\8\a\3\2\2\c\f\a\5\3 ]] 00:14:51.745 12:34:34 -- lvol/snapshot_clone.sh@575 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:14:51.745 12:34:34 -- lvol/snapshot_clone.sh@576 -- # [[ lvs_name/snapshot == \l\v\s\_\n\a\m\e\/\s\n\a\p\s\h\o\t ]] 00:14:51.745 12:34:34 -- lvol/snapshot_clone.sh@578 -- # rpc_cmd bdev_lvol_clone lvs_name/snapshot clone 00:14:51.745 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.745 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.745 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.745 12:34:34 -- lvol/snapshot_clone.sh@578 -- # clone_uuid=69c35d73-2707-4756-b70b-fed073bd9f35 00:14:51.745 12:34:34 -- lvol/snapshot_clone.sh@580 -- # get_bdev_jq bdev_get_bdevs -b lvs_name/clone 00:14:51.745 12:34:34 -- lvol/common.sh@17 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b lvs_name/clone 00:14:51.745 12:34:34 -- common/autotest_common.sh@584 -- # local 
'elems=bdev[@]' elem 00:14:51.745 12:34:34 -- common/autotest_common.sh@585 -- # jq_out=() 00:14:51.745 12:34:34 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:14:51.745 12:34:34 -- common/autotest_common.sh@586 -- # local jq val 00:14:51.745 12:34:34 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:14:51.745 12:34:34 -- common/autotest_common.sh@596 -- # local lvs 00:14:51.745 12:34:34 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:14:51.745 12:34:34 -- common/autotest_common.sh@611 -- # local bdev 00:14:51.745 12:34:34 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- 
common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.745 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:14:51.745 12:34:34 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:14:51.746 12:34:34 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:14:51.746 12:34:34 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:14:51.746 12:34:34 -- common/autotest_common.sh@620 -- # shift 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b lvs_name/clone 00:14:51.746 12:34:34 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," 
",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:14:51.746 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.746 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.746 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=69c35d73-2707-4756-b70b-fed073bd9f35 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_name/clone 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=131072 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=69c35d73-2707-4756-b70b-fed073bd9f35 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=snapshot 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:14:51.746 12:34:34 -- common/autotest_common.sh@621 -- # read -r elem val 00:14:51.746 12:34:34 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@581 -- # [[ 69c35d73-2707-4756-b70b-fed073bd9f35 == \6\9\c\3\5\d\7\3\-\2\7\0\7\-\4\7\5\6\-\b\7\0\b\-\f\e\d\0\7\3\b\d\9\f\3\5 ]] 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@582 -- # [[ Logical Volume == \L\o\g\i\c\a\l\ \V\o\l\u\m\e ]] 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@583 -- # [[ lvs_name/clone == \l\v\s\_\n\a\m\e\/\c\l\o\n\e ]] 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@586 -- # rpc_cmd bdev_lvol_delete a10c765c-ad2f-4eb3-b177-488a322cfa53 00:14:51.746 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.746 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.746 [2024-10-01 12:34:34.239222] vbdev_lvol.c: 640:_vbdev_lvol_destroy: *ERROR*: Cannot delete lvol 00:14:51.746 request: 00:14:51.746 { 00:14:51.746 "name": "a10c765c-ad2f-4eb3-b177-488a322cfa53", 00:14:51.746 "method": "bdev_lvol_delete", 00:14:51.746 "req_id": 1 00:14:51.746 } 00:14:51.746 Got JSON-RPC error response 00:14:51.746 response: 00:14:51.746 { 00:14:51.746 "code": -32603, 00:14:51.746 "message": "Operation not permitted" 00:14:51.746 } 
00:14:51.746 12:34:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@589 -- # rpc_cmd bdev_lvol_delete bb56d3d0-81b9-4b40-a427-ac70ba125c62 00:14:51.746 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.746 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.746 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@590 -- # rpc_cmd bdev_lvol_delete 69c35d73-2707-4756-b70b-fed073bd9f35 00:14:51.746 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.746 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.746 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.746 12:34:34 -- lvol/snapshot_clone.sh@591 -- # rpc_cmd bdev_lvol_delete a10c765c-ad2f-4eb3-b177-488a322cfa53 00:14:51.746 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.746 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:52.004 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.004 12:34:34 -- lvol/snapshot_clone.sh@594 -- # rpc_cmd bdev_lvol_delete_lvstore -u 58028566-03f5-4be8-af00-2355167c8002 00:14:52.004 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.004 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:52.004 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.004 12:34:34 -- lvol/snapshot_clone.sh@597 -- # rpc_cmd bdev_malloc_delete Malloc9 00:14:52.004 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.004 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:52.571 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.571 12:34:34 -- lvol/snapshot_clone.sh@599 -- # check_leftover_devices 00:14:52.571 12:34:34 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:52.571 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.571 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:52.571 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.571 12:34:34 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:52.571 12:34:34 -- lvol/common.sh@26 -- # jq length 00:14:52.571 12:34:34 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:52.571 12:34:34 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:52.571 12:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.571 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:52.571 12:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.571 12:34:34 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:52.571 12:34:34 -- lvol/common.sh@28 -- # jq length 00:14:52.571 ************************************ 00:14:52.571 END TEST test_bdev_lvol_delete_ordering 00:14:52.571 ************************************ 00:14:52.571 12:34:34 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:52.571 00:14:52.571 real 0m1.328s 00:14:52.571 user 0m0.383s 00:14:52.571 sys 0m0.063s 00:14:52.571 12:34:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.571 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:14:52.571 12:34:35 -- lvol/snapshot_clone.sh@619 -- # trap - SIGINT SIGTERM EXIT 00:14:52.571 12:34:35 -- lvol/snapshot_clone.sh@620 -- # killprocess 60576 00:14:52.571 12:34:35 -- common/autotest_common.sh@926 -- # '[' -z 60576 ']' 00:14:52.571 12:34:35 -- common/autotest_common.sh@930 -- # kill -0 60576 00:14:52.571 12:34:35 -- common/autotest_common.sh@931 -- # uname 00:14:52.571 12:34:35 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.571 12:34:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60576 00:14:52.571 killing process with pid 60576 00:14:52.571 12:34:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:52.571 12:34:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:52.571 12:34:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60576' 00:14:52.571 12:34:35 -- common/autotest_common.sh@945 -- # kill 60576 00:14:52.571 12:34:35 -- common/autotest_common.sh@950 -- # wait 60576 00:14:55.105 ************************************ 00:14:55.105 END TEST lvol_snapshot_clone 00:14:55.105 ************************************ 00:14:55.105 00:14:55.105 real 1m14.954s 00:14:55.105 user 1m17.052s 00:14:55.105 sys 0m22.334s 00:14:55.105 12:34:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.105 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:14:55.105 12:34:37 -- lvol/lvol.sh@19 -- # run_test lvol_rename /home/vagrant/spdk_repo/spdk/test/lvol/rename.sh 00:14:55.105 12:34:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:55.105 12:34:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.105 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:14:55.105 ************************************ 00:14:55.105 START TEST lvol_rename 00:14:55.105 ************************************ 00:14:55.105 12:34:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/rename.sh 00:14:55.105 * Looking for test storage... 00:14:55.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:14:55.105 12:34:37 -- lvol/rename.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:55.105 12:34:37 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:55.105 12:34:37 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:55.105 12:34:37 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:55.105 12:34:37 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:55.105 12:34:37 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:55.105 12:34:37 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:55.105 12:34:37 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:55.105 12:34:37 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:55.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.105 12:34:37 -- lvol/rename.sh@213 -- # spdk_pid=62305 00:14:55.105 12:34:37 -- lvol/rename.sh@214 -- # trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:55.105 12:34:37 -- lvol/rename.sh@215 -- # waitforlisten 62305 00:14:55.105 12:34:37 -- common/autotest_common.sh@819 -- # '[' -z 62305 ']' 00:14:55.105 12:34:37 -- lvol/rename.sh@212 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:55.105 12:34:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.105 12:34:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.105 12:34:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.105 12:34:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.105 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:14:55.105 [2024-10-01 12:34:37.295350] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
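At this point the snapshot/clone suite has shut its target down and rename.sh brings up a fresh spdk_tgt, blocking in waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock answers, with a trap registered so the target is killed even if the test aborts. A minimal sketch of that launch-and-wait pattern (the polling loop and the rpc_get_methods probe are illustrative simplifications of the real waitforlisten/killprocess helpers in common/autotest_common.sh; paths assume the SPDK repo root):

    #!/usr/bin/env bash
    set -e
    SPDK_BIN=./build/bin/spdk_tgt     # target binary as built in the SPDK repo
    RPC=./scripts/rpc.py

    "$SPDK_BIN" &                     # start the target in the background
    spdk_pid=$!
    trap 'kill "$spdk_pid" 2>/dev/null || true' SIGINT SIGTERM EXIT

    # poll the default /var/tmp/spdk.sock until the RPC server responds
    for _ in $(seq 1 100); do
        if "$RPC" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done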
00:14:55.105 [2024-10-01 12:34:37.295520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62305 ] 00:14:55.105 [2024-10-01 12:34:37.462695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.364 [2024-10-01 12:34:37.640527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:55.364 [2024-10-01 12:34:37.640848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.741 12:34:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.741 12:34:38 -- common/autotest_common.sh@852 -- # return 0 00:14:56.741 12:34:38 -- lvol/rename.sh@217 -- # run_test test_rename_positive test_rename_positive 00:14:56.741 12:34:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:56.741 12:34:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.741 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 ************************************ 00:14:56.741 START TEST test_rename_positive 00:14:56.741 ************************************ 00:14:56.741 12:34:38 -- common/autotest_common.sh@1104 -- # test_rename_positive 00:14:56.741 12:34:38 -- lvol/rename.sh@13 -- # rpc_cmd bdev_malloc_create 128 512 00:14:56.741 12:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.741 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.741 12:34:39 -- lvol/rename.sh@13 -- # malloc_name=Malloc0 00:14:56.741 12:34:39 -- lvol/rename.sh@14 -- # rpc_cmd bdev_lvol_create_lvstore Malloc0 lvs_test 00:14:56.741 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.741 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.741 12:34:39 -- lvol/rename.sh@14 -- # lvs_uuid=2a2f7f07-26f7-473e-83cc-963872b6baf2 00:14:56.741 12:34:39 -- lvol/rename.sh@15 -- # bdev_name=("lvol_test"{0..3}) 00:14:56.741 12:34:39 -- lvol/rename.sh@16 -- # bdev_aliases=("lvs_test/lvol_test"{0..3}) 00:14:56.741 12:34:39 -- lvol/rename.sh@19 -- # round_down 31 00:14:56.741 12:34:39 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:14:56.741 12:34:39 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:14:56.741 12:34:39 -- lvol/common.sh@36 -- # echo 28 00:14:56.741 12:34:39 -- lvol/rename.sh@19 -- # lvol_size_mb=28 00:14:56.741 12:34:39 -- lvol/rename.sh@20 -- # lvol_size=29360128 00:14:56.741 12:34:39 -- lvol/rename.sh@23 -- # bdev_uuids=() 00:14:56.741 12:34:39 -- lvol/rename.sh@24 -- # for i in "${!bdev_name[@]}" 00:14:56.741 12:34:39 -- lvol/rename.sh@25 -- # rpc_cmd bdev_lvol_create -u 2a2f7f07-26f7-473e-83cc-963872b6baf2 lvol_test0 28 00:14:56.741 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.741 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.741 12:34:39 -- lvol/rename.sh@25 -- # lvol_uuid=6409ac31-2641-413d-9b5c-6de171df4a6b 00:14:56.741 12:34:39 -- lvol/rename.sh@26 -- # rpc_cmd bdev_get_bdevs -b 6409ac31-2641-413d-9b5c-6de171df4a6b 00:14:56.741 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.741 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
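test_rename_positive builds its fixture from nothing: a 128 MiB malloc bdev with 512-byte blocks, a logical volume store named lvs_test on top of it, and four 28 MiB lvols (31 MiB per volume rounded down to the store's 4 MiB cluster size), each of which is then read back with bdev_get_bdevs as in the JSON dumps that follow. Reproduced by hand the setup is roughly this sketch (UUIDs differ per run; the tr -d '"' is only a guard in case your rpc.py prints the uuid JSON-quoted):

    ./scripts/rpc.py bdev_malloc_create 128 512                             # backing bdev -> Malloc0
    lvs_uuid=$(./scripts/rpc.py bdev_lvol_create_lvstore Malloc0 lvs_test | tr -d '"')
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_lvol_create -u "$lvs_uuid" "lvol_test$i" 28   # size in MiB
    done
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].aliases[]'                 # expect lvs_test/lvol_test0..3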
00:14:56.741 12:34:39 -- lvol/rename.sh@26 -- # lvol='[ 00:14:56.741 { 00:14:56.741 "name": "6409ac31-2641-413d-9b5c-6de171df4a6b", 00:14:56.741 "aliases": [ 00:14:56.741 "lvs_test/lvol_test0" 00:14:56.741 ], 00:14:56.741 "product_name": "Logical Volume", 00:14:56.741 "block_size": 512, 00:14:56.741 "num_blocks": 57344, 00:14:56.741 "uuid": "6409ac31-2641-413d-9b5c-6de171df4a6b", 00:14:56.741 "assigned_rate_limits": { 00:14:56.741 "rw_ios_per_sec": 0, 00:14:56.741 "rw_mbytes_per_sec": 0, 00:14:56.741 "r_mbytes_per_sec": 0, 00:14:56.741 "w_mbytes_per_sec": 0 00:14:56.741 }, 00:14:56.741 "claimed": false, 00:14:56.741 "zoned": false, 00:14:56.741 "supported_io_types": { 00:14:56.741 "read": true, 00:14:56.741 "write": true, 00:14:56.741 "unmap": true, 00:14:56.741 "write_zeroes": true, 00:14:56.741 "flush": false, 00:14:56.741 "reset": true, 00:14:56.741 "compare": false, 00:14:56.741 "compare_and_write": false, 00:14:56.741 "abort": false, 00:14:56.741 "nvme_admin": false, 00:14:56.741 "nvme_io": false 00:14:56.741 }, 00:14:56.741 "memory_domains": [ 00:14:56.741 { 00:14:56.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.741 "dma_device_type": 2 00:14:56.741 } 00:14:56.741 ], 00:14:56.741 "driver_specific": { 00:14:56.741 "lvol": { 00:14:56.741 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:56.741 "base_bdev": "Malloc0", 00:14:56.741 "thin_provision": false, 00:14:56.741 "snapshot": false, 00:14:56.741 "clone": false, 00:14:56.741 "esnap_clone": false 00:14:56.741 } 00:14:56.741 } 00:14:56.741 } 00:14:56.741 ]' 00:14:56.741 12:34:39 -- lvol/rename.sh@27 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:56.741 12:34:39 -- lvol/rename.sh@27 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:56.741 12:34:39 -- lvol/rename.sh@28 -- # jq -r '.[0].block_size' 00:14:56.741 12:34:39 -- lvol/rename.sh@28 -- # '[' 512 = 512 ']' 00:14:56.741 12:34:39 -- lvol/rename.sh@29 -- # jq -r '.[0].num_blocks' 00:14:57.000 12:34:39 -- lvol/rename.sh@29 -- # '[' 57344 = 57344 ']' 00:14:57.000 12:34:39 -- lvol/rename.sh@30 -- # jq '.[0].aliases|sort' 00:14:57.000 12:34:39 -- lvol/rename.sh@30 -- # jq '.|sort' 00:14:57.000 12:34:39 -- lvol/rename.sh@30 -- # '[' '[ 00:14:57.000 "lvs_test/lvol_test0" 00:14:57.000 ]' = '[ 00:14:57.000 "lvs_test/lvol_test0" 00:14:57.000 ]' ']' 00:14:57.000 12:34:39 -- lvol/rename.sh@31 -- # bdev_uuids+=("$lvol_uuid") 00:14:57.000 12:34:39 -- lvol/rename.sh@24 -- # for i in "${!bdev_name[@]}" 00:14:57.000 12:34:39 -- lvol/rename.sh@25 -- # rpc_cmd bdev_lvol_create -u 2a2f7f07-26f7-473e-83cc-963872b6baf2 lvol_test1 28 00:14:57.000 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.000 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.000 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.000 12:34:39 -- lvol/rename.sh@25 -- # lvol_uuid=8e5091bc-ea88-4c21-a8b5-afc37b01355b 00:14:57.000 12:34:39 -- lvol/rename.sh@26 -- # rpc_cmd bdev_get_bdevs -b 8e5091bc-ea88-4c21-a8b5-afc37b01355b 00:14:57.000 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.000 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.000 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.000 12:34:39 -- lvol/rename.sh@26 -- # lvol='[ 00:14:57.000 { 00:14:57.000 "name": "8e5091bc-ea88-4c21-a8b5-afc37b01355b", 00:14:57.000 "aliases": [ 00:14:57.000 "lvs_test/lvol_test1" 00:14:57.000 ], 00:14:57.000 "product_name": "Logical Volume", 00:14:57.000 
"block_size": 512, 00:14:57.000 "num_blocks": 57344, 00:14:57.000 "uuid": "8e5091bc-ea88-4c21-a8b5-afc37b01355b", 00:14:57.000 "assigned_rate_limits": { 00:14:57.000 "rw_ios_per_sec": 0, 00:14:57.000 "rw_mbytes_per_sec": 0, 00:14:57.000 "r_mbytes_per_sec": 0, 00:14:57.000 "w_mbytes_per_sec": 0 00:14:57.000 }, 00:14:57.000 "claimed": false, 00:14:57.000 "zoned": false, 00:14:57.000 "supported_io_types": { 00:14:57.000 "read": true, 00:14:57.000 "write": true, 00:14:57.000 "unmap": true, 00:14:57.000 "write_zeroes": true, 00:14:57.000 "flush": false, 00:14:57.000 "reset": true, 00:14:57.000 "compare": false, 00:14:57.000 "compare_and_write": false, 00:14:57.000 "abort": false, 00:14:57.000 "nvme_admin": false, 00:14:57.000 "nvme_io": false 00:14:57.000 }, 00:14:57.000 "memory_domains": [ 00:14:57.000 { 00:14:57.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.000 "dma_device_type": 2 00:14:57.000 } 00:14:57.000 ], 00:14:57.000 "driver_specific": { 00:14:57.000 "lvol": { 00:14:57.000 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:57.000 "base_bdev": "Malloc0", 00:14:57.000 "thin_provision": false, 00:14:57.000 "snapshot": false, 00:14:57.000 "clone": false, 00:14:57.000 "esnap_clone": false 00:14:57.000 } 00:14:57.000 } 00:14:57.000 } 00:14:57.000 ]' 00:14:57.000 12:34:39 -- lvol/rename.sh@27 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:57.000 12:34:39 -- lvol/rename.sh@27 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:57.000 12:34:39 -- lvol/rename.sh@28 -- # jq -r '.[0].block_size' 00:14:57.259 12:34:39 -- lvol/rename.sh@28 -- # '[' 512 = 512 ']' 00:14:57.259 12:34:39 -- lvol/rename.sh@29 -- # jq -r '.[0].num_blocks' 00:14:57.259 12:34:39 -- lvol/rename.sh@29 -- # '[' 57344 = 57344 ']' 00:14:57.259 12:34:39 -- lvol/rename.sh@30 -- # jq '.[0].aliases|sort' 00:14:57.259 12:34:39 -- lvol/rename.sh@30 -- # jq '.|sort' 00:14:57.259 12:34:39 -- lvol/rename.sh@30 -- # '[' '[ 00:14:57.259 "lvs_test/lvol_test1" 00:14:57.259 ]' = '[ 00:14:57.259 "lvs_test/lvol_test1" 00:14:57.259 ]' ']' 00:14:57.259 12:34:39 -- lvol/rename.sh@31 -- # bdev_uuids+=("$lvol_uuid") 00:14:57.259 12:34:39 -- lvol/rename.sh@24 -- # for i in "${!bdev_name[@]}" 00:14:57.259 12:34:39 -- lvol/rename.sh@25 -- # rpc_cmd bdev_lvol_create -u 2a2f7f07-26f7-473e-83cc-963872b6baf2 lvol_test2 28 00:14:57.259 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.259 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.259 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.259 12:34:39 -- lvol/rename.sh@25 -- # lvol_uuid=25b95e55-9fe7-4df9-ba32-f12b1427e31a 00:14:57.259 12:34:39 -- lvol/rename.sh@26 -- # rpc_cmd bdev_get_bdevs -b 25b95e55-9fe7-4df9-ba32-f12b1427e31a 00:14:57.259 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.259 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.259 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.259 12:34:39 -- lvol/rename.sh@26 -- # lvol='[ 00:14:57.259 { 00:14:57.259 "name": "25b95e55-9fe7-4df9-ba32-f12b1427e31a", 00:14:57.259 "aliases": [ 00:14:57.259 "lvs_test/lvol_test2" 00:14:57.259 ], 00:14:57.260 "product_name": "Logical Volume", 00:14:57.260 "block_size": 512, 00:14:57.260 "num_blocks": 57344, 00:14:57.260 "uuid": "25b95e55-9fe7-4df9-ba32-f12b1427e31a", 00:14:57.260 "assigned_rate_limits": { 00:14:57.260 "rw_ios_per_sec": 0, 00:14:57.260 "rw_mbytes_per_sec": 0, 00:14:57.260 "r_mbytes_per_sec": 0, 00:14:57.260 
"w_mbytes_per_sec": 0 00:14:57.260 }, 00:14:57.260 "claimed": false, 00:14:57.260 "zoned": false, 00:14:57.260 "supported_io_types": { 00:14:57.260 "read": true, 00:14:57.260 "write": true, 00:14:57.260 "unmap": true, 00:14:57.260 "write_zeroes": true, 00:14:57.260 "flush": false, 00:14:57.260 "reset": true, 00:14:57.260 "compare": false, 00:14:57.260 "compare_and_write": false, 00:14:57.260 "abort": false, 00:14:57.260 "nvme_admin": false, 00:14:57.260 "nvme_io": false 00:14:57.260 }, 00:14:57.260 "memory_domains": [ 00:14:57.260 { 00:14:57.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.260 "dma_device_type": 2 00:14:57.260 } 00:14:57.260 ], 00:14:57.260 "driver_specific": { 00:14:57.260 "lvol": { 00:14:57.260 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:57.260 "base_bdev": "Malloc0", 00:14:57.260 "thin_provision": false, 00:14:57.260 "snapshot": false, 00:14:57.260 "clone": false, 00:14:57.260 "esnap_clone": false 00:14:57.260 } 00:14:57.260 } 00:14:57.260 } 00:14:57.260 ]' 00:14:57.260 12:34:39 -- lvol/rename.sh@27 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:57.518 12:34:39 -- lvol/rename.sh@27 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:57.518 12:34:39 -- lvol/rename.sh@28 -- # jq -r '.[0].block_size' 00:14:57.518 12:34:39 -- lvol/rename.sh@28 -- # '[' 512 = 512 ']' 00:14:57.518 12:34:39 -- lvol/rename.sh@29 -- # jq -r '.[0].num_blocks' 00:14:57.518 12:34:39 -- lvol/rename.sh@29 -- # '[' 57344 = 57344 ']' 00:14:57.518 12:34:39 -- lvol/rename.sh@30 -- # jq '.[0].aliases|sort' 00:14:57.518 12:34:39 -- lvol/rename.sh@30 -- # jq '.|sort' 00:14:57.518 12:34:39 -- lvol/rename.sh@30 -- # '[' '[ 00:14:57.518 "lvs_test/lvol_test2" 00:14:57.518 ]' = '[ 00:14:57.518 "lvs_test/lvol_test2" 00:14:57.518 ]' ']' 00:14:57.518 12:34:39 -- lvol/rename.sh@31 -- # bdev_uuids+=("$lvol_uuid") 00:14:57.518 12:34:39 -- lvol/rename.sh@24 -- # for i in "${!bdev_name[@]}" 00:14:57.518 12:34:39 -- lvol/rename.sh@25 -- # rpc_cmd bdev_lvol_create -u 2a2f7f07-26f7-473e-83cc-963872b6baf2 lvol_test3 28 00:14:57.518 12:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.518 12:34:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.518 12:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.518 12:34:39 -- lvol/rename.sh@25 -- # lvol_uuid=0a78883e-1e46-41bf-9dfb-e519254214d9 00:14:57.518 12:34:39 -- lvol/rename.sh@26 -- # rpc_cmd bdev_get_bdevs -b 0a78883e-1e46-41bf-9dfb-e519254214d9 00:14:57.518 12:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.518 12:34:40 -- common/autotest_common.sh@10 -- # set +x 00:14:57.518 12:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.518 12:34:40 -- lvol/rename.sh@26 -- # lvol='[ 00:14:57.518 { 00:14:57.518 "name": "0a78883e-1e46-41bf-9dfb-e519254214d9", 00:14:57.518 "aliases": [ 00:14:57.518 "lvs_test/lvol_test3" 00:14:57.518 ], 00:14:57.518 "product_name": "Logical Volume", 00:14:57.518 "block_size": 512, 00:14:57.518 "num_blocks": 57344, 00:14:57.518 "uuid": "0a78883e-1e46-41bf-9dfb-e519254214d9", 00:14:57.518 "assigned_rate_limits": { 00:14:57.518 "rw_ios_per_sec": 0, 00:14:57.518 "rw_mbytes_per_sec": 0, 00:14:57.518 "r_mbytes_per_sec": 0, 00:14:57.518 "w_mbytes_per_sec": 0 00:14:57.518 }, 00:14:57.518 "claimed": false, 00:14:57.518 "zoned": false, 00:14:57.518 "supported_io_types": { 00:14:57.519 "read": true, 00:14:57.519 "write": true, 00:14:57.519 "unmap": true, 00:14:57.519 "write_zeroes": true, 00:14:57.519 "flush": 
false, 00:14:57.519 "reset": true, 00:14:57.519 "compare": false, 00:14:57.519 "compare_and_write": false, 00:14:57.519 "abort": false, 00:14:57.519 "nvme_admin": false, 00:14:57.519 "nvme_io": false 00:14:57.519 }, 00:14:57.519 "memory_domains": [ 00:14:57.519 { 00:14:57.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.519 "dma_device_type": 2 00:14:57.519 } 00:14:57.519 ], 00:14:57.519 "driver_specific": { 00:14:57.519 "lvol": { 00:14:57.519 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:57.519 "base_bdev": "Malloc0", 00:14:57.519 "thin_provision": false, 00:14:57.519 "snapshot": false, 00:14:57.519 "clone": false, 00:14:57.519 "esnap_clone": false 00:14:57.519 } 00:14:57.519 } 00:14:57.519 } 00:14:57.519 ]' 00:14:57.519 12:34:40 -- lvol/rename.sh@27 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:57.777 12:34:40 -- lvol/rename.sh@27 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:57.777 12:34:40 -- lvol/rename.sh@28 -- # jq -r '.[0].block_size' 00:14:57.777 12:34:40 -- lvol/rename.sh@28 -- # '[' 512 = 512 ']' 00:14:57.778 12:34:40 -- lvol/rename.sh@29 -- # jq -r '.[0].num_blocks' 00:14:57.778 12:34:40 -- lvol/rename.sh@29 -- # '[' 57344 = 57344 ']' 00:14:57.778 12:34:40 -- lvol/rename.sh@30 -- # jq '.[0].aliases|sort' 00:14:57.778 12:34:40 -- lvol/rename.sh@30 -- # jq '.|sort' 00:14:57.778 12:34:40 -- lvol/rename.sh@30 -- # '[' '[ 00:14:57.778 "lvs_test/lvol_test3" 00:14:57.778 ]' = '[ 00:14:57.778 "lvs_test/lvol_test3" 00:14:57.778 ]' ']' 00:14:57.778 12:34:40 -- lvol/rename.sh@31 -- # bdev_uuids+=("$lvol_uuid") 00:14:57.778 12:34:40 -- lvol/rename.sh@36 -- # new_lvs_name=lvs_new 00:14:57.778 12:34:40 -- lvol/rename.sh@37 -- # bdev_aliases=("$new_lvs_name/lvol_test"{0..3}) 00:14:57.778 12:34:40 -- lvol/rename.sh@39 -- # rpc_cmd bdev_lvol_rename_lvstore lvs_test lvs_new 00:14:57.778 12:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.778 12:34:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.036 12:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.036 12:34:40 -- lvol/rename.sh@41 -- # rpc_cmd bdev_lvol_get_lvstores -u 2a2f7f07-26f7-473e-83cc-963872b6baf2 00:14:58.036 12:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.036 12:34:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.036 12:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.036 12:34:40 -- lvol/rename.sh@41 -- # lvs='[ 00:14:58.036 { 00:14:58.036 "uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:58.036 "name": "lvs_new", 00:14:58.036 "base_bdev": "Malloc0", 00:14:58.036 "total_data_clusters": 31, 00:14:58.036 "free_clusters": 3, 00:14:58.036 "block_size": 512, 00:14:58.036 "cluster_size": 4194304 00:14:58.036 } 00:14:58.037 ]' 00:14:58.037 12:34:40 -- lvol/rename.sh@44 -- # jq -r '.[0].uuid' 00:14:58.037 12:34:40 -- lvol/rename.sh@44 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:58.037 12:34:40 -- lvol/rename.sh@45 -- # jq -r '.[0].name' 00:14:58.037 12:34:40 -- lvol/rename.sh@45 -- # '[' lvs_new = lvs_new ']' 00:14:58.037 12:34:40 -- lvol/rename.sh@46 -- # jq -r '.[0].base_bdev' 00:14:58.037 12:34:40 -- lvol/rename.sh@46 -- # '[' Malloc0 = Malloc0 ']' 00:14:58.037 12:34:40 -- lvol/rename.sh@49 -- # jq -r '.[0].cluster_size' 00:14:58.037 12:34:40 -- lvol/rename.sh@49 -- # cluster_size=4194304 00:14:58.037 12:34:40 -- lvol/rename.sh@50 -- # '[' 4194304 = 4194304 ']' 00:14:58.037 12:34:40 -- lvol/rename.sh@51 -- 
# jq -r '.[0].total_data_clusters' 00:14:58.295 12:34:40 -- lvol/rename.sh@51 -- # total_clusters=31 00:14:58.295 12:34:40 -- lvol/rename.sh@52 -- # '[' 130023424 = 130023424 ']' 00:14:58.295 12:34:40 -- lvol/rename.sh@54 -- # for i in "${!bdev_uuids[@]}" 00:14:58.295 12:34:40 -- lvol/rename.sh@55 -- # rpc_cmd bdev_get_bdevs -b 6409ac31-2641-413d-9b5c-6de171df4a6b 00:14:58.295 12:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.295 12:34:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.295 12:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.295 12:34:40 -- lvol/rename.sh@55 -- # lvol='[ 00:14:58.295 { 00:14:58.295 "name": "6409ac31-2641-413d-9b5c-6de171df4a6b", 00:14:58.295 "aliases": [ 00:14:58.295 "lvs_new/lvol_test0" 00:14:58.295 ], 00:14:58.295 "product_name": "Logical Volume", 00:14:58.295 "block_size": 512, 00:14:58.295 "num_blocks": 57344, 00:14:58.295 "uuid": "6409ac31-2641-413d-9b5c-6de171df4a6b", 00:14:58.295 "assigned_rate_limits": { 00:14:58.295 "rw_ios_per_sec": 0, 00:14:58.295 "rw_mbytes_per_sec": 0, 00:14:58.295 "r_mbytes_per_sec": 0, 00:14:58.295 "w_mbytes_per_sec": 0 00:14:58.295 }, 00:14:58.295 "claimed": false, 00:14:58.295 "zoned": false, 00:14:58.295 "supported_io_types": { 00:14:58.295 "read": true, 00:14:58.295 "write": true, 00:14:58.295 "unmap": true, 00:14:58.295 "write_zeroes": true, 00:14:58.295 "flush": false, 00:14:58.295 "reset": true, 00:14:58.295 "compare": false, 00:14:58.295 "compare_and_write": false, 00:14:58.295 "abort": false, 00:14:58.295 "nvme_admin": false, 00:14:58.295 "nvme_io": false 00:14:58.295 }, 00:14:58.295 "memory_domains": [ 00:14:58.295 { 00:14:58.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.295 "dma_device_type": 2 00:14:58.295 } 00:14:58.295 ], 00:14:58.295 "driver_specific": { 00:14:58.295 "lvol": { 00:14:58.295 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:58.295 "base_bdev": "Malloc0", 00:14:58.295 "thin_provision": false, 00:14:58.295 "snapshot": false, 00:14:58.295 "clone": false, 00:14:58.295 "esnap_clone": false 00:14:58.295 } 00:14:58.295 } 00:14:58.295 } 00:14:58.295 ]' 00:14:58.295 12:34:40 -- lvol/rename.sh@56 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:58.295 12:34:40 -- lvol/rename.sh@56 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:58.295 12:34:40 -- lvol/rename.sh@57 -- # jq -r '.[0].block_size' 00:14:58.295 12:34:40 -- lvol/rename.sh@57 -- # '[' 512 = 512 ']' 00:14:58.295 12:34:40 -- lvol/rename.sh@58 -- # jq -r '.[0].num_blocks' 00:14:58.295 12:34:40 -- lvol/rename.sh@58 -- # '[' 57344 = 57344 ']' 00:14:58.295 12:34:40 -- lvol/rename.sh@59 -- # jq -r '.[0].aliases|sort' 00:14:58.295 12:34:40 -- lvol/rename.sh@59 -- # jq '.|sort' 00:14:58.553 12:34:40 -- lvol/rename.sh@59 -- # '[' '[ 00:14:58.553 "lvs_new/lvol_test0" 00:14:58.553 ]' = '[ 00:14:58.553 "lvs_new/lvol_test0" 00:14:58.553 ]' ']' 00:14:58.553 12:34:40 -- lvol/rename.sh@54 -- # for i in "${!bdev_uuids[@]}" 00:14:58.554 12:34:40 -- lvol/rename.sh@55 -- # rpc_cmd bdev_get_bdevs -b 8e5091bc-ea88-4c21-a8b5-afc37b01355b 00:14:58.554 12:34:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.554 12:34:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.554 12:34:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.554 12:34:40 -- lvol/rename.sh@55 -- # lvol='[ 00:14:58.554 { 00:14:58.554 "name": "8e5091bc-ea88-4c21-a8b5-afc37b01355b", 00:14:58.554 "aliases": [ 00:14:58.554 "lvs_new/lvol_test1" 
00:14:58.554 ], 00:14:58.554 "product_name": "Logical Volume", 00:14:58.554 "block_size": 512, 00:14:58.554 "num_blocks": 57344, 00:14:58.554 "uuid": "8e5091bc-ea88-4c21-a8b5-afc37b01355b", 00:14:58.554 "assigned_rate_limits": { 00:14:58.554 "rw_ios_per_sec": 0, 00:14:58.554 "rw_mbytes_per_sec": 0, 00:14:58.554 "r_mbytes_per_sec": 0, 00:14:58.554 "w_mbytes_per_sec": 0 00:14:58.554 }, 00:14:58.554 "claimed": false, 00:14:58.554 "zoned": false, 00:14:58.554 "supported_io_types": { 00:14:58.554 "read": true, 00:14:58.554 "write": true, 00:14:58.554 "unmap": true, 00:14:58.554 "write_zeroes": true, 00:14:58.554 "flush": false, 00:14:58.554 "reset": true, 00:14:58.554 "compare": false, 00:14:58.554 "compare_and_write": false, 00:14:58.554 "abort": false, 00:14:58.554 "nvme_admin": false, 00:14:58.554 "nvme_io": false 00:14:58.554 }, 00:14:58.554 "memory_domains": [ 00:14:58.554 { 00:14:58.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.554 "dma_device_type": 2 00:14:58.554 } 00:14:58.554 ], 00:14:58.554 "driver_specific": { 00:14:58.554 "lvol": { 00:14:58.554 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:58.554 "base_bdev": "Malloc0", 00:14:58.554 "thin_provision": false, 00:14:58.554 "snapshot": false, 00:14:58.554 "clone": false, 00:14:58.554 "esnap_clone": false 00:14:58.554 } 00:14:58.554 } 00:14:58.554 } 00:14:58.554 ]' 00:14:58.554 12:34:40 -- lvol/rename.sh@56 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:58.554 12:34:40 -- lvol/rename.sh@56 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:58.554 12:34:40 -- lvol/rename.sh@57 -- # jq -r '.[0].block_size' 00:14:58.554 12:34:40 -- lvol/rename.sh@57 -- # '[' 512 = 512 ']' 00:14:58.554 12:34:40 -- lvol/rename.sh@58 -- # jq -r '.[0].num_blocks' 00:14:58.554 12:34:41 -- lvol/rename.sh@58 -- # '[' 57344 = 57344 ']' 00:14:58.554 12:34:41 -- lvol/rename.sh@59 -- # jq -r '.[0].aliases|sort' 00:14:58.554 12:34:41 -- lvol/rename.sh@59 -- # jq '.|sort' 00:14:58.812 12:34:41 -- lvol/rename.sh@59 -- # '[' '[ 00:14:58.812 "lvs_new/lvol_test1" 00:14:58.812 ]' = '[ 00:14:58.812 "lvs_new/lvol_test1" 00:14:58.812 ]' ']' 00:14:58.812 12:34:41 -- lvol/rename.sh@54 -- # for i in "${!bdev_uuids[@]}" 00:14:58.812 12:34:41 -- lvol/rename.sh@55 -- # rpc_cmd bdev_get_bdevs -b 25b95e55-9fe7-4df9-ba32-f12b1427e31a 00:14:58.812 12:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.812 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:58.812 12:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.812 12:34:41 -- lvol/rename.sh@55 -- # lvol='[ 00:14:58.812 { 00:14:58.812 "name": "25b95e55-9fe7-4df9-ba32-f12b1427e31a", 00:14:58.812 "aliases": [ 00:14:58.812 "lvs_new/lvol_test2" 00:14:58.812 ], 00:14:58.812 "product_name": "Logical Volume", 00:14:58.812 "block_size": 512, 00:14:58.812 "num_blocks": 57344, 00:14:58.812 "uuid": "25b95e55-9fe7-4df9-ba32-f12b1427e31a", 00:14:58.812 "assigned_rate_limits": { 00:14:58.812 "rw_ios_per_sec": 0, 00:14:58.812 "rw_mbytes_per_sec": 0, 00:14:58.812 "r_mbytes_per_sec": 0, 00:14:58.812 "w_mbytes_per_sec": 0 00:14:58.812 }, 00:14:58.812 "claimed": false, 00:14:58.812 "zoned": false, 00:14:58.812 "supported_io_types": { 00:14:58.812 "read": true, 00:14:58.812 "write": true, 00:14:58.812 "unmap": true, 00:14:58.812 "write_zeroes": true, 00:14:58.812 "flush": false, 00:14:58.812 "reset": true, 00:14:58.812 "compare": false, 00:14:58.812 "compare_and_write": false, 00:14:58.812 "abort": false, 00:14:58.812 
"nvme_admin": false, 00:14:58.812 "nvme_io": false 00:14:58.812 }, 00:14:58.812 "memory_domains": [ 00:14:58.812 { 00:14:58.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.812 "dma_device_type": 2 00:14:58.812 } 00:14:58.812 ], 00:14:58.812 "driver_specific": { 00:14:58.812 "lvol": { 00:14:58.812 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:58.812 "base_bdev": "Malloc0", 00:14:58.812 "thin_provision": false, 00:14:58.812 "snapshot": false, 00:14:58.812 "clone": false, 00:14:58.812 "esnap_clone": false 00:14:58.812 } 00:14:58.812 } 00:14:58.812 } 00:14:58.812 ]' 00:14:58.812 12:34:41 -- lvol/rename.sh@56 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:58.812 12:34:41 -- lvol/rename.sh@56 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:58.812 12:34:41 -- lvol/rename.sh@57 -- # jq -r '.[0].block_size' 00:14:58.812 12:34:41 -- lvol/rename.sh@57 -- # '[' 512 = 512 ']' 00:14:58.812 12:34:41 -- lvol/rename.sh@58 -- # jq -r '.[0].num_blocks' 00:14:58.812 12:34:41 -- lvol/rename.sh@58 -- # '[' 57344 = 57344 ']' 00:14:58.812 12:34:41 -- lvol/rename.sh@59 -- # jq -r '.[0].aliases|sort' 00:14:58.812 12:34:41 -- lvol/rename.sh@59 -- # jq '.|sort' 00:14:59.070 12:34:41 -- lvol/rename.sh@59 -- # '[' '[ 00:14:59.070 "lvs_new/lvol_test2" 00:14:59.070 ]' = '[ 00:14:59.070 "lvs_new/lvol_test2" 00:14:59.070 ]' ']' 00:14:59.070 12:34:41 -- lvol/rename.sh@54 -- # for i in "${!bdev_uuids[@]}" 00:14:59.070 12:34:41 -- lvol/rename.sh@55 -- # rpc_cmd bdev_get_bdevs -b 0a78883e-1e46-41bf-9dfb-e519254214d9 00:14:59.070 12:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.071 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.071 12:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.071 12:34:41 -- lvol/rename.sh@55 -- # lvol='[ 00:14:59.071 { 00:14:59.071 "name": "0a78883e-1e46-41bf-9dfb-e519254214d9", 00:14:59.071 "aliases": [ 00:14:59.071 "lvs_new/lvol_test3" 00:14:59.071 ], 00:14:59.071 "product_name": "Logical Volume", 00:14:59.071 "block_size": 512, 00:14:59.071 "num_blocks": 57344, 00:14:59.071 "uuid": "0a78883e-1e46-41bf-9dfb-e519254214d9", 00:14:59.071 "assigned_rate_limits": { 00:14:59.071 "rw_ios_per_sec": 0, 00:14:59.071 "rw_mbytes_per_sec": 0, 00:14:59.071 "r_mbytes_per_sec": 0, 00:14:59.071 "w_mbytes_per_sec": 0 00:14:59.071 }, 00:14:59.071 "claimed": false, 00:14:59.071 "zoned": false, 00:14:59.071 "supported_io_types": { 00:14:59.071 "read": true, 00:14:59.071 "write": true, 00:14:59.071 "unmap": true, 00:14:59.071 "write_zeroes": true, 00:14:59.071 "flush": false, 00:14:59.071 "reset": true, 00:14:59.071 "compare": false, 00:14:59.071 "compare_and_write": false, 00:14:59.071 "abort": false, 00:14:59.071 "nvme_admin": false, 00:14:59.071 "nvme_io": false 00:14:59.071 }, 00:14:59.071 "memory_domains": [ 00:14:59.071 { 00:14:59.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.071 "dma_device_type": 2 00:14:59.071 } 00:14:59.071 ], 00:14:59.071 "driver_specific": { 00:14:59.071 "lvol": { 00:14:59.071 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:59.071 "base_bdev": "Malloc0", 00:14:59.071 "thin_provision": false, 00:14:59.071 "snapshot": false, 00:14:59.071 "clone": false, 00:14:59.071 "esnap_clone": false 00:14:59.071 } 00:14:59.071 } 00:14:59.071 } 00:14:59.071 ]' 00:14:59.071 12:34:41 -- lvol/rename.sh@56 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:59.071 12:34:41 -- lvol/rename.sh@56 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 
2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:59.071 12:34:41 -- lvol/rename.sh@57 -- # jq -r '.[0].block_size' 00:14:59.071 12:34:41 -- lvol/rename.sh@57 -- # '[' 512 = 512 ']' 00:14:59.071 12:34:41 -- lvol/rename.sh@58 -- # jq -r '.[0].num_blocks' 00:14:59.071 12:34:41 -- lvol/rename.sh@58 -- # '[' 57344 = 57344 ']' 00:14:59.071 12:34:41 -- lvol/rename.sh@59 -- # jq -r '.[0].aliases|sort' 00:14:59.330 12:34:41 -- lvol/rename.sh@59 -- # jq '.|sort' 00:14:59.330 12:34:41 -- lvol/rename.sh@59 -- # '[' '[ 00:14:59.330 "lvs_new/lvol_test3" 00:14:59.330 ]' = '[ 00:14:59.330 "lvs_new/lvol_test3" 00:14:59.330 ]' ']' 00:14:59.330 12:34:41 -- lvol/rename.sh@64 -- # bdev_names=("lbd_new"{0..3}) 00:14:59.330 12:34:41 -- lvol/rename.sh@65 -- # new_bdev_aliases=() 00:14:59.330 12:34:41 -- lvol/rename.sh@66 -- # for bdev_name in "${bdev_names[@]}" 00:14:59.330 12:34:41 -- lvol/rename.sh@67 -- # new_bdev_aliases+=("$new_lvs_name/$bdev_name") 00:14:59.330 12:34:41 -- lvol/rename.sh@66 -- # for bdev_name in "${bdev_names[@]}" 00:14:59.330 12:34:41 -- lvol/rename.sh@67 -- # new_bdev_aliases+=("$new_lvs_name/$bdev_name") 00:14:59.330 12:34:41 -- lvol/rename.sh@66 -- # for bdev_name in "${bdev_names[@]}" 00:14:59.330 12:34:41 -- lvol/rename.sh@67 -- # new_bdev_aliases+=("$new_lvs_name/$bdev_name") 00:14:59.330 12:34:41 -- lvol/rename.sh@66 -- # for bdev_name in "${bdev_names[@]}" 00:14:59.330 12:34:41 -- lvol/rename.sh@67 -- # new_bdev_aliases+=("$new_lvs_name/$bdev_name") 00:14:59.330 12:34:41 -- lvol/rename.sh@69 -- # for i in "${!bdev_names[@]}" 00:14:59.330 12:34:41 -- lvol/rename.sh@70 -- # rpc_cmd bdev_lvol_rename lvs_new/lvol_test0 lbd_new0 00:14:59.330 12:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.330 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.330 12:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.330 12:34:41 -- lvol/rename.sh@71 -- # rpc_cmd bdev_get_bdevs -b 6409ac31-2641-413d-9b5c-6de171df4a6b 00:14:59.331 12:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.331 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.331 12:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.331 12:34:41 -- lvol/rename.sh@71 -- # lvol='[ 00:14:59.331 { 00:14:59.331 "name": "6409ac31-2641-413d-9b5c-6de171df4a6b", 00:14:59.331 "aliases": [ 00:14:59.331 "lvs_new/lbd_new0" 00:14:59.331 ], 00:14:59.331 "product_name": "Logical Volume", 00:14:59.331 "block_size": 512, 00:14:59.331 "num_blocks": 57344, 00:14:59.331 "uuid": "6409ac31-2641-413d-9b5c-6de171df4a6b", 00:14:59.331 "assigned_rate_limits": { 00:14:59.331 "rw_ios_per_sec": 0, 00:14:59.331 "rw_mbytes_per_sec": 0, 00:14:59.331 "r_mbytes_per_sec": 0, 00:14:59.331 "w_mbytes_per_sec": 0 00:14:59.331 }, 00:14:59.331 "claimed": false, 00:14:59.331 "zoned": false, 00:14:59.331 "supported_io_types": { 00:14:59.331 "read": true, 00:14:59.331 "write": true, 00:14:59.331 "unmap": true, 00:14:59.331 "write_zeroes": true, 00:14:59.331 "flush": false, 00:14:59.331 "reset": true, 00:14:59.331 "compare": false, 00:14:59.331 "compare_and_write": false, 00:14:59.331 "abort": false, 00:14:59.331 "nvme_admin": false, 00:14:59.331 "nvme_io": false 00:14:59.331 }, 00:14:59.331 "memory_domains": [ 00:14:59.331 { 00:14:59.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.331 "dma_device_type": 2 00:14:59.331 } 00:14:59.331 ], 00:14:59.331 "driver_specific": { 00:14:59.331 "lvol": { 00:14:59.331 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:59.331 "base_bdev": 
"Malloc0", 00:14:59.331 "thin_provision": false, 00:14:59.331 "snapshot": false, 00:14:59.331 "clone": false, 00:14:59.331 "esnap_clone": false 00:14:59.331 } 00:14:59.331 } 00:14:59.331 } 00:14:59.331 ]' 00:14:59.331 12:34:41 -- lvol/rename.sh@72 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:59.331 12:34:41 -- lvol/rename.sh@72 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:59.331 12:34:41 -- lvol/rename.sh@73 -- # jq -r '.[0].block_size' 00:14:59.331 12:34:41 -- lvol/rename.sh@73 -- # '[' 512 = 512 ']' 00:14:59.331 12:34:41 -- lvol/rename.sh@74 -- # jq -r '.[0].num_blocks' 00:14:59.592 12:34:41 -- lvol/rename.sh@74 -- # '[' 57344 = 57344 ']' 00:14:59.592 12:34:41 -- lvol/rename.sh@75 -- # jq -r '.[0].aliases|sort' 00:14:59.592 12:34:41 -- lvol/rename.sh@75 -- # jq '.|sort' 00:14:59.592 12:34:41 -- lvol/rename.sh@75 -- # '[' '[ 00:14:59.592 "lvs_new/lbd_new0" 00:14:59.592 ]' = '[ 00:14:59.592 "lvs_new/lbd_new0" 00:14:59.592 ]' ']' 00:14:59.592 12:34:41 -- lvol/rename.sh@69 -- # for i in "${!bdev_names[@]}" 00:14:59.592 12:34:41 -- lvol/rename.sh@70 -- # rpc_cmd bdev_lvol_rename lvs_new/lvol_test1 lbd_new1 00:14:59.592 12:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.592 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.592 12:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.592 12:34:41 -- lvol/rename.sh@71 -- # rpc_cmd bdev_get_bdevs -b 8e5091bc-ea88-4c21-a8b5-afc37b01355b 00:14:59.592 12:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.592 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.592 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.592 12:34:42 -- lvol/rename.sh@71 -- # lvol='[ 00:14:59.592 { 00:14:59.592 "name": "8e5091bc-ea88-4c21-a8b5-afc37b01355b", 00:14:59.592 "aliases": [ 00:14:59.592 "lvs_new/lbd_new1" 00:14:59.592 ], 00:14:59.592 "product_name": "Logical Volume", 00:14:59.592 "block_size": 512, 00:14:59.592 "num_blocks": 57344, 00:14:59.592 "uuid": "8e5091bc-ea88-4c21-a8b5-afc37b01355b", 00:14:59.592 "assigned_rate_limits": { 00:14:59.592 "rw_ios_per_sec": 0, 00:14:59.592 "rw_mbytes_per_sec": 0, 00:14:59.592 "r_mbytes_per_sec": 0, 00:14:59.592 "w_mbytes_per_sec": 0 00:14:59.592 }, 00:14:59.592 "claimed": false, 00:14:59.592 "zoned": false, 00:14:59.592 "supported_io_types": { 00:14:59.592 "read": true, 00:14:59.592 "write": true, 00:14:59.592 "unmap": true, 00:14:59.592 "write_zeroes": true, 00:14:59.592 "flush": false, 00:14:59.592 "reset": true, 00:14:59.592 "compare": false, 00:14:59.592 "compare_and_write": false, 00:14:59.592 "abort": false, 00:14:59.592 "nvme_admin": false, 00:14:59.592 "nvme_io": false 00:14:59.592 }, 00:14:59.592 "memory_domains": [ 00:14:59.592 { 00:14:59.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.592 "dma_device_type": 2 00:14:59.592 } 00:14:59.592 ], 00:14:59.592 "driver_specific": { 00:14:59.592 "lvol": { 00:14:59.592 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:59.592 "base_bdev": "Malloc0", 00:14:59.592 "thin_provision": false, 00:14:59.592 "snapshot": false, 00:14:59.592 "clone": false, 00:14:59.592 "esnap_clone": false 00:14:59.592 } 00:14:59.592 } 00:14:59.592 } 00:14:59.592 ]' 00:14:59.592 12:34:42 -- lvol/rename.sh@72 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:59.592 12:34:42 -- lvol/rename.sh@72 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:59.592 12:34:42 -- 
lvol/rename.sh@73 -- # jq -r '.[0].block_size' 00:14:59.592 12:34:42 -- lvol/rename.sh@73 -- # '[' 512 = 512 ']' 00:14:59.592 12:34:42 -- lvol/rename.sh@74 -- # jq -r '.[0].num_blocks' 00:14:59.856 12:34:42 -- lvol/rename.sh@74 -- # '[' 57344 = 57344 ']' 00:14:59.856 12:34:42 -- lvol/rename.sh@75 -- # jq -r '.[0].aliases|sort' 00:14:59.856 12:34:42 -- lvol/rename.sh@75 -- # jq '.|sort' 00:14:59.856 12:34:42 -- lvol/rename.sh@75 -- # '[' '[ 00:14:59.856 "lvs_new/lbd_new1" 00:14:59.856 ]' = '[ 00:14:59.856 "lvs_new/lbd_new1" 00:14:59.856 ]' ']' 00:14:59.856 12:34:42 -- lvol/rename.sh@69 -- # for i in "${!bdev_names[@]}" 00:14:59.856 12:34:42 -- lvol/rename.sh@70 -- # rpc_cmd bdev_lvol_rename lvs_new/lvol_test2 lbd_new2 00:14:59.856 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.856 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:14:59.856 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.856 12:34:42 -- lvol/rename.sh@71 -- # rpc_cmd bdev_get_bdevs -b 25b95e55-9fe7-4df9-ba32-f12b1427e31a 00:14:59.856 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.856 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:14:59.856 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.856 12:34:42 -- lvol/rename.sh@71 -- # lvol='[ 00:14:59.856 { 00:14:59.856 "name": "25b95e55-9fe7-4df9-ba32-f12b1427e31a", 00:14:59.856 "aliases": [ 00:14:59.856 "lvs_new/lbd_new2" 00:14:59.856 ], 00:14:59.856 "product_name": "Logical Volume", 00:14:59.856 "block_size": 512, 00:14:59.856 "num_blocks": 57344, 00:14:59.856 "uuid": "25b95e55-9fe7-4df9-ba32-f12b1427e31a", 00:14:59.856 "assigned_rate_limits": { 00:14:59.856 "rw_ios_per_sec": 0, 00:14:59.856 "rw_mbytes_per_sec": 0, 00:14:59.856 "r_mbytes_per_sec": 0, 00:14:59.856 "w_mbytes_per_sec": 0 00:14:59.856 }, 00:14:59.856 "claimed": false, 00:14:59.856 "zoned": false, 00:14:59.856 "supported_io_types": { 00:14:59.856 "read": true, 00:14:59.856 "write": true, 00:14:59.856 "unmap": true, 00:14:59.856 "write_zeroes": true, 00:14:59.856 "flush": false, 00:14:59.856 "reset": true, 00:14:59.856 "compare": false, 00:14:59.856 "compare_and_write": false, 00:14:59.856 "abort": false, 00:14:59.856 "nvme_admin": false, 00:14:59.856 "nvme_io": false 00:14:59.856 }, 00:14:59.856 "memory_domains": [ 00:14:59.856 { 00:14:59.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.856 "dma_device_type": 2 00:14:59.856 } 00:14:59.856 ], 00:14:59.856 "driver_specific": { 00:14:59.856 "lvol": { 00:14:59.856 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:14:59.856 "base_bdev": "Malloc0", 00:14:59.856 "thin_provision": false, 00:14:59.856 "snapshot": false, 00:14:59.856 "clone": false, 00:14:59.856 "esnap_clone": false 00:14:59.856 } 00:14:59.856 } 00:14:59.856 } 00:14:59.856 ]' 00:14:59.856 12:34:42 -- lvol/rename.sh@72 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:14:59.856 12:34:42 -- lvol/rename.sh@72 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:14:59.856 12:34:42 -- lvol/rename.sh@73 -- # jq -r '.[0].block_size' 00:15:00.115 12:34:42 -- lvol/rename.sh@73 -- # '[' 512 = 512 ']' 00:15:00.115 12:34:42 -- lvol/rename.sh@74 -- # jq -r '.[0].num_blocks' 00:15:00.115 12:34:42 -- lvol/rename.sh@74 -- # '[' 57344 = 57344 ']' 00:15:00.115 12:34:42 -- lvol/rename.sh@75 -- # jq -r '.[0].aliases|sort' 00:15:00.115 12:34:42 -- lvol/rename.sh@75 -- # jq '.|sort' 00:15:00.115 12:34:42 -- lvol/rename.sh@75 -- # '[' '[ 00:15:00.115 "lvs_new/lbd_new2" 
00:15:00.115 ]' = '[ 00:15:00.115 "lvs_new/lbd_new2" 00:15:00.115 ]' ']' 00:15:00.115 12:34:42 -- lvol/rename.sh@69 -- # for i in "${!bdev_names[@]}" 00:15:00.115 12:34:42 -- lvol/rename.sh@70 -- # rpc_cmd bdev_lvol_rename lvs_new/lvol_test3 lbd_new3 00:15:00.115 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.115 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.115 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.115 12:34:42 -- lvol/rename.sh@71 -- # rpc_cmd bdev_get_bdevs -b 0a78883e-1e46-41bf-9dfb-e519254214d9 00:15:00.115 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.115 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.115 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.115 12:34:42 -- lvol/rename.sh@71 -- # lvol='[ 00:15:00.115 { 00:15:00.115 "name": "0a78883e-1e46-41bf-9dfb-e519254214d9", 00:15:00.115 "aliases": [ 00:15:00.115 "lvs_new/lbd_new3" 00:15:00.115 ], 00:15:00.115 "product_name": "Logical Volume", 00:15:00.115 "block_size": 512, 00:15:00.115 "num_blocks": 57344, 00:15:00.115 "uuid": "0a78883e-1e46-41bf-9dfb-e519254214d9", 00:15:00.115 "assigned_rate_limits": { 00:15:00.115 "rw_ios_per_sec": 0, 00:15:00.115 "rw_mbytes_per_sec": 0, 00:15:00.115 "r_mbytes_per_sec": 0, 00:15:00.115 "w_mbytes_per_sec": 0 00:15:00.115 }, 00:15:00.115 "claimed": false, 00:15:00.115 "zoned": false, 00:15:00.115 "supported_io_types": { 00:15:00.115 "read": true, 00:15:00.115 "write": true, 00:15:00.115 "unmap": true, 00:15:00.115 "write_zeroes": true, 00:15:00.115 "flush": false, 00:15:00.115 "reset": true, 00:15:00.115 "compare": false, 00:15:00.115 "compare_and_write": false, 00:15:00.115 "abort": false, 00:15:00.115 "nvme_admin": false, 00:15:00.115 "nvme_io": false 00:15:00.115 }, 00:15:00.115 "memory_domains": [ 00:15:00.115 { 00:15:00.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.115 "dma_device_type": 2 00:15:00.115 } 00:15:00.115 ], 00:15:00.115 "driver_specific": { 00:15:00.115 "lvol": { 00:15:00.115 "lvol_store_uuid": "2a2f7f07-26f7-473e-83cc-963872b6baf2", 00:15:00.115 "base_bdev": "Malloc0", 00:15:00.115 "thin_provision": false, 00:15:00.115 "snapshot": false, 00:15:00.115 "clone": false, 00:15:00.115 "esnap_clone": false 00:15:00.115 } 00:15:00.115 } 00:15:00.115 } 00:15:00.115 ]' 00:15:00.115 12:34:42 -- lvol/rename.sh@72 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:00.375 12:34:42 -- lvol/rename.sh@72 -- # '[' 2a2f7f07-26f7-473e-83cc-963872b6baf2 = 2a2f7f07-26f7-473e-83cc-963872b6baf2 ']' 00:15:00.375 12:34:42 -- lvol/rename.sh@73 -- # jq -r '.[0].block_size' 00:15:00.375 12:34:42 -- lvol/rename.sh@73 -- # '[' 512 = 512 ']' 00:15:00.375 12:34:42 -- lvol/rename.sh@74 -- # jq -r '.[0].num_blocks' 00:15:00.375 12:34:42 -- lvol/rename.sh@74 -- # '[' 57344 = 57344 ']' 00:15:00.375 12:34:42 -- lvol/rename.sh@75 -- # jq -r '.[0].aliases|sort' 00:15:00.375 12:34:42 -- lvol/rename.sh@75 -- # jq '.|sort' 00:15:00.375 12:34:42 -- lvol/rename.sh@75 -- # '[' '[ 00:15:00.375 "lvs_new/lbd_new3" 00:15:00.375 ]' = '[ 00:15:00.375 "lvs_new/lbd_new3" 00:15:00.375 ]' ']' 00:15:00.375 12:34:42 -- lvol/rename.sh@79 -- # for bdev in "${new_bdev_aliases[@]}" 00:15:00.375 12:34:42 -- lvol/rename.sh@80 -- # rpc_cmd bdev_lvol_delete lvs_new/lbd_new0 00:15:00.375 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.375 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.375 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.375 
12:34:42 -- lvol/rename.sh@79 -- # for bdev in "${new_bdev_aliases[@]}" 00:15:00.375 12:34:42 -- lvol/rename.sh@80 -- # rpc_cmd bdev_lvol_delete lvs_new/lbd_new1 00:15:00.375 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.375 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.375 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.375 12:34:42 -- lvol/rename.sh@79 -- # for bdev in "${new_bdev_aliases[@]}" 00:15:00.375 12:34:42 -- lvol/rename.sh@80 -- # rpc_cmd bdev_lvol_delete lvs_new/lbd_new2 00:15:00.375 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.375 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.634 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.634 12:34:42 -- lvol/rename.sh@79 -- # for bdev in "${new_bdev_aliases[@]}" 00:15:00.634 12:34:42 -- lvol/rename.sh@80 -- # rpc_cmd bdev_lvol_delete lvs_new/lbd_new3 00:15:00.634 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.634 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.634 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.634 12:34:42 -- lvol/rename.sh@82 -- # rpc_cmd bdev_lvol_delete_lvstore -u 2a2f7f07-26f7-473e-83cc-963872b6baf2 00:15:00.634 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.634 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.634 12:34:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.634 12:34:42 -- lvol/rename.sh@83 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:00.634 12:34:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.634 12:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.893 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.893 12:34:43 -- lvol/rename.sh@84 -- # check_leftover_devices 00:15:00.893 12:34:43 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:00.893 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.893 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:00.893 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.893 12:34:43 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:00.893 12:34:43 -- lvol/common.sh@26 -- # jq length 00:15:00.893 12:34:43 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:00.893 12:34:43 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:00.893 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.893 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:00.893 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.893 12:34:43 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:00.893 12:34:43 -- lvol/common.sh@28 -- # jq length 00:15:00.893 ************************************ 00:15:00.893 END TEST test_rename_positive 00:15:00.893 ************************************ 00:15:00.893 12:34:43 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:00.893 00:15:00.893 real 0m4.396s 00:15:00.893 user 0m3.280s 00:15:00.893 sys 0m0.389s 00:15:00.894 12:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.894 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:00.894 12:34:43 -- lvol/rename.sh@218 -- # run_test test_rename_lvs_negative test_rename_lvs_negative 00:15:00.894 12:34:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:00.894 12:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.894 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.152 
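That closes test_rename_positive. The whole test reduces to the two rename RPCs: bdev_lvol_rename_lvstore renames the store itself, flipping the alias prefix of every contained lvol from lvs_test/ to lvs_new/ while names and UUIDs stay put, and bdev_lvol_rename renames a single lvol within its store. A condensed hand-run sketch of the flow the log just verified (one lvol shown; the log repeats the per-lvol steps for all four, and the bdevs are looked up here by alias):

    ./scripts/rpc.py bdev_lvol_rename_lvstore lvs_test lvs_new
    ./scripts/rpc.py bdev_get_bdevs -b lvs_new/lvol_test0 | jq '.[0].aliases'   # [ "lvs_new/lvol_test0" ]

    ./scripts/rpc.py bdev_lvol_rename lvs_new/lvol_test0 lbd_new0
    ./scripts/rpc.py bdev_get_bdevs -b lvs_new/lbd_new0 | jq '.[0].aliases'     # [ "lvs_new/lbd_new0" ]

    # teardown mirrors the log: lvols first, then the store, then the malloc bdev
    for i in 0 1 2 3; do ./scripts/rpc.py bdev_lvol_delete "lvs_new/lbd_new$i"; done
    ./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs_uuid"
    ./scripts/rpc.py bdev_malloc_delete Malloc0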
************************************ 00:15:01.152 START TEST test_rename_lvs_negative 00:15:01.152 ************************************ 00:15:01.152 12:34:43 -- common/autotest_common.sh@1104 -- # test_rename_lvs_negative 00:15:01.152 12:34:43 -- lvol/rename.sh@93 -- # rpc_cmd bdev_lvol_rename_lvstore NOTEXIST WHATEVER 00:15:01.152 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.153 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.153 request: 00:15:01.153 { 00:15:01.153 "old_name": "NOTEXIST", 00:15:01.153 "new_name": "WHATEVER", 00:15:01.153 "method": "bdev_lvol_rename_lvstore", 00:15:01.153 "req_id": 1 00:15:01.153 } 00:15:01.153 Got JSON-RPC error response 00:15:01.153 response: 00:15:01.153 { 00:15:01.153 "code": -2, 00:15:01.153 "message": "Lvol store NOTEXIST not found" 00:15:01.153 } 00:15:01.153 12:34:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:01.153 12:34:43 -- lvol/rename.sh@96 -- # rpc_cmd bdev_malloc_create 128 512 00:15:01.153 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.153 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.153 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.153 12:34:43 -- lvol/rename.sh@96 -- # malloc_name1=Malloc1 00:15:01.153 12:34:43 -- lvol/rename.sh@97 -- # rpc_cmd bdev_malloc_create 128 512 00:15:01.153 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.153 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.412 12:34:43 -- lvol/rename.sh@97 -- # malloc_name2=Malloc2 00:15:01.412 12:34:43 -- lvol/rename.sh@100 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test1 00:15:01.412 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.412 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.412 12:34:43 -- lvol/rename.sh@100 -- # lvs_uuid1=4eccc2a1-fe4d-4d89-b409-617c1346afc0 00:15:01.412 12:34:43 -- lvol/rename.sh@101 -- # rpc_cmd bdev_lvol_create_lvstore Malloc2 lvs_test2 00:15:01.412 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.412 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.412 12:34:43 -- lvol/rename.sh@101 -- # lvs_uuid2=d37b74da-27f3-4887-89f7-036b5d8ed02f 00:15:01.412 12:34:43 -- lvol/rename.sh@104 -- # bdev_names_1=("lvol_test_1_"{0..3}) 00:15:01.412 12:34:43 -- lvol/rename.sh@105 -- # bdev_names_2=("lvol_test_2_"{0..3}) 00:15:01.412 12:34:43 -- lvol/rename.sh@106 -- # bdev_aliases_1=("lvs_test1/lvol_test_1_"{0..3}) 00:15:01.412 12:34:43 -- lvol/rename.sh@107 -- # bdev_aliases_2=("lvs_test2/lvol_test_2_"{0..3}) 00:15:01.412 12:34:43 -- lvol/rename.sh@110 -- # round_down 31 00:15:01.412 12:34:43 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:01.412 12:34:43 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:15:01.412 12:34:43 -- lvol/common.sh@36 -- # echo 28 00:15:01.412 12:34:43 -- lvol/rename.sh@110 -- # lvol_size_mb=28 00:15:01.412 12:34:43 -- lvol/rename.sh@111 -- # lvol_size=29360128 00:15:01.412 12:34:43 -- lvol/rename.sh@114 -- # bdev_uuids_1=() 00:15:01.412 12:34:43 -- lvol/rename.sh@115 -- # bdev_uuids_2=() 00:15:01.412 12:34:43 -- lvol/rename.sh@116 -- # for i in "${!bdev_names_1[@]}" 00:15:01.412 12:34:43 -- lvol/rename.sh@117 -- # rpc_cmd bdev_lvol_create -u 4eccc2a1-fe4d-4d89-b409-617c1346afc0 lvol_test_1_0 28 
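test_rename_lvs_negative opens with the failure case: renaming a store that does not exist must come back as error -2 ("Lvol store NOTEXIST not found") without creating anything, and the fixture it then builds is two independent stores whose lvols deliberately carry parallel names (lvol_test_1_N on lvs_test1, lvol_test_2_N on lvs_test2), presumably so that later rename attempts can be made to collide. Run by hand, the opening steps would look roughly like this sketch (UUIDs differ per run; tr -d '"' again only guards against JSON-quoted output):

    ./scripts/rpc.py bdev_lvol_rename_lvstore NOTEXIST WHATEVER    # expected to fail: Lvol store NOTEXIST not found

    ./scripts/rpc.py bdev_malloc_create 128 512                    # -> Malloc1
    ./scripts/rpc.py bdev_malloc_create 128 512                    # -> Malloc2
    lvs1=$(./scripts/rpc.py bdev_lvol_create_lvstore Malloc1 lvs_test1 | tr -d '"')
    lvs2=$(./scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_test2 | tr -d '"')
    ./scripts/rpc.py bdev_lvol_create -u "$lvs1" lvol_test_1_0 28
    ./scripts/rpc.py bdev_lvol_create -u "$lvs2" lvol_test_2_0 28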
00:15:01.412 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.412 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.412 12:34:43 -- lvol/rename.sh@117 -- # lvol_uuid=26f7a932-0971-433f-aed1-c2c5fd44fed6 00:15:01.412 12:34:43 -- lvol/rename.sh@118 -- # rpc_cmd bdev_get_bdevs -b 26f7a932-0971-433f-aed1-c2c5fd44fed6 00:15:01.412 12:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.412 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 12:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.412 12:34:43 -- lvol/rename.sh@118 -- # lvol='[ 00:15:01.412 { 00:15:01.412 "name": "26f7a932-0971-433f-aed1-c2c5fd44fed6", 00:15:01.412 "aliases": [ 00:15:01.412 "lvs_test1/lvol_test_1_0" 00:15:01.412 ], 00:15:01.412 "product_name": "Logical Volume", 00:15:01.412 "block_size": 512, 00:15:01.412 "num_blocks": 57344, 00:15:01.412 "uuid": "26f7a932-0971-433f-aed1-c2c5fd44fed6", 00:15:01.412 "assigned_rate_limits": { 00:15:01.412 "rw_ios_per_sec": 0, 00:15:01.412 "rw_mbytes_per_sec": 0, 00:15:01.412 "r_mbytes_per_sec": 0, 00:15:01.412 "w_mbytes_per_sec": 0 00:15:01.412 }, 00:15:01.412 "claimed": false, 00:15:01.412 "zoned": false, 00:15:01.412 "supported_io_types": { 00:15:01.412 "read": true, 00:15:01.412 "write": true, 00:15:01.412 "unmap": true, 00:15:01.412 "write_zeroes": true, 00:15:01.412 "flush": false, 00:15:01.412 "reset": true, 00:15:01.412 "compare": false, 00:15:01.412 "compare_and_write": false, 00:15:01.412 "abort": false, 00:15:01.412 "nvme_admin": false, 00:15:01.412 "nvme_io": false 00:15:01.412 }, 00:15:01.412 "memory_domains": [ 00:15:01.412 { 00:15:01.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.412 "dma_device_type": 2 00:15:01.412 } 00:15:01.412 ], 00:15:01.412 "driver_specific": { 00:15:01.412 "lvol": { 00:15:01.412 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:01.412 "base_bdev": "Malloc1", 00:15:01.412 "thin_provision": false, 00:15:01.412 "snapshot": false, 00:15:01.412 "clone": false, 00:15:01.412 "esnap_clone": false 00:15:01.412 } 00:15:01.412 } 00:15:01.412 } 00:15:01.412 ]' 00:15:01.412 12:34:43 -- lvol/rename.sh@119 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:01.412 12:34:43 -- lvol/rename.sh@119 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:01.412 12:34:43 -- lvol/rename.sh@120 -- # jq -r '.[0].block_size' 00:15:01.412 12:34:43 -- lvol/rename.sh@120 -- # '[' 512 = 512 ']' 00:15:01.412 12:34:43 -- lvol/rename.sh@121 -- # jq -r '.[0].num_blocks' 00:15:01.671 12:34:43 -- lvol/rename.sh@121 -- # '[' 57344 = 57344 ']' 00:15:01.671 12:34:43 -- lvol/rename.sh@122 -- # jq '.[0].aliases|sort' 00:15:01.671 12:34:43 -- lvol/rename.sh@122 -- # jq '.|sort' 00:15:01.671 12:34:44 -- lvol/rename.sh@122 -- # '[' '[ 00:15:01.671 "lvs_test1/lvol_test_1_0" 00:15:01.671 ]' = '[ 00:15:01.671 "lvs_test1/lvol_test_1_0" 00:15:01.671 ]' ']' 00:15:01.671 12:34:44 -- lvol/rename.sh@123 -- # bdev_uuids_1+=("$lvol_uuid") 00:15:01.671 12:34:44 -- lvol/rename.sh@125 -- # rpc_cmd bdev_lvol_create -u d37b74da-27f3-4887-89f7-036b5d8ed02f lvol_test_2_0 28 00:15:01.671 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.671 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:01.671 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.671 12:34:44 -- lvol/rename.sh@125 -- # lvol_uuid=49ed579c-240d-48e6-89cd-ebc5fbbce087 
00:15:01.671 12:34:44 -- lvol/rename.sh@126 -- # rpc_cmd bdev_get_bdevs -b 49ed579c-240d-48e6-89cd-ebc5fbbce087 00:15:01.671 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.671 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:01.671 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.671 12:34:44 -- lvol/rename.sh@126 -- # lvol='[ 00:15:01.671 { 00:15:01.671 "name": "49ed579c-240d-48e6-89cd-ebc5fbbce087", 00:15:01.671 "aliases": [ 00:15:01.671 "lvs_test2/lvol_test_2_0" 00:15:01.671 ], 00:15:01.671 "product_name": "Logical Volume", 00:15:01.671 "block_size": 512, 00:15:01.671 "num_blocks": 57344, 00:15:01.671 "uuid": "49ed579c-240d-48e6-89cd-ebc5fbbce087", 00:15:01.671 "assigned_rate_limits": { 00:15:01.671 "rw_ios_per_sec": 0, 00:15:01.671 "rw_mbytes_per_sec": 0, 00:15:01.671 "r_mbytes_per_sec": 0, 00:15:01.671 "w_mbytes_per_sec": 0 00:15:01.671 }, 00:15:01.671 "claimed": false, 00:15:01.671 "zoned": false, 00:15:01.671 "supported_io_types": { 00:15:01.671 "read": true, 00:15:01.671 "write": true, 00:15:01.671 "unmap": true, 00:15:01.671 "write_zeroes": true, 00:15:01.671 "flush": false, 00:15:01.671 "reset": true, 00:15:01.671 "compare": false, 00:15:01.671 "compare_and_write": false, 00:15:01.671 "abort": false, 00:15:01.671 "nvme_admin": false, 00:15:01.671 "nvme_io": false 00:15:01.671 }, 00:15:01.671 "memory_domains": [ 00:15:01.671 { 00:15:01.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.671 "dma_device_type": 2 00:15:01.671 } 00:15:01.671 ], 00:15:01.671 "driver_specific": { 00:15:01.671 "lvol": { 00:15:01.671 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:01.671 "base_bdev": "Malloc2", 00:15:01.671 "thin_provision": false, 00:15:01.671 "snapshot": false, 00:15:01.671 "clone": false, 00:15:01.671 "esnap_clone": false 00:15:01.671 } 00:15:01.671 } 00:15:01.671 } 00:15:01.671 ]' 00:15:01.671 12:34:44 -- lvol/rename.sh@127 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:01.671 12:34:44 -- lvol/rename.sh@127 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:01.671 12:34:44 -- lvol/rename.sh@128 -- # jq -r '.[0].block_size' 00:15:01.671 12:34:44 -- lvol/rename.sh@128 -- # '[' 512 = 512 ']' 00:15:01.671 12:34:44 -- lvol/rename.sh@129 -- # jq -r '.[0].num_blocks' 00:15:01.931 12:34:44 -- lvol/rename.sh@129 -- # '[' 57344 = 57344 ']' 00:15:01.931 12:34:44 -- lvol/rename.sh@130 -- # jq '.[0].aliases|sort' 00:15:01.931 12:34:44 -- lvol/rename.sh@130 -- # jq '.|sort' 00:15:01.931 12:34:44 -- lvol/rename.sh@130 -- # '[' '[ 00:15:01.931 "lvs_test2/lvol_test_2_0" 00:15:01.931 ]' = '[ 00:15:01.931 "lvs_test2/lvol_test_2_0" 00:15:01.931 ]' ']' 00:15:01.931 12:34:44 -- lvol/rename.sh@131 -- # bdev_uuids_2+=("$lvol_uuid") 00:15:01.931 12:34:44 -- lvol/rename.sh@116 -- # for i in "${!bdev_names_1[@]}" 00:15:01.931 12:34:44 -- lvol/rename.sh@117 -- # rpc_cmd bdev_lvol_create -u 4eccc2a1-fe4d-4d89-b409-617c1346afc0 lvol_test_1_1 28 00:15:01.931 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.931 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:01.931 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.931 12:34:44 -- lvol/rename.sh@117 -- # lvol_uuid=3ea0aebf-3579-4d73-a0b3-08cbb87fa762 00:15:01.931 12:34:44 -- lvol/rename.sh@118 -- # rpc_cmd bdev_get_bdevs -b 3ea0aebf-3579-4d73-a0b3-08cbb87fa762 00:15:01.931 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.931 12:34:44 -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.931 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.931 12:34:44 -- lvol/rename.sh@118 -- # lvol='[ 00:15:01.931 { 00:15:01.931 "name": "3ea0aebf-3579-4d73-a0b3-08cbb87fa762", 00:15:01.931 "aliases": [ 00:15:01.931 "lvs_test1/lvol_test_1_1" 00:15:01.931 ], 00:15:01.931 "product_name": "Logical Volume", 00:15:01.931 "block_size": 512, 00:15:01.931 "num_blocks": 57344, 00:15:01.931 "uuid": "3ea0aebf-3579-4d73-a0b3-08cbb87fa762", 00:15:01.931 "assigned_rate_limits": { 00:15:01.931 "rw_ios_per_sec": 0, 00:15:01.931 "rw_mbytes_per_sec": 0, 00:15:01.931 "r_mbytes_per_sec": 0, 00:15:01.931 "w_mbytes_per_sec": 0 00:15:01.931 }, 00:15:01.931 "claimed": false, 00:15:01.931 "zoned": false, 00:15:01.931 "supported_io_types": { 00:15:01.931 "read": true, 00:15:01.931 "write": true, 00:15:01.931 "unmap": true, 00:15:01.931 "write_zeroes": true, 00:15:01.931 "flush": false, 00:15:01.931 "reset": true, 00:15:01.931 "compare": false, 00:15:01.931 "compare_and_write": false, 00:15:01.931 "abort": false, 00:15:01.931 "nvme_admin": false, 00:15:01.931 "nvme_io": false 00:15:01.931 }, 00:15:01.931 "memory_domains": [ 00:15:01.931 { 00:15:01.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.932 "dma_device_type": 2 00:15:01.932 } 00:15:01.932 ], 00:15:01.932 "driver_specific": { 00:15:01.932 "lvol": { 00:15:01.932 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:01.932 "base_bdev": "Malloc1", 00:15:01.932 "thin_provision": false, 00:15:01.932 "snapshot": false, 00:15:01.932 "clone": false, 00:15:01.932 "esnap_clone": false 00:15:01.932 } 00:15:01.932 } 00:15:01.932 } 00:15:01.932 ]' 00:15:01.932 12:34:44 -- lvol/rename.sh@119 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:01.932 12:34:44 -- lvol/rename.sh@119 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:01.932 12:34:44 -- lvol/rename.sh@120 -- # jq -r '.[0].block_size' 00:15:02.191 12:34:44 -- lvol/rename.sh@120 -- # '[' 512 = 512 ']' 00:15:02.191 12:34:44 -- lvol/rename.sh@121 -- # jq -r '.[0].num_blocks' 00:15:02.191 12:34:44 -- lvol/rename.sh@121 -- # '[' 57344 = 57344 ']' 00:15:02.191 12:34:44 -- lvol/rename.sh@122 -- # jq '.[0].aliases|sort' 00:15:02.191 12:34:44 -- lvol/rename.sh@122 -- # jq '.|sort' 00:15:02.191 12:34:44 -- lvol/rename.sh@122 -- # '[' '[ 00:15:02.191 "lvs_test1/lvol_test_1_1" 00:15:02.191 ]' = '[ 00:15:02.191 "lvs_test1/lvol_test_1_1" 00:15:02.191 ]' ']' 00:15:02.191 12:34:44 -- lvol/rename.sh@123 -- # bdev_uuids_1+=("$lvol_uuid") 00:15:02.191 12:34:44 -- lvol/rename.sh@125 -- # rpc_cmd bdev_lvol_create -u d37b74da-27f3-4887-89f7-036b5d8ed02f lvol_test_2_1 28 00:15:02.191 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.191 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:02.191 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.191 12:34:44 -- lvol/rename.sh@125 -- # lvol_uuid=c144e15b-2c8f-4b33-aeee-a0b7414f694d 00:15:02.191 12:34:44 -- lvol/rename.sh@126 -- # rpc_cmd bdev_get_bdevs -b c144e15b-2c8f-4b33-aeee-a0b7414f694d 00:15:02.191 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.191 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:02.191 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.191 12:34:44 -- lvol/rename.sh@126 -- # lvol='[ 00:15:02.191 { 00:15:02.191 "name": "c144e15b-2c8f-4b33-aeee-a0b7414f694d", 00:15:02.191 "aliases": [ 00:15:02.191 "lvs_test2/lvol_test_2_1" 
00:15:02.191 ], 00:15:02.191 "product_name": "Logical Volume", 00:15:02.191 "block_size": 512, 00:15:02.191 "num_blocks": 57344, 00:15:02.191 "uuid": "c144e15b-2c8f-4b33-aeee-a0b7414f694d", 00:15:02.191 "assigned_rate_limits": { 00:15:02.191 "rw_ios_per_sec": 0, 00:15:02.191 "rw_mbytes_per_sec": 0, 00:15:02.191 "r_mbytes_per_sec": 0, 00:15:02.191 "w_mbytes_per_sec": 0 00:15:02.191 }, 00:15:02.191 "claimed": false, 00:15:02.191 "zoned": false, 00:15:02.191 "supported_io_types": { 00:15:02.191 "read": true, 00:15:02.191 "write": true, 00:15:02.191 "unmap": true, 00:15:02.191 "write_zeroes": true, 00:15:02.191 "flush": false, 00:15:02.191 "reset": true, 00:15:02.191 "compare": false, 00:15:02.191 "compare_and_write": false, 00:15:02.191 "abort": false, 00:15:02.191 "nvme_admin": false, 00:15:02.191 "nvme_io": false 00:15:02.191 }, 00:15:02.191 "memory_domains": [ 00:15:02.191 { 00:15:02.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.191 "dma_device_type": 2 00:15:02.191 } 00:15:02.191 ], 00:15:02.191 "driver_specific": { 00:15:02.191 "lvol": { 00:15:02.191 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:02.191 "base_bdev": "Malloc2", 00:15:02.191 "thin_provision": false, 00:15:02.191 "snapshot": false, 00:15:02.191 "clone": false, 00:15:02.191 "esnap_clone": false 00:15:02.191 } 00:15:02.191 } 00:15:02.191 } 00:15:02.191 ]' 00:15:02.191 12:34:44 -- lvol/rename.sh@127 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:02.451 12:34:44 -- lvol/rename.sh@127 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:02.451 12:34:44 -- lvol/rename.sh@128 -- # jq -r '.[0].block_size' 00:15:02.451 12:34:44 -- lvol/rename.sh@128 -- # '[' 512 = 512 ']' 00:15:02.451 12:34:44 -- lvol/rename.sh@129 -- # jq -r '.[0].num_blocks' 00:15:02.451 12:34:44 -- lvol/rename.sh@129 -- # '[' 57344 = 57344 ']' 00:15:02.451 12:34:44 -- lvol/rename.sh@130 -- # jq '.[0].aliases|sort' 00:15:02.451 12:34:44 -- lvol/rename.sh@130 -- # jq '.|sort' 00:15:02.451 12:34:44 -- lvol/rename.sh@130 -- # '[' '[ 00:15:02.451 "lvs_test2/lvol_test_2_1" 00:15:02.451 ]' = '[ 00:15:02.451 "lvs_test2/lvol_test_2_1" 00:15:02.451 ]' ']' 00:15:02.451 12:34:44 -- lvol/rename.sh@131 -- # bdev_uuids_2+=("$lvol_uuid") 00:15:02.451 12:34:44 -- lvol/rename.sh@116 -- # for i in "${!bdev_names_1[@]}" 00:15:02.451 12:34:44 -- lvol/rename.sh@117 -- # rpc_cmd bdev_lvol_create -u 4eccc2a1-fe4d-4d89-b409-617c1346afc0 lvol_test_1_2 28 00:15:02.451 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.451 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:02.451 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.451 12:34:44 -- lvol/rename.sh@117 -- # lvol_uuid=91f10096-0569-4af9-8475-c3844f99649b 00:15:02.451 12:34:44 -- lvol/rename.sh@118 -- # rpc_cmd bdev_get_bdevs -b 91f10096-0569-4af9-8475-c3844f99649b 00:15:02.451 12:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.451 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:02.451 12:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.451 12:34:44 -- lvol/rename.sh@118 -- # lvol='[ 00:15:02.451 { 00:15:02.451 "name": "91f10096-0569-4af9-8475-c3844f99649b", 00:15:02.451 "aliases": [ 00:15:02.451 "lvs_test1/lvol_test_1_2" 00:15:02.451 ], 00:15:02.451 "product_name": "Logical Volume", 00:15:02.451 "block_size": 512, 00:15:02.451 "num_blocks": 57344, 00:15:02.451 "uuid": "91f10096-0569-4af9-8475-c3844f99649b", 00:15:02.451 "assigned_rate_limits": { 
00:15:02.451 "rw_ios_per_sec": 0, 00:15:02.451 "rw_mbytes_per_sec": 0, 00:15:02.451 "r_mbytes_per_sec": 0, 00:15:02.451 "w_mbytes_per_sec": 0 00:15:02.451 }, 00:15:02.451 "claimed": false, 00:15:02.451 "zoned": false, 00:15:02.451 "supported_io_types": { 00:15:02.451 "read": true, 00:15:02.451 "write": true, 00:15:02.451 "unmap": true, 00:15:02.451 "write_zeroes": true, 00:15:02.451 "flush": false, 00:15:02.451 "reset": true, 00:15:02.451 "compare": false, 00:15:02.451 "compare_and_write": false, 00:15:02.451 "abort": false, 00:15:02.451 "nvme_admin": false, 00:15:02.451 "nvme_io": false 00:15:02.451 }, 00:15:02.451 "memory_domains": [ 00:15:02.451 { 00:15:02.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.451 "dma_device_type": 2 00:15:02.451 } 00:15:02.451 ], 00:15:02.451 "driver_specific": { 00:15:02.451 "lvol": { 00:15:02.451 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:02.451 "base_bdev": "Malloc1", 00:15:02.451 "thin_provision": false, 00:15:02.451 "snapshot": false, 00:15:02.451 "clone": false, 00:15:02.451 "esnap_clone": false 00:15:02.451 } 00:15:02.451 } 00:15:02.451 } 00:15:02.451 ]' 00:15:02.451 12:34:44 -- lvol/rename.sh@119 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:02.711 12:34:45 -- lvol/rename.sh@119 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:02.711 12:34:45 -- lvol/rename.sh@120 -- # jq -r '.[0].block_size' 00:15:02.711 12:34:45 -- lvol/rename.sh@120 -- # '[' 512 = 512 ']' 00:15:02.711 12:34:45 -- lvol/rename.sh@121 -- # jq -r '.[0].num_blocks' 00:15:02.711 12:34:45 -- lvol/rename.sh@121 -- # '[' 57344 = 57344 ']' 00:15:02.711 12:34:45 -- lvol/rename.sh@122 -- # jq '.[0].aliases|sort' 00:15:02.711 12:34:45 -- lvol/rename.sh@122 -- # jq '.|sort' 00:15:02.711 12:34:45 -- lvol/rename.sh@122 -- # '[' '[ 00:15:02.711 "lvs_test1/lvol_test_1_2" 00:15:02.711 ]' = '[ 00:15:02.711 "lvs_test1/lvol_test_1_2" 00:15:02.711 ]' ']' 00:15:02.711 12:34:45 -- lvol/rename.sh@123 -- # bdev_uuids_1+=("$lvol_uuid") 00:15:02.711 12:34:45 -- lvol/rename.sh@125 -- # rpc_cmd bdev_lvol_create -u d37b74da-27f3-4887-89f7-036b5d8ed02f lvol_test_2_2 28 00:15:02.711 12:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.711 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:02.711 12:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.711 12:34:45 -- lvol/rename.sh@125 -- # lvol_uuid=5229ed6f-fa3d-4037-87e1-29def641761c 00:15:02.711 12:34:45 -- lvol/rename.sh@126 -- # rpc_cmd bdev_get_bdevs -b 5229ed6f-fa3d-4037-87e1-29def641761c 00:15:02.711 12:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.711 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:02.970 12:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.970 12:34:45 -- lvol/rename.sh@126 -- # lvol='[ 00:15:02.970 { 00:15:02.970 "name": "5229ed6f-fa3d-4037-87e1-29def641761c", 00:15:02.970 "aliases": [ 00:15:02.970 "lvs_test2/lvol_test_2_2" 00:15:02.970 ], 00:15:02.970 "product_name": "Logical Volume", 00:15:02.970 "block_size": 512, 00:15:02.970 "num_blocks": 57344, 00:15:02.970 "uuid": "5229ed6f-fa3d-4037-87e1-29def641761c", 00:15:02.970 "assigned_rate_limits": { 00:15:02.970 "rw_ios_per_sec": 0, 00:15:02.970 "rw_mbytes_per_sec": 0, 00:15:02.970 "r_mbytes_per_sec": 0, 00:15:02.970 "w_mbytes_per_sec": 0 00:15:02.970 }, 00:15:02.970 "claimed": false, 00:15:02.970 "zoned": false, 00:15:02.970 "supported_io_types": { 00:15:02.970 "read": true, 00:15:02.970 "write": true, 
00:15:02.970 "unmap": true, 00:15:02.970 "write_zeroes": true, 00:15:02.970 "flush": false, 00:15:02.970 "reset": true, 00:15:02.970 "compare": false, 00:15:02.970 "compare_and_write": false, 00:15:02.970 "abort": false, 00:15:02.970 "nvme_admin": false, 00:15:02.970 "nvme_io": false 00:15:02.970 }, 00:15:02.970 "memory_domains": [ 00:15:02.970 { 00:15:02.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.970 "dma_device_type": 2 00:15:02.970 } 00:15:02.970 ], 00:15:02.970 "driver_specific": { 00:15:02.970 "lvol": { 00:15:02.970 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:02.970 "base_bdev": "Malloc2", 00:15:02.970 "thin_provision": false, 00:15:02.970 "snapshot": false, 00:15:02.970 "clone": false, 00:15:02.970 "esnap_clone": false 00:15:02.970 } 00:15:02.970 } 00:15:02.970 } 00:15:02.970 ]' 00:15:02.970 12:34:45 -- lvol/rename.sh@127 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:02.970 12:34:45 -- lvol/rename.sh@127 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:02.970 12:34:45 -- lvol/rename.sh@128 -- # jq -r '.[0].block_size' 00:15:02.970 12:34:45 -- lvol/rename.sh@128 -- # '[' 512 = 512 ']' 00:15:02.970 12:34:45 -- lvol/rename.sh@129 -- # jq -r '.[0].num_blocks' 00:15:02.970 12:34:45 -- lvol/rename.sh@129 -- # '[' 57344 = 57344 ']' 00:15:02.970 12:34:45 -- lvol/rename.sh@130 -- # jq '.[0].aliases|sort' 00:15:02.970 12:34:45 -- lvol/rename.sh@130 -- # jq '.|sort' 00:15:03.229 12:34:45 -- lvol/rename.sh@130 -- # '[' '[ 00:15:03.229 "lvs_test2/lvol_test_2_2" 00:15:03.229 ]' = '[ 00:15:03.229 "lvs_test2/lvol_test_2_2" 00:15:03.229 ]' ']' 00:15:03.229 12:34:45 -- lvol/rename.sh@131 -- # bdev_uuids_2+=("$lvol_uuid") 00:15:03.229 12:34:45 -- lvol/rename.sh@116 -- # for i in "${!bdev_names_1[@]}" 00:15:03.229 12:34:45 -- lvol/rename.sh@117 -- # rpc_cmd bdev_lvol_create -u 4eccc2a1-fe4d-4d89-b409-617c1346afc0 lvol_test_1_3 28 00:15:03.229 12:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.229 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:03.229 12:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.229 12:34:45 -- lvol/rename.sh@117 -- # lvol_uuid=65868af8-e6e1-4560-8dc9-433aa0cba704 00:15:03.229 12:34:45 -- lvol/rename.sh@118 -- # rpc_cmd bdev_get_bdevs -b 65868af8-e6e1-4560-8dc9-433aa0cba704 00:15:03.229 12:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.229 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:03.229 12:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.229 12:34:45 -- lvol/rename.sh@118 -- # lvol='[ 00:15:03.229 { 00:15:03.229 "name": "65868af8-e6e1-4560-8dc9-433aa0cba704", 00:15:03.229 "aliases": [ 00:15:03.229 "lvs_test1/lvol_test_1_3" 00:15:03.229 ], 00:15:03.229 "product_name": "Logical Volume", 00:15:03.229 "block_size": 512, 00:15:03.229 "num_blocks": 57344, 00:15:03.229 "uuid": "65868af8-e6e1-4560-8dc9-433aa0cba704", 00:15:03.230 "assigned_rate_limits": { 00:15:03.230 "rw_ios_per_sec": 0, 00:15:03.230 "rw_mbytes_per_sec": 0, 00:15:03.230 "r_mbytes_per_sec": 0, 00:15:03.230 "w_mbytes_per_sec": 0 00:15:03.230 }, 00:15:03.230 "claimed": false, 00:15:03.230 "zoned": false, 00:15:03.230 "supported_io_types": { 00:15:03.230 "read": true, 00:15:03.230 "write": true, 00:15:03.230 "unmap": true, 00:15:03.230 "write_zeroes": true, 00:15:03.230 "flush": false, 00:15:03.230 "reset": true, 00:15:03.230 "compare": false, 00:15:03.230 "compare_and_write": false, 00:15:03.230 "abort": false, 00:15:03.230 
"nvme_admin": false, 00:15:03.230 "nvme_io": false 00:15:03.230 }, 00:15:03.230 "memory_domains": [ 00:15:03.230 { 00:15:03.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.230 "dma_device_type": 2 00:15:03.230 } 00:15:03.230 ], 00:15:03.230 "driver_specific": { 00:15:03.230 "lvol": { 00:15:03.230 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:03.230 "base_bdev": "Malloc1", 00:15:03.230 "thin_provision": false, 00:15:03.230 "snapshot": false, 00:15:03.230 "clone": false, 00:15:03.230 "esnap_clone": false 00:15:03.230 } 00:15:03.230 } 00:15:03.230 } 00:15:03.230 ]' 00:15:03.230 12:34:45 -- lvol/rename.sh@119 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:03.230 12:34:45 -- lvol/rename.sh@119 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:03.230 12:34:45 -- lvol/rename.sh@120 -- # jq -r '.[0].block_size' 00:15:03.230 12:34:45 -- lvol/rename.sh@120 -- # '[' 512 = 512 ']' 00:15:03.230 12:34:45 -- lvol/rename.sh@121 -- # jq -r '.[0].num_blocks' 00:15:03.230 12:34:45 -- lvol/rename.sh@121 -- # '[' 57344 = 57344 ']' 00:15:03.230 12:34:45 -- lvol/rename.sh@122 -- # jq '.[0].aliases|sort' 00:15:03.488 12:34:45 -- lvol/rename.sh@122 -- # jq '.|sort' 00:15:03.488 12:34:45 -- lvol/rename.sh@122 -- # '[' '[ 00:15:03.488 "lvs_test1/lvol_test_1_3" 00:15:03.488 ]' = '[ 00:15:03.488 "lvs_test1/lvol_test_1_3" 00:15:03.488 ]' ']' 00:15:03.488 12:34:45 -- lvol/rename.sh@123 -- # bdev_uuids_1+=("$lvol_uuid") 00:15:03.488 12:34:45 -- lvol/rename.sh@125 -- # rpc_cmd bdev_lvol_create -u d37b74da-27f3-4887-89f7-036b5d8ed02f lvol_test_2_3 28 00:15:03.488 12:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.488 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:03.488 12:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.488 12:34:45 -- lvol/rename.sh@125 -- # lvol_uuid=b4cce4bc-b24c-4340-b804-73979da83cf9 00:15:03.488 12:34:45 -- lvol/rename.sh@126 -- # rpc_cmd bdev_get_bdevs -b b4cce4bc-b24c-4340-b804-73979da83cf9 00:15:03.488 12:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.488 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:03.488 12:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.488 12:34:45 -- lvol/rename.sh@126 -- # lvol='[ 00:15:03.488 { 00:15:03.488 "name": "b4cce4bc-b24c-4340-b804-73979da83cf9", 00:15:03.488 "aliases": [ 00:15:03.488 "lvs_test2/lvol_test_2_3" 00:15:03.488 ], 00:15:03.488 "product_name": "Logical Volume", 00:15:03.488 "block_size": 512, 00:15:03.488 "num_blocks": 57344, 00:15:03.488 "uuid": "b4cce4bc-b24c-4340-b804-73979da83cf9", 00:15:03.488 "assigned_rate_limits": { 00:15:03.488 "rw_ios_per_sec": 0, 00:15:03.488 "rw_mbytes_per_sec": 0, 00:15:03.488 "r_mbytes_per_sec": 0, 00:15:03.488 "w_mbytes_per_sec": 0 00:15:03.488 }, 00:15:03.488 "claimed": false, 00:15:03.488 "zoned": false, 00:15:03.488 "supported_io_types": { 00:15:03.488 "read": true, 00:15:03.488 "write": true, 00:15:03.488 "unmap": true, 00:15:03.488 "write_zeroes": true, 00:15:03.488 "flush": false, 00:15:03.488 "reset": true, 00:15:03.488 "compare": false, 00:15:03.488 "compare_and_write": false, 00:15:03.488 "abort": false, 00:15:03.488 "nvme_admin": false, 00:15:03.488 "nvme_io": false 00:15:03.488 }, 00:15:03.488 "memory_domains": [ 00:15:03.488 { 00:15:03.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.488 "dma_device_type": 2 00:15:03.488 } 00:15:03.488 ], 00:15:03.488 "driver_specific": { 00:15:03.488 "lvol": { 00:15:03.488 
"lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:03.488 "base_bdev": "Malloc2", 00:15:03.488 "thin_provision": false, 00:15:03.488 "snapshot": false, 00:15:03.488 "clone": false, 00:15:03.488 "esnap_clone": false 00:15:03.488 } 00:15:03.488 } 00:15:03.488 } 00:15:03.489 ]' 00:15:03.489 12:34:45 -- lvol/rename.sh@127 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:03.489 12:34:45 -- lvol/rename.sh@127 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:03.489 12:34:45 -- lvol/rename.sh@128 -- # jq -r '.[0].block_size' 00:15:03.489 12:34:45 -- lvol/rename.sh@128 -- # '[' 512 = 512 ']' 00:15:03.489 12:34:45 -- lvol/rename.sh@129 -- # jq -r '.[0].num_blocks' 00:15:03.489 12:34:46 -- lvol/rename.sh@129 -- # '[' 57344 = 57344 ']' 00:15:03.489 12:34:46 -- lvol/rename.sh@130 -- # jq '.[0].aliases|sort' 00:15:03.748 12:34:46 -- lvol/rename.sh@130 -- # jq '.|sort' 00:15:03.748 12:34:46 -- lvol/rename.sh@130 -- # '[' '[ 00:15:03.748 "lvs_test2/lvol_test_2_3" 00:15:03.748 ]' = '[ 00:15:03.748 "lvs_test2/lvol_test_2_3" 00:15:03.748 ]' ']' 00:15:03.748 12:34:46 -- lvol/rename.sh@131 -- # bdev_uuids_2+=("$lvol_uuid") 00:15:03.748 12:34:46 -- lvol/rename.sh@136 -- # rpc_cmd bdev_lvol_rename_lvstore lvs_test1 lvs_test2 00:15:03.748 12:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.748 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:03.748 request: 00:15:03.748 { 00:15:03.748 "old_name": "lvs_test1", 00:15:03.748 "new_name": "lvs_test2", 00:15:03.748 "method": "bdev_lvol_rename_lvstore", 00:15:03.748 "req_id": 1 00:15:03.748 } 00:15:03.748 Got JSON-RPC error response 00:15:03.748 response: 00:15:03.748 { 00:15:03.748 "code": -32602, 00:15:03.748 "message": "File exists" 00:15:03.748 } 00:15:03.748 12:34:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:03.748 12:34:46 -- lvol/rename.sh@139 -- # rpc_cmd bdev_lvol_get_lvstores -u 4eccc2a1-fe4d-4d89-b409-617c1346afc0 00:15:03.748 12:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.748 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:03.748 12:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.748 12:34:46 -- lvol/rename.sh@139 -- # lvs='[ 00:15:03.748 { 00:15:03.748 "uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:03.748 "name": "lvs_test1", 00:15:03.748 "base_bdev": "Malloc1", 00:15:03.748 "total_data_clusters": 31, 00:15:03.748 "free_clusters": 3, 00:15:03.748 "block_size": 512, 00:15:03.748 "cluster_size": 4194304 00:15:03.748 } 00:15:03.748 ]' 00:15:03.748 12:34:46 -- lvol/rename.sh@140 -- # jq -r '.[0].uuid' 00:15:03.748 12:34:46 -- lvol/rename.sh@140 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:03.748 12:34:46 -- lvol/rename.sh@141 -- # jq -r '.[0].name' 00:15:03.748 12:34:46 -- lvol/rename.sh@141 -- # '[' lvs_test1 = lvs_test1 ']' 00:15:03.748 12:34:46 -- lvol/rename.sh@142 -- # jq -r '.[0].base_bdev' 00:15:04.007 12:34:46 -- lvol/rename.sh@142 -- # '[' Malloc1 = Malloc1 ']' 00:15:04.007 12:34:46 -- lvol/rename.sh@143 -- # jq -r '.[0].cluster_size' 00:15:04.007 12:34:46 -- lvol/rename.sh@143 -- # '[' 4194304 = 4194304 ']' 00:15:04.007 12:34:46 -- lvol/rename.sh@144 -- # rpc_cmd bdev_lvol_get_lvstores -u d37b74da-27f3-4887-89f7-036b5d8ed02f 00:15:04.007 12:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.007 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:04.007 12:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:15:04.007 12:34:46 -- lvol/rename.sh@144 -- # lvs='[ 00:15:04.007 { 00:15:04.007 "uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:04.007 "name": "lvs_test2", 00:15:04.007 "base_bdev": "Malloc2", 00:15:04.007 "total_data_clusters": 31, 00:15:04.007 "free_clusters": 3, 00:15:04.007 "block_size": 512, 00:15:04.007 "cluster_size": 4194304 00:15:04.007 } 00:15:04.007 ]' 00:15:04.007 12:34:46 -- lvol/rename.sh@145 -- # jq -r '.[0].uuid' 00:15:04.007 12:34:46 -- lvol/rename.sh@145 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:04.007 12:34:46 -- lvol/rename.sh@146 -- # jq -r '.[0].name' 00:15:04.007 12:34:46 -- lvol/rename.sh@146 -- # '[' lvs_test2 = lvs_test2 ']' 00:15:04.007 12:34:46 -- lvol/rename.sh@147 -- # jq -r '.[0].base_bdev' 00:15:04.007 12:34:46 -- lvol/rename.sh@147 -- # '[' Malloc2 = Malloc2 ']' 00:15:04.007 12:34:46 -- lvol/rename.sh@148 -- # jq -r '.[0].cluster_size' 00:15:04.266 12:34:46 -- lvol/rename.sh@148 -- # '[' 4194304 = 4194304 ']' 00:15:04.266 12:34:46 -- lvol/rename.sh@150 -- # for i in "${!bdev_uuids_1[@]}" 00:15:04.266 12:34:46 -- lvol/rename.sh@151 -- # rpc_cmd bdev_get_bdevs -b 26f7a932-0971-433f-aed1-c2c5fd44fed6 00:15:04.266 12:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.266 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:04.266 12:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.266 12:34:46 -- lvol/rename.sh@151 -- # lvol='[ 00:15:04.266 { 00:15:04.266 "name": "26f7a932-0971-433f-aed1-c2c5fd44fed6", 00:15:04.266 "aliases": [ 00:15:04.266 "lvs_test1/lvol_test_1_0" 00:15:04.266 ], 00:15:04.266 "product_name": "Logical Volume", 00:15:04.266 "block_size": 512, 00:15:04.266 "num_blocks": 57344, 00:15:04.266 "uuid": "26f7a932-0971-433f-aed1-c2c5fd44fed6", 00:15:04.266 "assigned_rate_limits": { 00:15:04.266 "rw_ios_per_sec": 0, 00:15:04.266 "rw_mbytes_per_sec": 0, 00:15:04.266 "r_mbytes_per_sec": 0, 00:15:04.266 "w_mbytes_per_sec": 0 00:15:04.266 }, 00:15:04.266 "claimed": false, 00:15:04.266 "zoned": false, 00:15:04.266 "supported_io_types": { 00:15:04.266 "read": true, 00:15:04.266 "write": true, 00:15:04.266 "unmap": true, 00:15:04.266 "write_zeroes": true, 00:15:04.266 "flush": false, 00:15:04.266 "reset": true, 00:15:04.266 "compare": false, 00:15:04.266 "compare_and_write": false, 00:15:04.266 "abort": false, 00:15:04.266 "nvme_admin": false, 00:15:04.266 "nvme_io": false 00:15:04.266 }, 00:15:04.266 "memory_domains": [ 00:15:04.266 { 00:15:04.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.266 "dma_device_type": 2 00:15:04.266 } 00:15:04.266 ], 00:15:04.266 "driver_specific": { 00:15:04.266 "lvol": { 00:15:04.266 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:04.266 "base_bdev": "Malloc1", 00:15:04.266 "thin_provision": false, 00:15:04.266 "snapshot": false, 00:15:04.266 "clone": false, 00:15:04.266 "esnap_clone": false 00:15:04.266 } 00:15:04.266 } 00:15:04.266 } 00:15:04.266 ]' 00:15:04.266 12:34:46 -- lvol/rename.sh@152 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:04.266 12:34:46 -- lvol/rename.sh@152 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:04.266 12:34:46 -- lvol/rename.sh@153 -- # jq -r '.[0].block_size' 00:15:04.266 12:34:46 -- lvol/rename.sh@153 -- # '[' 512 = 512 ']' 00:15:04.266 12:34:46 -- lvol/rename.sh@154 -- # jq -r '.[0].num_blocks' 00:15:04.266 12:34:46 -- lvol/rename.sh@154 -- # '[' 57344 = 57344 ']' 00:15:04.266 12:34:46 -- 
lvol/rename.sh@155 -- # jq '.[0].aliases|sort' 00:15:04.266 12:34:46 -- lvol/rename.sh@155 -- # jq '.|sort' 00:15:04.526 12:34:46 -- lvol/rename.sh@155 -- # '[' '[ 00:15:04.526 "lvs_test1/lvol_test_1_0" 00:15:04.526 ]' = '[ 00:15:04.526 "lvs_test1/lvol_test_1_0" 00:15:04.526 ]' ']' 00:15:04.526 12:34:46 -- lvol/rename.sh@157 -- # rpc_cmd bdev_get_bdevs -b 49ed579c-240d-48e6-89cd-ebc5fbbce087 00:15:04.526 12:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.526 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:04.526 12:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.526 12:34:46 -- lvol/rename.sh@157 -- # lvol='[ 00:15:04.526 { 00:15:04.526 "name": "49ed579c-240d-48e6-89cd-ebc5fbbce087", 00:15:04.526 "aliases": [ 00:15:04.526 "lvs_test2/lvol_test_2_0" 00:15:04.526 ], 00:15:04.526 "product_name": "Logical Volume", 00:15:04.526 "block_size": 512, 00:15:04.526 "num_blocks": 57344, 00:15:04.526 "uuid": "49ed579c-240d-48e6-89cd-ebc5fbbce087", 00:15:04.526 "assigned_rate_limits": { 00:15:04.526 "rw_ios_per_sec": 0, 00:15:04.526 "rw_mbytes_per_sec": 0, 00:15:04.526 "r_mbytes_per_sec": 0, 00:15:04.526 "w_mbytes_per_sec": 0 00:15:04.526 }, 00:15:04.526 "claimed": false, 00:15:04.526 "zoned": false, 00:15:04.526 "supported_io_types": { 00:15:04.526 "read": true, 00:15:04.526 "write": true, 00:15:04.526 "unmap": true, 00:15:04.526 "write_zeroes": true, 00:15:04.526 "flush": false, 00:15:04.526 "reset": true, 00:15:04.526 "compare": false, 00:15:04.526 "compare_and_write": false, 00:15:04.526 "abort": false, 00:15:04.526 "nvme_admin": false, 00:15:04.526 "nvme_io": false 00:15:04.526 }, 00:15:04.526 "memory_domains": [ 00:15:04.526 { 00:15:04.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.526 "dma_device_type": 2 00:15:04.526 } 00:15:04.526 ], 00:15:04.526 "driver_specific": { 00:15:04.526 "lvol": { 00:15:04.526 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:04.526 "base_bdev": "Malloc2", 00:15:04.526 "thin_provision": false, 00:15:04.526 "snapshot": false, 00:15:04.526 "clone": false, 00:15:04.526 "esnap_clone": false 00:15:04.526 } 00:15:04.526 } 00:15:04.526 } 00:15:04.526 ]' 00:15:04.526 12:34:46 -- lvol/rename.sh@158 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:04.526 12:34:46 -- lvol/rename.sh@158 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:04.526 12:34:46 -- lvol/rename.sh@159 -- # jq -r '.[0].block_size' 00:15:04.526 12:34:46 -- lvol/rename.sh@159 -- # '[' 512 = 512 ']' 00:15:04.526 12:34:46 -- lvol/rename.sh@160 -- # jq -r '.[0].num_blocks' 00:15:04.526 12:34:46 -- lvol/rename.sh@160 -- # '[' 57344 = 57344 ']' 00:15:04.526 12:34:46 -- lvol/rename.sh@161 -- # jq '.[0].aliases|sort' 00:15:04.785 12:34:47 -- lvol/rename.sh@161 -- # jq '.|sort' 00:15:04.785 12:34:47 -- lvol/rename.sh@161 -- # '[' '[ 00:15:04.785 "lvs_test2/lvol_test_2_0" 00:15:04.785 ]' = '[ 00:15:04.785 "lvs_test2/lvol_test_2_0" 00:15:04.785 ]' ']' 00:15:04.785 12:34:47 -- lvol/rename.sh@150 -- # for i in "${!bdev_uuids_1[@]}" 00:15:04.785 12:34:47 -- lvol/rename.sh@151 -- # rpc_cmd bdev_get_bdevs -b 3ea0aebf-3579-4d73-a0b3-08cbb87fa762 00:15:04.785 12:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.785 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 12:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.785 12:34:47 -- lvol/rename.sh@151 -- # lvol='[ 00:15:04.785 { 00:15:04.785 "name": "3ea0aebf-3579-4d73-a0b3-08cbb87fa762", 00:15:04.785 
"aliases": [ 00:15:04.785 "lvs_test1/lvol_test_1_1" 00:15:04.785 ], 00:15:04.785 "product_name": "Logical Volume", 00:15:04.785 "block_size": 512, 00:15:04.785 "num_blocks": 57344, 00:15:04.785 "uuid": "3ea0aebf-3579-4d73-a0b3-08cbb87fa762", 00:15:04.785 "assigned_rate_limits": { 00:15:04.785 "rw_ios_per_sec": 0, 00:15:04.785 "rw_mbytes_per_sec": 0, 00:15:04.785 "r_mbytes_per_sec": 0, 00:15:04.785 "w_mbytes_per_sec": 0 00:15:04.785 }, 00:15:04.785 "claimed": false, 00:15:04.785 "zoned": false, 00:15:04.785 "supported_io_types": { 00:15:04.785 "read": true, 00:15:04.785 "write": true, 00:15:04.785 "unmap": true, 00:15:04.785 "write_zeroes": true, 00:15:04.785 "flush": false, 00:15:04.785 "reset": true, 00:15:04.785 "compare": false, 00:15:04.785 "compare_and_write": false, 00:15:04.785 "abort": false, 00:15:04.785 "nvme_admin": false, 00:15:04.785 "nvme_io": false 00:15:04.785 }, 00:15:04.785 "memory_domains": [ 00:15:04.785 { 00:15:04.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.785 "dma_device_type": 2 00:15:04.785 } 00:15:04.785 ], 00:15:04.785 "driver_specific": { 00:15:04.785 "lvol": { 00:15:04.785 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:04.785 "base_bdev": "Malloc1", 00:15:04.785 "thin_provision": false, 00:15:04.785 "snapshot": false, 00:15:04.785 "clone": false, 00:15:04.785 "esnap_clone": false 00:15:04.785 } 00:15:04.785 } 00:15:04.785 } 00:15:04.785 ]' 00:15:04.785 12:34:47 -- lvol/rename.sh@152 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:04.785 12:34:47 -- lvol/rename.sh@152 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:04.785 12:34:47 -- lvol/rename.sh@153 -- # jq -r '.[0].block_size' 00:15:04.785 12:34:47 -- lvol/rename.sh@153 -- # '[' 512 = 512 ']' 00:15:04.785 12:34:47 -- lvol/rename.sh@154 -- # jq -r '.[0].num_blocks' 00:15:04.785 12:34:47 -- lvol/rename.sh@154 -- # '[' 57344 = 57344 ']' 00:15:04.785 12:34:47 -- lvol/rename.sh@155 -- # jq '.[0].aliases|sort' 00:15:05.050 12:34:47 -- lvol/rename.sh@155 -- # jq '.|sort' 00:15:05.050 12:34:47 -- lvol/rename.sh@155 -- # '[' '[ 00:15:05.050 "lvs_test1/lvol_test_1_1" 00:15:05.050 ]' = '[ 00:15:05.050 "lvs_test1/lvol_test_1_1" 00:15:05.050 ]' ']' 00:15:05.050 12:34:47 -- lvol/rename.sh@157 -- # rpc_cmd bdev_get_bdevs -b c144e15b-2c8f-4b33-aeee-a0b7414f694d 00:15:05.050 12:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.050 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:05.050 12:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.050 12:34:47 -- lvol/rename.sh@157 -- # lvol='[ 00:15:05.050 { 00:15:05.050 "name": "c144e15b-2c8f-4b33-aeee-a0b7414f694d", 00:15:05.050 "aliases": [ 00:15:05.050 "lvs_test2/lvol_test_2_1" 00:15:05.050 ], 00:15:05.050 "product_name": "Logical Volume", 00:15:05.050 "block_size": 512, 00:15:05.050 "num_blocks": 57344, 00:15:05.050 "uuid": "c144e15b-2c8f-4b33-aeee-a0b7414f694d", 00:15:05.050 "assigned_rate_limits": { 00:15:05.050 "rw_ios_per_sec": 0, 00:15:05.050 "rw_mbytes_per_sec": 0, 00:15:05.050 "r_mbytes_per_sec": 0, 00:15:05.050 "w_mbytes_per_sec": 0 00:15:05.050 }, 00:15:05.050 "claimed": false, 00:15:05.050 "zoned": false, 00:15:05.050 "supported_io_types": { 00:15:05.050 "read": true, 00:15:05.050 "write": true, 00:15:05.050 "unmap": true, 00:15:05.050 "write_zeroes": true, 00:15:05.050 "flush": false, 00:15:05.050 "reset": true, 00:15:05.050 "compare": false, 00:15:05.050 "compare_and_write": false, 00:15:05.050 "abort": false, 00:15:05.050 "nvme_admin": 
false, 00:15:05.050 "nvme_io": false 00:15:05.050 }, 00:15:05.050 "memory_domains": [ 00:15:05.050 { 00:15:05.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.050 "dma_device_type": 2 00:15:05.050 } 00:15:05.050 ], 00:15:05.050 "driver_specific": { 00:15:05.050 "lvol": { 00:15:05.050 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:05.050 "base_bdev": "Malloc2", 00:15:05.050 "thin_provision": false, 00:15:05.050 "snapshot": false, 00:15:05.050 "clone": false, 00:15:05.050 "esnap_clone": false 00:15:05.050 } 00:15:05.050 } 00:15:05.050 } 00:15:05.050 ]' 00:15:05.050 12:34:47 -- lvol/rename.sh@158 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:05.050 12:34:47 -- lvol/rename.sh@158 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:05.050 12:34:47 -- lvol/rename.sh@159 -- # jq -r '.[0].block_size' 00:15:05.050 12:34:47 -- lvol/rename.sh@159 -- # '[' 512 = 512 ']' 00:15:05.050 12:34:47 -- lvol/rename.sh@160 -- # jq -r '.[0].num_blocks' 00:15:05.369 12:34:47 -- lvol/rename.sh@160 -- # '[' 57344 = 57344 ']' 00:15:05.369 12:34:47 -- lvol/rename.sh@161 -- # jq '.[0].aliases|sort' 00:15:05.369 12:34:47 -- lvol/rename.sh@161 -- # jq '.|sort' 00:15:05.369 12:34:47 -- lvol/rename.sh@161 -- # '[' '[ 00:15:05.369 "lvs_test2/lvol_test_2_1" 00:15:05.369 ]' = '[ 00:15:05.369 "lvs_test2/lvol_test_2_1" 00:15:05.369 ]' ']' 00:15:05.369 12:34:47 -- lvol/rename.sh@150 -- # for i in "${!bdev_uuids_1[@]}" 00:15:05.369 12:34:47 -- lvol/rename.sh@151 -- # rpc_cmd bdev_get_bdevs -b 91f10096-0569-4af9-8475-c3844f99649b 00:15:05.369 12:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.369 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:05.369 12:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.369 12:34:47 -- lvol/rename.sh@151 -- # lvol='[ 00:15:05.369 { 00:15:05.369 "name": "91f10096-0569-4af9-8475-c3844f99649b", 00:15:05.369 "aliases": [ 00:15:05.369 "lvs_test1/lvol_test_1_2" 00:15:05.369 ], 00:15:05.369 "product_name": "Logical Volume", 00:15:05.369 "block_size": 512, 00:15:05.369 "num_blocks": 57344, 00:15:05.369 "uuid": "91f10096-0569-4af9-8475-c3844f99649b", 00:15:05.369 "assigned_rate_limits": { 00:15:05.369 "rw_ios_per_sec": 0, 00:15:05.369 "rw_mbytes_per_sec": 0, 00:15:05.369 "r_mbytes_per_sec": 0, 00:15:05.369 "w_mbytes_per_sec": 0 00:15:05.369 }, 00:15:05.369 "claimed": false, 00:15:05.369 "zoned": false, 00:15:05.369 "supported_io_types": { 00:15:05.369 "read": true, 00:15:05.369 "write": true, 00:15:05.369 "unmap": true, 00:15:05.369 "write_zeroes": true, 00:15:05.369 "flush": false, 00:15:05.369 "reset": true, 00:15:05.369 "compare": false, 00:15:05.369 "compare_and_write": false, 00:15:05.369 "abort": false, 00:15:05.369 "nvme_admin": false, 00:15:05.369 "nvme_io": false 00:15:05.369 }, 00:15:05.369 "memory_domains": [ 00:15:05.369 { 00:15:05.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.369 "dma_device_type": 2 00:15:05.369 } 00:15:05.369 ], 00:15:05.369 "driver_specific": { 00:15:05.369 "lvol": { 00:15:05.369 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:05.369 "base_bdev": "Malloc1", 00:15:05.369 "thin_provision": false, 00:15:05.369 "snapshot": false, 00:15:05.369 "clone": false, 00:15:05.369 "esnap_clone": false 00:15:05.369 } 00:15:05.369 } 00:15:05.369 } 00:15:05.369 ]' 00:15:05.369 12:34:47 -- lvol/rename.sh@152 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:05.369 12:34:47 -- lvol/rename.sh@152 -- # '[' 
4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:05.369 12:34:47 -- lvol/rename.sh@153 -- # jq -r '.[0].block_size' 00:15:05.369 12:34:47 -- lvol/rename.sh@153 -- # '[' 512 = 512 ']' 00:15:05.369 12:34:47 -- lvol/rename.sh@154 -- # jq -r '.[0].num_blocks' 00:15:05.369 12:34:47 -- lvol/rename.sh@154 -- # '[' 57344 = 57344 ']' 00:15:05.369 12:34:47 -- lvol/rename.sh@155 -- # jq '.[0].aliases|sort' 00:15:05.629 12:34:47 -- lvol/rename.sh@155 -- # jq '.|sort' 00:15:05.629 12:34:47 -- lvol/rename.sh@155 -- # '[' '[ 00:15:05.629 "lvs_test1/lvol_test_1_2" 00:15:05.629 ]' = '[ 00:15:05.629 "lvs_test1/lvol_test_1_2" 00:15:05.629 ]' ']' 00:15:05.629 12:34:47 -- lvol/rename.sh@157 -- # rpc_cmd bdev_get_bdevs -b 5229ed6f-fa3d-4037-87e1-29def641761c 00:15:05.629 12:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.629 12:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:05.629 12:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.629 12:34:47 -- lvol/rename.sh@157 -- # lvol='[ 00:15:05.629 { 00:15:05.629 "name": "5229ed6f-fa3d-4037-87e1-29def641761c", 00:15:05.629 "aliases": [ 00:15:05.629 "lvs_test2/lvol_test_2_2" 00:15:05.629 ], 00:15:05.629 "product_name": "Logical Volume", 00:15:05.629 "block_size": 512, 00:15:05.629 "num_blocks": 57344, 00:15:05.629 "uuid": "5229ed6f-fa3d-4037-87e1-29def641761c", 00:15:05.629 "assigned_rate_limits": { 00:15:05.629 "rw_ios_per_sec": 0, 00:15:05.629 "rw_mbytes_per_sec": 0, 00:15:05.629 "r_mbytes_per_sec": 0, 00:15:05.629 "w_mbytes_per_sec": 0 00:15:05.629 }, 00:15:05.629 "claimed": false, 00:15:05.629 "zoned": false, 00:15:05.629 "supported_io_types": { 00:15:05.629 "read": true, 00:15:05.629 "write": true, 00:15:05.629 "unmap": true, 00:15:05.629 "write_zeroes": true, 00:15:05.629 "flush": false, 00:15:05.629 "reset": true, 00:15:05.629 "compare": false, 00:15:05.629 "compare_and_write": false, 00:15:05.629 "abort": false, 00:15:05.629 "nvme_admin": false, 00:15:05.629 "nvme_io": false 00:15:05.629 }, 00:15:05.629 "memory_domains": [ 00:15:05.629 { 00:15:05.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.629 "dma_device_type": 2 00:15:05.629 } 00:15:05.629 ], 00:15:05.629 "driver_specific": { 00:15:05.629 "lvol": { 00:15:05.629 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:05.629 "base_bdev": "Malloc2", 00:15:05.629 "thin_provision": false, 00:15:05.629 "snapshot": false, 00:15:05.629 "clone": false, 00:15:05.629 "esnap_clone": false 00:15:05.629 } 00:15:05.629 } 00:15:05.629 } 00:15:05.629 ]' 00:15:05.629 12:34:47 -- lvol/rename.sh@158 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:05.629 12:34:48 -- lvol/rename.sh@158 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:05.629 12:34:48 -- lvol/rename.sh@159 -- # jq -r '.[0].block_size' 00:15:05.629 12:34:48 -- lvol/rename.sh@159 -- # '[' 512 = 512 ']' 00:15:05.629 12:34:48 -- lvol/rename.sh@160 -- # jq -r '.[0].num_blocks' 00:15:05.629 12:34:48 -- lvol/rename.sh@160 -- # '[' 57344 = 57344 ']' 00:15:05.629 12:34:48 -- lvol/rename.sh@161 -- # jq '.[0].aliases|sort' 00:15:05.888 12:34:48 -- lvol/rename.sh@161 -- # jq '.|sort' 00:15:05.888 12:34:48 -- lvol/rename.sh@161 -- # '[' '[ 00:15:05.888 "lvs_test2/lvol_test_2_2" 00:15:05.888 ]' = '[ 00:15:05.888 "lvs_test2/lvol_test_2_2" 00:15:05.888 ]' ']' 00:15:05.888 12:34:48 -- lvol/rename.sh@150 -- # for i in "${!bdev_uuids_1[@]}" 00:15:05.888 12:34:48 -- lvol/rename.sh@151 -- # rpc_cmd bdev_get_bdevs -b 
65868af8-e6e1-4560-8dc9-433aa0cba704 00:15:05.888 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.888 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:05.888 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.888 12:34:48 -- lvol/rename.sh@151 -- # lvol='[ 00:15:05.888 { 00:15:05.888 "name": "65868af8-e6e1-4560-8dc9-433aa0cba704", 00:15:05.888 "aliases": [ 00:15:05.888 "lvs_test1/lvol_test_1_3" 00:15:05.888 ], 00:15:05.888 "product_name": "Logical Volume", 00:15:05.888 "block_size": 512, 00:15:05.888 "num_blocks": 57344, 00:15:05.888 "uuid": "65868af8-e6e1-4560-8dc9-433aa0cba704", 00:15:05.888 "assigned_rate_limits": { 00:15:05.888 "rw_ios_per_sec": 0, 00:15:05.888 "rw_mbytes_per_sec": 0, 00:15:05.888 "r_mbytes_per_sec": 0, 00:15:05.888 "w_mbytes_per_sec": 0 00:15:05.888 }, 00:15:05.888 "claimed": false, 00:15:05.888 "zoned": false, 00:15:05.888 "supported_io_types": { 00:15:05.888 "read": true, 00:15:05.888 "write": true, 00:15:05.888 "unmap": true, 00:15:05.888 "write_zeroes": true, 00:15:05.888 "flush": false, 00:15:05.888 "reset": true, 00:15:05.888 "compare": false, 00:15:05.888 "compare_and_write": false, 00:15:05.888 "abort": false, 00:15:05.888 "nvme_admin": false, 00:15:05.888 "nvme_io": false 00:15:05.888 }, 00:15:05.888 "memory_domains": [ 00:15:05.888 { 00:15:05.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.888 "dma_device_type": 2 00:15:05.888 } 00:15:05.888 ], 00:15:05.888 "driver_specific": { 00:15:05.888 "lvol": { 00:15:05.888 "lvol_store_uuid": "4eccc2a1-fe4d-4d89-b409-617c1346afc0", 00:15:05.888 "base_bdev": "Malloc1", 00:15:05.888 "thin_provision": false, 00:15:05.888 "snapshot": false, 00:15:05.888 "clone": false, 00:15:05.888 "esnap_clone": false 00:15:05.888 } 00:15:05.888 } 00:15:05.888 } 00:15:05.888 ]' 00:15:05.888 12:34:48 -- lvol/rename.sh@152 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:05.889 12:34:48 -- lvol/rename.sh@152 -- # '[' 4eccc2a1-fe4d-4d89-b409-617c1346afc0 = 4eccc2a1-fe4d-4d89-b409-617c1346afc0 ']' 00:15:05.889 12:34:48 -- lvol/rename.sh@153 -- # jq -r '.[0].block_size' 00:15:05.889 12:34:48 -- lvol/rename.sh@153 -- # '[' 512 = 512 ']' 00:15:05.889 12:34:48 -- lvol/rename.sh@154 -- # jq -r '.[0].num_blocks' 00:15:06.148 12:34:48 -- lvol/rename.sh@154 -- # '[' 57344 = 57344 ']' 00:15:06.148 12:34:48 -- lvol/rename.sh@155 -- # jq '.[0].aliases|sort' 00:15:06.148 12:34:48 -- lvol/rename.sh@155 -- # jq '.|sort' 00:15:06.148 12:34:48 -- lvol/rename.sh@155 -- # '[' '[ 00:15:06.148 "lvs_test1/lvol_test_1_3" 00:15:06.148 ]' = '[ 00:15:06.148 "lvs_test1/lvol_test_1_3" 00:15:06.148 ]' ']' 00:15:06.148 12:34:48 -- lvol/rename.sh@157 -- # rpc_cmd bdev_get_bdevs -b b4cce4bc-b24c-4340-b804-73979da83cf9 00:15:06.148 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.148 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.148 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.148 12:34:48 -- lvol/rename.sh@157 -- # lvol='[ 00:15:06.148 { 00:15:06.148 "name": "b4cce4bc-b24c-4340-b804-73979da83cf9", 00:15:06.148 "aliases": [ 00:15:06.148 "lvs_test2/lvol_test_2_3" 00:15:06.148 ], 00:15:06.148 "product_name": "Logical Volume", 00:15:06.148 "block_size": 512, 00:15:06.148 "num_blocks": 57344, 00:15:06.148 "uuid": "b4cce4bc-b24c-4340-b804-73979da83cf9", 00:15:06.148 "assigned_rate_limits": { 00:15:06.148 "rw_ios_per_sec": 0, 00:15:06.148 "rw_mbytes_per_sec": 0, 00:15:06.148 "r_mbytes_per_sec": 0, 00:15:06.148 "w_mbytes_per_sec": 0 00:15:06.148 }, 
00:15:06.148 "claimed": false, 00:15:06.148 "zoned": false, 00:15:06.148 "supported_io_types": { 00:15:06.148 "read": true, 00:15:06.148 "write": true, 00:15:06.148 "unmap": true, 00:15:06.148 "write_zeroes": true, 00:15:06.148 "flush": false, 00:15:06.148 "reset": true, 00:15:06.148 "compare": false, 00:15:06.148 "compare_and_write": false, 00:15:06.148 "abort": false, 00:15:06.148 "nvme_admin": false, 00:15:06.148 "nvme_io": false 00:15:06.148 }, 00:15:06.148 "memory_domains": [ 00:15:06.148 { 00:15:06.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.148 "dma_device_type": 2 00:15:06.148 } 00:15:06.148 ], 00:15:06.148 "driver_specific": { 00:15:06.148 "lvol": { 00:15:06.148 "lvol_store_uuid": "d37b74da-27f3-4887-89f7-036b5d8ed02f", 00:15:06.148 "base_bdev": "Malloc2", 00:15:06.148 "thin_provision": false, 00:15:06.148 "snapshot": false, 00:15:06.148 "clone": false, 00:15:06.148 "esnap_clone": false 00:15:06.148 } 00:15:06.148 } 00:15:06.148 } 00:15:06.148 ]' 00:15:06.148 12:34:48 -- lvol/rename.sh@158 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:06.148 12:34:48 -- lvol/rename.sh@158 -- # '[' d37b74da-27f3-4887-89f7-036b5d8ed02f = d37b74da-27f3-4887-89f7-036b5d8ed02f ']' 00:15:06.148 12:34:48 -- lvol/rename.sh@159 -- # jq -r '.[0].block_size' 00:15:06.148 12:34:48 -- lvol/rename.sh@159 -- # '[' 512 = 512 ']' 00:15:06.407 12:34:48 -- lvol/rename.sh@160 -- # jq -r '.[0].num_blocks' 00:15:06.407 12:34:48 -- lvol/rename.sh@160 -- # '[' 57344 = 57344 ']' 00:15:06.407 12:34:48 -- lvol/rename.sh@161 -- # jq '.[0].aliases|sort' 00:15:06.407 12:34:48 -- lvol/rename.sh@161 -- # jq '.|sort' 00:15:06.407 12:34:48 -- lvol/rename.sh@161 -- # '[' '[ 00:15:06.407 "lvs_test2/lvol_test_2_3" 00:15:06.407 ]' = '[ 00:15:06.407 "lvs_test2/lvol_test_2_3" 00:15:06.407 ]' ']' 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test_1_0 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test_1_1 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test_1_2 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test1/lvol_test_1_3 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in 
"${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test2/lvol_test_2_0 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test2/lvol_test_2_1 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test2/lvol_test_2_2 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@165 -- # for bdev in "${bdev_aliases_1[@]}" "${bdev_aliases_2[@]}" 00:15:06.407 12:34:48 -- lvol/rename.sh@166 -- # rpc_cmd bdev_lvol_delete lvs_test2/lvol_test_2_3 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@168 -- # rpc_cmd bdev_lvol_delete_lvstore -u 4eccc2a1-fe4d-4d89-b409-617c1346afc0 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@169 -- # rpc_cmd bdev_lvol_delete_lvstore -u d37b74da-27f3-4887-89f7-036b5d8ed02f 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 12:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.407 12:34:48 -- lvol/rename.sh@170 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:06.407 12:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.407 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:06.974 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.974 12:34:49 -- lvol/rename.sh@171 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:06.974 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.974 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:06.974 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.974 12:34:49 -- lvol/rename.sh@172 -- # check_leftover_devices 00:15:06.974 12:34:49 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:06.974 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.974 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.233 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.233 12:34:49 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:07.233 12:34:49 -- lvol/common.sh@26 -- # jq length 00:15:07.233 12:34:49 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:07.233 12:34:49 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:07.233 12:34:49 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.233 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.233 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.233 12:34:49 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:07.233 12:34:49 -- lvol/common.sh@28 -- # jq length 00:15:07.233 ************************************ 00:15:07.233 END TEST test_rename_lvs_negative 00:15:07.233 ************************************ 00:15:07.233 12:34:49 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:07.233 00:15:07.233 real 0m6.198s 00:15:07.233 user 0m4.469s 00:15:07.233 sys 0m0.517s 00:15:07.233 12:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.233 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.233 12:34:49 -- lvol/rename.sh@219 -- # run_test test_lvol_rename_negative test_lvol_rename_negative 00:15:07.233 12:34:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:07.233 12:34:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.233 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.233 ************************************ 00:15:07.233 START TEST test_lvol_rename_negative 00:15:07.233 ************************************ 00:15:07.233 12:34:49 -- common/autotest_common.sh@1104 -- # test_lvol_rename_negative 00:15:07.233 12:34:49 -- lvol/rename.sh@181 -- # rpc_cmd bdev_lvol_rename NOTEXIST WHATEVER 00:15:07.233 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.233 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.233 [2024-10-01 12:34:49.681319] vbdev_lvol_rpc.c: 679:rpc_bdev_lvol_rename: *ERROR*: bdev 'NOTEXIST' does not exist 00:15:07.233 request: 00:15:07.233 { 00:15:07.233 "old_name": "NOTEXIST", 00:15:07.233 "new_name": "WHATEVER", 00:15:07.233 "method": "bdev_lvol_rename", 00:15:07.233 "req_id": 1 00:15:07.233 } 00:15:07.233 Got JSON-RPC error response 00:15:07.233 response: 00:15:07.233 { 00:15:07.233 "code": -19, 00:15:07.233 "message": "No such device" 00:15:07.233 } 00:15:07.233 12:34:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:07.233 12:34:49 -- lvol/rename.sh@183 -- # rpc_cmd bdev_malloc_create 128 512 00:15:07.233 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.233 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.493 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.493 12:34:49 -- lvol/rename.sh@183 -- # malloc_name=Malloc3 00:15:07.493 12:34:49 -- lvol/rename.sh@184 -- # rpc_cmd bdev_lvol_create_lvstore Malloc3 lvs_test 00:15:07.493 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.493 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.493 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.493 12:34:49 -- lvol/rename.sh@184 -- # lvs_uuid=86a51a9c-e65e-41c1-94e2-625fdd8b7cc3 00:15:07.493 12:34:49 -- lvol/rename.sh@187 -- # round_down 62 00:15:07.493 12:34:49 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:07.493 12:34:49 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:15:07.493 12:34:49 -- lvol/common.sh@36 -- # echo 60 00:15:07.493 12:34:49 -- lvol/rename.sh@187 -- # lvol_size_mb=60 00:15:07.493 12:34:49 -- lvol/rename.sh@188 -- # lvol_size=62914560 00:15:07.493 12:34:49 -- lvol/rename.sh@191 -- # rpc_cmd bdev_lvol_create -u 86a51a9c-e65e-41c1-94e2-625fdd8b7cc3 lvol_test1 60 00:15:07.493 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.493 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.493 
12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.493 12:34:49 -- lvol/rename.sh@191 -- # lvol_uuid1=ec7e7944-a181-4c1f-a0ac-c63c6c0910b7 00:15:07.493 12:34:49 -- lvol/rename.sh@192 -- # rpc_cmd bdev_lvol_create -u 86a51a9c-e65e-41c1-94e2-625fdd8b7cc3 lvol_test2 60 00:15:07.493 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.493 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.493 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.493 12:34:49 -- lvol/rename.sh@192 -- # lvol_uuid2=a3d94da2-19fe-46cc-98b5-5b77738980e8 00:15:07.493 12:34:49 -- lvol/rename.sh@196 -- # rpc_cmd bdev_lvol_rename lvol_test1 lvol_test2 00:15:07.493 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.493 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.493 [2024-10-01 12:34:49.861553] vbdev_lvol_rpc.c: 679:rpc_bdev_lvol_rename: *ERROR*: bdev 'lvol_test1' does not exist 00:15:07.493 request: 00:15:07.493 { 00:15:07.493 "old_name": "lvol_test1", 00:15:07.493 "new_name": "lvol_test2", 00:15:07.493 "method": "bdev_lvol_rename", 00:15:07.493 "req_id": 1 00:15:07.493 } 00:15:07.493 Got JSON-RPC error response 00:15:07.493 response: 00:15:07.493 { 00:15:07.493 "code": -19, 00:15:07.493 "message": "No such device" 00:15:07.493 } 00:15:07.493 12:34:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:07.493 12:34:49 -- lvol/rename.sh@199 -- # rpc_cmd bdev_get_bdevs -b ec7e7944-a181-4c1f-a0ac-c63c6c0910b7 00:15:07.493 12:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.493 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:07.493 12:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.493 12:34:49 -- lvol/rename.sh@199 -- # lvol='[ 00:15:07.493 { 00:15:07.493 "name": "ec7e7944-a181-4c1f-a0ac-c63c6c0910b7", 00:15:07.493 "aliases": [ 00:15:07.493 "lvs_test/lvol_test1" 00:15:07.493 ], 00:15:07.493 "product_name": "Logical Volume", 00:15:07.493 "block_size": 512, 00:15:07.493 "num_blocks": 122880, 00:15:07.493 "uuid": "ec7e7944-a181-4c1f-a0ac-c63c6c0910b7", 00:15:07.493 "assigned_rate_limits": { 00:15:07.493 "rw_ios_per_sec": 0, 00:15:07.493 "rw_mbytes_per_sec": 0, 00:15:07.493 "r_mbytes_per_sec": 0, 00:15:07.493 "w_mbytes_per_sec": 0 00:15:07.493 }, 00:15:07.493 "claimed": false, 00:15:07.493 "zoned": false, 00:15:07.493 "supported_io_types": { 00:15:07.493 "read": true, 00:15:07.493 "write": true, 00:15:07.493 "unmap": true, 00:15:07.493 "write_zeroes": true, 00:15:07.493 "flush": false, 00:15:07.493 "reset": true, 00:15:07.493 "compare": false, 00:15:07.493 "compare_and_write": false, 00:15:07.493 "abort": false, 00:15:07.493 "nvme_admin": false, 00:15:07.493 "nvme_io": false 00:15:07.493 }, 00:15:07.493 "memory_domains": [ 00:15:07.493 { 00:15:07.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.493 "dma_device_type": 2 00:15:07.493 } 00:15:07.493 ], 00:15:07.493 "driver_specific": { 00:15:07.493 "lvol": { 00:15:07.493 "lvol_store_uuid": "86a51a9c-e65e-41c1-94e2-625fdd8b7cc3", 00:15:07.493 "base_bdev": "Malloc3", 00:15:07.493 "thin_provision": false, 00:15:07.493 "snapshot": false, 00:15:07.493 "clone": false, 00:15:07.493 "esnap_clone": false 00:15:07.493 } 00:15:07.493 } 00:15:07.493 } 00:15:07.493 ]' 00:15:07.493 12:34:49 -- lvol/rename.sh@200 -- # jq -r '.[0].driver_specific.lvol.lvol_store_uuid' 00:15:07.493 12:34:49 -- lvol/rename.sh@200 -- # '[' 86a51a9c-e65e-41c1-94e2-625fdd8b7cc3 = 86a51a9c-e65e-41c1-94e2-625fdd8b7cc3 ']' 00:15:07.493 12:34:49 -- 
lvol/rename.sh@201 -- # jq -r '.[0].block_size' 00:15:07.493 12:34:49 -- lvol/rename.sh@201 -- # '[' 512 = 512 ']' 00:15:07.493 12:34:49 -- lvol/rename.sh@202 -- # jq -r '.[0].num_blocks' 00:15:07.752 12:34:50 -- lvol/rename.sh@202 -- # '[' 122880 = 122880 ']' 00:15:07.752 12:34:50 -- lvol/rename.sh@203 -- # jq -r '.[0].aliases|sort' 00:15:07.752 12:34:50 -- lvol/rename.sh@203 -- # jq '.|sort' 00:15:07.752 12:34:50 -- lvol/rename.sh@203 -- # '[' '[ 00:15:07.752 "lvs_test/lvol_test1" 00:15:07.752 ]' = '[ 00:15:07.752 "lvs_test/lvol_test1" 00:15:07.752 ]' ']' 00:15:07.752 12:34:50 -- lvol/rename.sh@205 -- # rpc_cmd bdev_lvol_delete lvs_test/lvol_test1 00:15:07.752 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.752 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:07.752 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.752 12:34:50 -- lvol/rename.sh@206 -- # rpc_cmd bdev_lvol_delete lvs_test/lvol_test2 00:15:07.752 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.752 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:07.752 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.752 12:34:50 -- lvol/rename.sh@207 -- # rpc_cmd bdev_lvol_delete_lvstore -u 86a51a9c-e65e-41c1-94e2-625fdd8b7cc3 00:15:07.752 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.752 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:07.752 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:07.752 12:34:50 -- lvol/rename.sh@208 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:07.752 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:07.752 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.010 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.010 12:34:50 -- lvol/rename.sh@209 -- # check_leftover_devices 00:15:08.010 12:34:50 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:08.010 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.010 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.010 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.010 12:34:50 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:08.010 12:34:50 -- lvol/common.sh@26 -- # jq length 00:15:08.270 12:34:50 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:08.270 12:34:50 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:08.270 12:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.270 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.270 12:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.270 12:34:50 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:08.270 12:34:50 -- lvol/common.sh@28 -- # jq length 00:15:08.270 ************************************ 00:15:08.270 END TEST test_lvol_rename_negative 00:15:08.270 ************************************ 00:15:08.270 12:34:50 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:08.270 00:15:08.270 real 0m0.920s 00:15:08.270 user 0m0.347s 00:15:08.270 sys 0m0.058s 00:15:08.270 12:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.270 12:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:08.270 12:34:50 -- lvol/rename.sh@221 -- # trap - SIGINT SIGTERM EXIT 00:15:08.270 12:34:50 -- lvol/rename.sh@222 -- # killprocess 62305 00:15:08.270 12:34:50 -- common/autotest_common.sh@926 -- # '[' -z 62305 ']' 00:15:08.270 12:34:50 -- common/autotest_common.sh@930 -- # kill -0 62305 00:15:08.270 12:34:50 -- 
common/autotest_common.sh@931 -- # uname 00:15:08.270 12:34:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.270 12:34:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62305 00:15:08.270 killing process with pid 62305 00:15:08.270 12:34:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.270 12:34:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.270 12:34:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62305' 00:15:08.270 12:34:50 -- common/autotest_common.sh@945 -- # kill 62305 00:15:08.270 12:34:50 -- common/autotest_common.sh@950 -- # wait 62305 00:15:10.175 00:15:10.175 real 0m15.523s 00:15:10.175 user 0m23.662s 00:15:10.175 sys 0m1.648s 00:15:10.175 12:34:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.175 ************************************ 00:15:10.175 END TEST lvol_rename 00:15:10.175 ************************************ 00:15:10.175 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:10.175 12:34:52 -- lvol/lvol.sh@20 -- # run_test lvol_provisioning /home/vagrant/spdk_repo/spdk/test/lvol/thin_provisioning.sh 00:15:10.175 12:34:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:10.175 12:34:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.175 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:10.175 ************************************ 00:15:10.175 START TEST lvol_provisioning 00:15:10.175 ************************************ 00:15:10.175 12:34:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/thin_provisioning.sh 00:15:10.433 * Looking for test storage... 00:15:10.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:15:10.433 12:34:52 -- lvol/thin_provisioning.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:10.433 12:34:52 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:10.433 12:34:52 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:10.433 12:34:52 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:10.433 12:34:52 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:10.433 12:34:52 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:10.433 12:34:52 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:10.433 12:34:52 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:10.433 12:34:52 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:10.433 12:34:52 -- lvol/thin_provisioning.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:10.433 12:34:52 -- bdev/nbd_common.sh@6 -- # set -e 00:15:10.433 12:34:52 -- lvol/thin_provisioning.sh@228 -- # spdk_pid=62960 00:15:10.433 12:34:52 -- lvol/thin_provisioning.sh@229 -- # trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:10.433 12:34:52 -- lvol/thin_provisioning.sh@227 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:10.433 12:34:52 -- lvol/thin_provisioning.sh@230 -- # waitforlisten 62960 00:15:10.434 12:34:52 -- common/autotest_common.sh@819 -- # '[' -z 62960 ']' 00:15:10.434 12:34:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.434 12:34:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.434 12:34:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
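The cluster accounting exercised by test_thin_lvol_check_space below follows directly from the defaults just sourced from lvol/common.sh: the lvstore is created on a 128 MiB malloc bdev with a 4 MiB cluster size and a 124 MiB usable capacity, so it starts out with 124 / 4 = 31 free clusters, and a thin-provisioned (-t) volume only consumes clusters when they are first written. A rough restatement of the progression the trace below reports (these figures repeat the logged free_clusters values, they are not additional measurements):

  124 MiB capacity / 4 MiB cluster                    -> 31 free clusters before any I/O
  4 KiB write at offset 0                             -> allocates cluster 0           -> 30 free
  4 MiB write at offset 6291456 (6 MiB)               -> spans clusters 1 and 2        -> 28 free
  write of the remaining space from offset 12582912   -> fills the other 28 clusters   -> 0 free
  bdev_lvol_delete of the thin volume                 -> clusters are released         -> 31 free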
00:15:10.434 12:34:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.434 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:10.434 [2024-10-01 12:34:52.869288] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:10.434 [2024-10-01 12:34:52.869481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62960 ] 00:15:10.692 [2024-10-01 12:34:53.039276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.692 [2024-10-01 12:34:53.211360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.692 [2024-10-01 12:34:53.211683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.068 12:34:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.068 12:34:54 -- common/autotest_common.sh@852 -- # return 0 00:15:12.068 12:34:54 -- lvol/thin_provisioning.sh@232 -- # run_test test_thin_lvol_check_space test_thin_lvol_check_space 00:15:12.068 12:34:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:12.068 12:34:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.068 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:12.068 ************************************ 00:15:12.068 START TEST test_thin_lvol_check_space 00:15:12.068 ************************************ 00:15:12.068 12:34:54 -- common/autotest_common.sh@1104 -- # test_thin_lvol_check_space 00:15:12.068 12:34:54 -- lvol/thin_provisioning.sh@15 -- # rpc_cmd bdev_malloc_create 128 512 00:15:12.068 12:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.068 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:12.326 12:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@15 -- # malloc_name=Malloc0 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@16 -- # rpc_cmd bdev_lvol_create_lvstore Malloc0 lvs_test 00:15:12.326 12:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.326 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:12.326 12:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@16 -- # lvs_uuid=949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@17 -- # rpc_cmd bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:12.326 12:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.326 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:12.326 12:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@17 -- # lvs='[ 00:15:12.326 { 00:15:12.326 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:12.326 "name": "lvs_test", 00:15:12.326 "base_bdev": "Malloc0", 00:15:12.326 "total_data_clusters": 31, 00:15:12.326 "free_clusters": 31, 00:15:12.326 "block_size": 512, 00:15:12.326 "cluster_size": 4194304 00:15:12.326 } 00:15:12.326 ]' 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@18 -- # jq -r '.[0].free_clusters' 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@18 -- # free_clusters_start=31 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@21 -- # round_down 124 00:15:12.326 12:34:54 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:12.326 12:34:54 -- lvol/common.sh@33 -- # '[' -n '' ']' 
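For readers who want to reproduce this flow outside the harness, the sequence traced in this section can be approximated with the standalone invocations below against a running spdk_tgt. This is only a sketch assembled from the commands visible in the trace: rpc_cmd appears to wrap scripts/rpc.py -s /var/tmp/spdk.sock, and the <lvs_uuid>/<lvol_uuid> placeholders stand in for the UUIDs printed by the earlier calls.

  # 128 MiB malloc bdev with 512-byte blocks, lvstore on top (4 MiB clusters by default)
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 128 512
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_lvol_create_lvstore Malloc0 lvs_test
  # 124 MiB thin-provisioned volume (-t), exposed over NBD
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_lvol_create -u <lvs_uuid> lvol_test 124 -t
  scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk <lvol_uuid> /dev/nbd0
  # write a verification pattern through the NBD device, then check cluster consumption
  fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4096 --rw=write --direct=1 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].free_clusters'
  # teardown mirrors the trace: nbd_stop_disk, bdev_lvol_delete, bdev_lvol_delete_lvstore, bdev_malloc_delete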
00:15:12.326 12:34:54 -- lvol/common.sh@36 -- # echo 124 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@21 -- # lvol_size_mb=124 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@22 -- # rpc_cmd bdev_lvol_create -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff lvol_test 124 -t 00:15:12.326 12:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.326 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:12.326 12:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@22 -- # lvol_uuid=19770589-6c3b-4869-8670-71ae8edcd9f0 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@24 -- # rpc_cmd bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:12.326 12:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.326 12:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:12.326 12:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@24 -- # lvs='[ 00:15:12.326 { 00:15:12.326 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:12.326 "name": "lvs_test", 00:15:12.326 "base_bdev": "Malloc0", 00:15:12.326 "total_data_clusters": 31, 00:15:12.326 "free_clusters": 31, 00:15:12.326 "block_size": 512, 00:15:12.326 "cluster_size": 4194304 00:15:12.326 } 00:15:12.326 ]' 00:15:12.326 12:34:54 -- lvol/thin_provisioning.sh@25 -- # jq -r '.[0].free_clusters' 00:15:12.585 12:34:54 -- lvol/thin_provisioning.sh@25 -- # free_clusters_create_lvol=31 00:15:12.585 12:34:54 -- lvol/thin_provisioning.sh@26 -- # '[' 31 == 31 ']' 00:15:12.585 12:34:54 -- lvol/thin_provisioning.sh@29 -- # size=4194304 00:15:12.585 12:34:54 -- lvol/thin_provisioning.sh@30 -- # nbd_start_disks /var/tmp/spdk.sock 19770589-6c3b-4869-8670-71ae8edcd9f0 /dev/nbd0 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('19770589-6c3b-4869-8670-71ae8edcd9f0') 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@12 -- # local i 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.585 12:34:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 19770589-6c3b-4869-8670-71ae8edcd9f0 /dev/nbd0 00:15:12.843 /dev/nbd0 00:15:12.843 12:34:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.843 12:34:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.843 12:34:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:12.843 12:34:55 -- common/autotest_common.sh@857 -- # local i 00:15:12.843 12:34:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:12.843 12:34:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:12.843 12:34:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:12.843 12:34:55 -- common/autotest_common.sh@861 -- # break 00:15:12.843 12:34:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:12.843 12:34:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:12.843 12:34:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:12.843 1+0 records in 00:15:12.843 1+0 records out 00:15:12.843 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.00041628 s, 9.8 MB/s 00:15:12.843 12:34:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:12.843 12:34:55 -- common/autotest_common.sh@874 -- # size=4096 00:15:12.843 12:34:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:12.843 12:34:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:12.843 12:34:55 -- common/autotest_common.sh@877 -- # return 0 00:15:12.843 12:34:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.843 12:34:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.843 12:34:55 -- lvol/thin_provisioning.sh@31 -- # run_fio_test /dev/nbd0 0 4096 write 0xcc 00:15:12.843 12:34:55 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:12.843 12:34:55 -- lvol/common.sh@41 -- # local offset=0 00:15:12.843 12:34:55 -- lvol/common.sh@42 -- # local size=4096 00:15:12.843 12:34:55 -- lvol/common.sh@43 -- # local rw=write 00:15:12.843 12:34:55 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:12.843 12:34:55 -- lvol/common.sh@45 -- # local extra_params= 00:15:12.843 12:34:55 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:12.843 12:34:55 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:12.843 12:34:55 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:12.843 12:34:55 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4096 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:12.843 12:34:55 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=4096 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:12.843 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:12.843 fio-3.35 00:15:12.843 Starting 1 process 00:15:13.102 00:15:13.102 fio_test: (groupid=0, jobs=1): err= 0: pid=63021: Tue Oct 1 12:34:55 2024 00:15:13.102 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096B/1msec) 00:15:13.102 clat (nsec): min=188731, max=188731, avg=188731.00, stdev= 0.00 00:15:13.102 lat (nsec): min=189028, max=189028, avg=189028.00, stdev= 0.00 00:15:13.102 clat percentiles (usec): 00:15:13.102 | 1.00th=[ 190], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 190], 00:15:13.102 | 30.00th=[ 190], 40.00th=[ 190], 50.00th=[ 190], 60.00th=[ 190], 00:15:13.102 | 70.00th=[ 190], 80.00th=[ 190], 90.00th=[ 190], 95.00th=[ 190], 00:15:13.102 | 99.00th=[ 190], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 190], 00:15:13.102 | 99.99th=[ 190] 00:15:13.102 write: IOPS=500, BW=2000KiB/s (2048kB/s)(4096B/2msec); 0 zone resets 00:15:13.102 clat (nsec): min=535293, max=535293, avg=535293.00, stdev= 0.00 00:15:13.102 lat (nsec): min=568248, max=568248, avg=568248.00, stdev= 0.00 00:15:13.102 clat percentiles (usec): 00:15:13.102 | 1.00th=[ 537], 5.00th=[ 537], 10.00th=[ 537], 20.00th=[ 537], 00:15:13.102 | 30.00th=[ 537], 40.00th=[ 537], 50.00th=[ 537], 60.00th=[ 537], 00:15:13.102 | 70.00th=[ 537], 80.00th=[ 537], 90.00th=[ 537], 95.00th=[ 537], 00:15:13.102 | 99.00th=[ 537], 99.50th=[ 537], 99.90th=[ 537], 99.95th=[ 537], 00:15:13.102 | 99.99th=[ 537] 00:15:13.102 lat (usec) : 250=50.00%, 750=50.00% 00:15:13.102 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=12 00:15:13.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.103 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.103 issued rwts: total=1,1,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.103 00:15:13.103 Run status group 0 (all jobs): 00:15:13.103 READ: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), io=4096B (4096B), run=1-1msec 00:15:13.103 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=4096B (4096B), run=2-2msec 00:15:13.103 00:15:13.103 Disk stats (read/write): 00:15:13.103 nbd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@32 -- # rpc_cmd bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:13.103 12:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.103 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:15:13.103 12:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@32 -- # lvs='[ 00:15:13.103 { 00:15:13.103 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:13.103 "name": "lvs_test", 00:15:13.103 "base_bdev": "Malloc0", 00:15:13.103 "total_data_clusters": 31, 00:15:13.103 "free_clusters": 30, 00:15:13.103 "block_size": 512, 00:15:13.103 "cluster_size": 4194304 00:15:13.103 } 00:15:13.103 ]' 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@33 -- # jq -r '.[0].free_clusters' 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@33 -- # free_clusters_first_fio=30 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@34 -- # '[' 31 == 31 ']' 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@37 -- # offset=6291456 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@38 -- # size=4194304 00:15:13.103 12:34:55 -- lvol/thin_provisioning.sh@39 -- # run_fio_test /dev/nbd0 6291456 4194304 write 0xcc 00:15:13.103 12:34:55 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:13.103 12:34:55 -- lvol/common.sh@41 -- # local offset=6291456 00:15:13.103 12:34:55 -- lvol/common.sh@42 -- # local size=4194304 00:15:13.103 12:34:55 -- lvol/common.sh@43 -- # local rw=write 00:15:13.103 12:34:55 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:13.103 12:34:55 -- lvol/common.sh@45 -- # local extra_params= 00:15:13.103 12:34:55 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:13.103 12:34:55 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:13.103 12:34:55 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:13.103 12:34:55 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=6291456 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:13.103 12:34:55 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=6291456 --size=4194304 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:13.103 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:13.103 fio-3.35 00:15:13.103 Starting 1 process 00:15:13.705 00:15:13.705 fio_test: (groupid=0, jobs=1): err= 0: pid=63031: Tue Oct 1 12:34:55 2024 00:15:13.705 read: IOPS=9846, BW=38.5MiB/s (40.3MB/s)(4096KiB/104msec) 00:15:13.705 clat (usec): min=68, max=652, avg=99.19, stdev=43.30 00:15:13.705 lat (usec): min=68, max=652, avg=99.31, 
stdev=43.30 00:15:13.705 clat percentiles (usec): 00:15:13.705 | 1.00th=[ 72], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 86], 00:15:13.705 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 95], 00:15:13.705 | 70.00th=[ 98], 80.00th=[ 105], 90.00th=[ 119], 95.00th=[ 133], 00:15:13.705 | 99.00th=[ 253], 99.50th=[ 510], 99.90th=[ 578], 99.95th=[ 652], 00:15:13.705 | 99.99th=[ 652] 00:15:13.705 write: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(4096KiB/106msec); 0 zone resets 00:15:13.705 clat (usec): min=72, max=432, avg=100.25, stdev=17.18 00:15:13.705 lat (usec): min=73, max=455, avg=101.22, stdev=17.86 00:15:13.705 clat percentiles (usec): 00:15:13.705 | 1.00th=[ 84], 5.00th=[ 89], 10.00th=[ 90], 20.00th=[ 91], 00:15:13.705 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 97], 00:15:13.705 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 126], 00:15:13.705 | 99.00th=[ 145], 99.50th=[ 149], 99.90th=[ 233], 99.95th=[ 433], 00:15:13.705 | 99.99th=[ 433] 00:15:13.705 lat (usec) : 100=69.78%, 250=29.64%, 500=0.29%, 750=0.29% 00:15:13.705 cpu : usr=2.40%, sys=8.65%, ctx=2050, majf=0, minf=44 00:15:13.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.705 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.705 00:15:13.705 Run status group 0 (all jobs): 00:15:13.705 READ: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=4096KiB (4194kB), run=104-104msec 00:15:13.705 WRITE: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=4096KiB (4194kB), run=106-106msec 00:15:13.705 00:15:13.705 Disk stats (read/write): 00:15:13.705 nbd0: ios=422/1024, merge=0/0, ticks=49/93, in_queue=142, util=60.91% 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@40 -- # rpc_cmd bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:13.705 12:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.705 12:34:55 -- common/autotest_common.sh@10 -- # set +x 00:15:13.705 12:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@40 -- # lvs='[ 00:15:13.705 { 00:15:13.705 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:13.705 "name": "lvs_test", 00:15:13.705 "base_bdev": "Malloc0", 00:15:13.705 "total_data_clusters": 31, 00:15:13.705 "free_clusters": 28, 00:15:13.705 "block_size": 512, 00:15:13.705 "cluster_size": 4194304 00:15:13.705 } 00:15:13.705 ]' 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@41 -- # jq -r '.[0].free_clusters' 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@41 -- # free_clusters_second_fio=28 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@42 -- # '[' 31 == 31 ']' 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@45 -- # size=125829120 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@46 -- # offset=12582912 00:15:13.705 12:34:55 -- lvol/thin_provisioning.sh@47 -- # run_fio_test /dev/nbd0 12582912 125829120 write 0xcc 00:15:13.705 12:34:55 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:13.705 12:34:55 -- lvol/common.sh@41 -- # local offset=12582912 00:15:13.705 12:34:55 -- lvol/common.sh@42 -- # local size=125829120 00:15:13.705 12:34:55 -- lvol/common.sh@43 -- # local rw=write 00:15:13.705 12:34:55 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:13.705 
12:34:55 -- lvol/common.sh@45 -- # local extra_params= 00:15:13.705 12:34:55 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:13.705 12:34:55 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:13.705 12:34:55 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:13.705 12:34:55 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=125829120 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:13.705 12:34:55 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=12582912 --size=125829120 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:13.705 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:13.705 fio-3.35 00:15:13.705 Starting 1 process 00:15:18.976 00:15:18.976 fio_test: (groupid=0, jobs=1): err= 0: pid=63044: Tue Oct 1 12:35:01 2024 00:15:18.976 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(112MiB/2512msec) 00:15:18.976 clat (usec): min=60, max=2833, avg=86.36, stdev=28.12 00:15:18.976 lat (usec): min=60, max=2833, avg=86.45, stdev=28.13 00:15:18.976 clat percentiles (usec): 00:15:18.976 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 68], 20.00th=[ 71], 00:15:18.976 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 87], 00:15:18.976 | 70.00th=[ 90], 80.00th=[ 98], 90.00th=[ 109], 95.00th=[ 118], 00:15:18.976 | 99.00th=[ 139], 99.50th=[ 153], 99.90th=[ 289], 99.95th=[ 445], 00:15:18.976 | 99.99th=[ 1254] 00:15:18.976 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(112MiB/2398msec); 0 zone resets 00:15:18.976 clat (usec): min=62, max=877, avg=81.94, stdev=19.64 00:15:18.976 lat (usec): min=63, max=878, avg=82.83, stdev=19.98 00:15:18.976 clat percentiles (usec): 00:15:18.976 | 1.00th=[ 65], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 69], 00:15:18.976 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 75], 60.00th=[ 84], 00:15:18.976 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 115], 00:15:18.976 | 99.00th=[ 137], 99.50th=[ 149], 99.90th=[ 262], 99.95th=[ 314], 00:15:18.976 | 99.99th=[ 553] 00:15:18.976 bw ( KiB/s): min=40128, max=50488, per=95.92%, avg=45875.20, stdev=4316.13, samples=5 00:15:18.976 iops : min=10032, max=12622, avg=11468.80, stdev=1079.03, samples=5 00:15:18.976 lat (usec) : 100=84.95%, 250=14.91%, 500=0.11%, 750=0.02%, 1000=0.01% 00:15:18.976 lat (msec) : 2=0.01%, 4=0.01% 00:15:18.976 cpu : usr=3.30%, sys=7.44%, ctx=60240, majf=0, minf=711 00:15:18.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.976 issued rwts: total=28672,28672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.976 00:15:18.976 Run status group 0 (all jobs): 00:15:18.976 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=112MiB (117MB), run=2512-2512msec 00:15:18.976 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=112MiB (117MB), run=2398-2398msec 00:15:18.976 00:15:18.976 Disk stats (read/write): 00:15:18.976 nbd0: ios=28569/28672, merge=0/0, ticks=2271/2128, in_queue=4400, util=98.17% 00:15:18.976 12:35:01 -- lvol/thin_provisioning.sh@48 -- # rpc_cmd 
bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:18.976 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.976 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.976 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.976 12:35:01 -- lvol/thin_provisioning.sh@48 -- # lvs='[ 00:15:18.976 { 00:15:18.976 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:18.976 "name": "lvs_test", 00:15:18.976 "base_bdev": "Malloc0", 00:15:18.976 "total_data_clusters": 31, 00:15:18.976 "free_clusters": 0, 00:15:18.976 "block_size": 512, 00:15:18.976 "cluster_size": 4194304 00:15:18.976 } 00:15:18.976 ]' 00:15:18.976 12:35:01 -- lvol/thin_provisioning.sh@50 -- # jq -r '.[0].free_clusters' 00:15:18.976 12:35:01 -- lvol/thin_provisioning.sh@50 -- # free_clusters_third_fio=0 00:15:18.976 12:35:01 -- lvol/thin_provisioning.sh@51 -- # '[' 0 == 0 ']' 00:15:18.976 12:35:01 -- lvol/thin_provisioning.sh@53 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@51 -- # local i 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.977 12:35:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.235 12:35:01 -- bdev/nbd_common.sh@41 -- # break 00:15:19.235 12:35:01 -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.235 12:35:01 -- lvol/thin_provisioning.sh@54 -- # rpc_cmd bdev_lvol_delete 19770589-6c3b-4869-8670-71ae8edcd9f0 00:15:19.235 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.235 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.235 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@55 -- # rpc_cmd bdev_get_bdevs -b 19770589-6c3b-4869-8670-71ae8edcd9f0 00:15:19.236 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.236 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.236 [2024-10-01 12:35:01.528805] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 19770589-6c3b-4869-8670-71ae8edcd9f0 00:15:19.236 request: 00:15:19.236 { 00:15:19.236 "name": "19770589-6c3b-4869-8670-71ae8edcd9f0", 00:15:19.236 "method": "bdev_get_bdevs", 00:15:19.236 "req_id": 1 00:15:19.236 } 00:15:19.236 Got JSON-RPC error response 00:15:19.236 response: 00:15:19.236 { 00:15:19.236 "code": -19, 00:15:19.236 "message": "No such device" 00:15:19.236 } 00:15:19.236 12:35:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@56 -- # rpc_cmd bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:19.236 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.236 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.236 12:35:01 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@56 -- # lvs='[ 00:15:19.236 { 00:15:19.236 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:19.236 "name": "lvs_test", 00:15:19.236 "base_bdev": "Malloc0", 00:15:19.236 "total_data_clusters": 31, 00:15:19.236 "free_clusters": 31, 00:15:19.236 "block_size": 512, 00:15:19.236 "cluster_size": 4194304 00:15:19.236 } 00:15:19.236 ]' 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@57 -- # jq -r '.[0].free_clusters' 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@57 -- # free_clusters_end=31 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@58 -- # '[' 31 == 31 ']' 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@61 -- # rpc_cmd bdev_lvol_delete_lvstore -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:19.236 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.236 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.236 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@62 -- # rpc_cmd bdev_lvol_get_lvstores -u 949d38a4-e0c6-400c-8b41-b1000e93b1ff 00:15:19.236 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.236 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.236 request: 00:15:19.236 { 00:15:19.236 "uuid": "949d38a4-e0c6-400c-8b41-b1000e93b1ff", 00:15:19.236 "method": "bdev_lvol_get_lvstores", 00:15:19.236 "req_id": 1 00:15:19.236 } 00:15:19.236 Got JSON-RPC error response 00:15:19.236 response: 00:15:19.236 { 00:15:19.236 "code": -19, 00:15:19.236 "message": "No such device" 00:15:19.236 } 00:15:19.236 12:35:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:19.236 12:35:01 -- lvol/thin_provisioning.sh@63 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:19.236 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.236 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.495 ************************************ 00:15:19.495 END TEST test_thin_lvol_check_space 00:15:19.495 ************************************ 00:15:19.495 12:35:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.495 00:15:19.495 real 0m7.355s 00:15:19.495 user 0m1.278s 00:15:19.495 sys 0m0.613s 00:15:19.495 12:35:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.495 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.495 12:35:01 -- lvol/thin_provisioning.sh@233 -- # run_test test_thin_lvol_check_zeroes test_thin_lvol_check_zeroes 00:15:19.495 12:35:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:19.495 12:35:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:19.495 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.495 ************************************ 00:15:19.495 START TEST test_thin_lvol_check_zeroes 00:15:19.495 ************************************ 00:15:19.495 12:35:01 -- common/autotest_common.sh@1104 -- # test_thin_lvol_check_zeroes 00:15:19.495 12:35:01 -- lvol/thin_provisioning.sh@69 -- # rpc_cmd bdev_malloc_create 128 512 00:15:19.495 12:35:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.495 12:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.754 12:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@69 -- # malloc_name=Malloc1 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@70 -- # rpc_cmd bdev_lvol_create_lvstore Malloc1 lvs_test 00:15:19.754 12:35:02 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:15:19.754 12:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.754 12:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@70 -- # lvs_uuid=10a461a2-e060-40c3-97ad-146e448a21b6 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@71 -- # rpc_cmd bdev_lvol_get_lvstores -u 10a461a2-e060-40c3-97ad-146e448a21b6 00:15:19.754 12:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.754 12:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.754 12:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@71 -- # lvs='[ 00:15:19.754 { 00:15:19.754 "uuid": "10a461a2-e060-40c3-97ad-146e448a21b6", 00:15:19.754 "name": "lvs_test", 00:15:19.754 "base_bdev": "Malloc1", 00:15:19.754 "total_data_clusters": 31, 00:15:19.754 "free_clusters": 31, 00:15:19.754 "block_size": 512, 00:15:19.754 "cluster_size": 4194304 00:15:19.754 } 00:15:19.754 ]' 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@72 -- # jq -r '.[0].free_clusters' 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@72 -- # free_clusters_start=31 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@75 -- # lbd_name0=lvol_test0 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@76 -- # lbd_name1=lvol_test1 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@77 -- # lvol_size_mb=124 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@79 -- # lvol_size_mb=124 00:15:19.754 12:35:02 -- lvol/thin_provisioning.sh@80 -- # lvol_size=130023424 00:15:19.755 12:35:02 -- lvol/thin_provisioning.sh@81 -- # rpc_cmd bdev_lvol_create -u 10a461a2-e060-40c3-97ad-146e448a21b6 lvol_test0 124 00:15:19.755 12:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.755 12:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.755 12:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.755 12:35:02 -- lvol/thin_provisioning.sh@81 -- # lvol_uuid0=f9cdfe07-245e-4096-9c8f-d965f8a5f4a4 00:15:19.755 12:35:02 -- lvol/thin_provisioning.sh@82 -- # rpc_cmd bdev_lvol_create -u 10a461a2-e060-40c3-97ad-146e448a21b6 lvol_test1 124 -t 00:15:19.755 12:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.755 12:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.755 12:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.755 12:35:02 -- lvol/thin_provisioning.sh@82 -- # lvol_uuid1=aa018d20-00b6-43a4-825c-5be7252ce11c 00:15:19.755 12:35:02 -- lvol/thin_provisioning.sh@84 -- # nbd_start_disks /var/tmp/spdk.sock f9cdfe07-245e-4096-9c8f-d965f8a5f4a4 /dev/nbd0 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('f9cdfe07-245e-4096-9c8f-d965f8a5f4a4') 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@12 -- # local i 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.755 12:35:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk f9cdfe07-245e-4096-9c8f-d965f8a5f4a4 /dev/nbd0 00:15:20.013 /dev/nbd0 00:15:20.013 12:35:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.013 12:35:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.013 
12:35:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:20.013 12:35:02 -- common/autotest_common.sh@857 -- # local i 00:15:20.013 12:35:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:20.013 12:35:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:20.013 12:35:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:20.013 12:35:02 -- common/autotest_common.sh@861 -- # break 00:15:20.013 12:35:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:20.013 12:35:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:20.013 12:35:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:20.013 1+0 records in 00:15:20.013 1+0 records out 00:15:20.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005672 s, 7.2 MB/s 00:15:20.014 12:35:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:20.014 12:35:02 -- common/autotest_common.sh@874 -- # size=4096 00:15:20.014 12:35:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:20.014 12:35:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:20.014 12:35:02 -- common/autotest_common.sh@877 -- # return 0 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.014 12:35:02 -- lvol/thin_provisioning.sh@85 -- # nbd_start_disks /var/tmp/spdk.sock aa018d20-00b6-43a4-825c-5be7252ce11c /dev/nbd1 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('aa018d20-00b6-43a4-825c-5be7252ce11c') 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@12 -- # local i 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.014 12:35:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk aa018d20-00b6-43a4-825c-5be7252ce11c /dev/nbd1 00:15:20.272 /dev/nbd1 00:15:20.531 12:35:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:20.531 12:35:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:20.531 12:35:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:20.531 12:35:02 -- common/autotest_common.sh@857 -- # local i 00:15:20.531 12:35:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:20.531 12:35:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:20.531 12:35:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:20.531 12:35:02 -- common/autotest_common.sh@861 -- # break 00:15:20.531 12:35:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:20.531 12:35:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:20.531 12:35:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:20.531 1+0 records in 00:15:20.531 1+0 records out 00:15:20.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002596 s, 15.8 MB/s 00:15:20.531 12:35:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:20.531 12:35:02 -- 
common/autotest_common.sh@874 -- # size=4096 00:15:20.531 12:35:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:20.531 12:35:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:20.531 12:35:02 -- common/autotest_common.sh@877 -- # return 0 00:15:20.531 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.531 12:35:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.531 12:35:02 -- lvol/thin_provisioning.sh@88 -- # run_fio_test /dev/nbd0 0 130023424 write 0xcc 00:15:20.531 12:35:02 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:20.531 12:35:02 -- lvol/common.sh@41 -- # local offset=0 00:15:20.531 12:35:02 -- lvol/common.sh@42 -- # local size=130023424 00:15:20.531 12:35:02 -- lvol/common.sh@43 -- # local rw=write 00:15:20.531 12:35:02 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:20.531 12:35:02 -- lvol/common.sh@45 -- # local extra_params= 00:15:20.531 12:35:02 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:20.531 12:35:02 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:20.531 12:35:02 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:20.531 12:35:02 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=130023424 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:20.531 12:35:02 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=130023424 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:20.531 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:20.531 fio-3.35 00:15:20.531 Starting 1 process 00:15:28.669 00:15:28.670 fio_test: (groupid=0, jobs=1): err= 0: pid=63166: Tue Oct 1 12:35:09 2024 00:15:28.670 read: IOPS=9098, BW=35.5MiB/s (37.3MB/s)(124MiB/3489msec) 00:15:28.670 clat (usec): min=87, max=3049, avg=108.67, stdev=27.06 00:15:28.670 lat (usec): min=88, max=3049, avg=108.75, stdev=27.06 00:15:28.670 clat percentiles (usec): 00:15:28.670 | 1.00th=[ 91], 5.00th=[ 92], 10.00th=[ 93], 20.00th=[ 95], 00:15:28.670 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 105], 60.00th=[ 109], 00:15:28.670 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 143], 00:15:28.670 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 198], 99.95th=[ 306], 00:15:28.670 | 99.99th=[ 594] 00:15:28.670 write: IOPS=9185, BW=35.9MiB/s (37.6MB/s)(124MiB/3456msec); 0 zone resets 00:15:28.670 clat (usec): min=79, max=2282, avg=107.25, stdev=31.92 00:15:28.670 lat (usec): min=80, max=2283, avg=108.09, stdev=32.07 00:15:28.670 clat percentiles (usec): 00:15:28.670 | 1.00th=[ 84], 5.00th=[ 89], 10.00th=[ 89], 20.00th=[ 91], 00:15:28.670 | 30.00th=[ 93], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 108], 00:15:28.670 | 70.00th=[ 114], 80.00th=[ 122], 90.00th=[ 133], 95.00th=[ 143], 00:15:28.670 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 215], 99.95th=[ 326], 00:15:28.670 | 99.99th=[ 1500] 00:15:28.670 bw ( KiB/s): min=32808, max=38208, per=98.74%, avg=36278.86, stdev=2348.08, samples=7 00:15:28.670 iops : min= 8202, max= 9552, avg=9069.71, stdev=587.02, samples=7 00:15:28.670 lat (usec) : 100=43.15%, 250=56.78%, 500=0.03%, 750=0.01%, 1000=0.01% 00:15:28.670 lat (msec) : 2=0.01%, 4=0.01% 00:15:28.670 cpu : usr=2.45%, sys=6.25%, ctx=63571, majf=0, minf=783 00:15:28.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.670 issued rwts: total=31744,31744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.670 00:15:28.670 Run status group 0 (all jobs): 00:15:28.670 READ: bw=35.5MiB/s (37.3MB/s), 35.5MiB/s-35.5MiB/s (37.3MB/s-37.3MB/s), io=124MiB (130MB), run=3489-3489msec 00:15:28.670 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=124MiB (130MB), run=3456-3456msec 00:15:28.670 00:15:28.670 Disk stats (read/write): 00:15:28.670 nbd0: ios=31340/31744, merge=0/0, ticks=3187/3154, in_queue=6341, util=98.73% 00:15:28.670 12:35:10 -- lvol/thin_provisioning.sh@92 -- # run_fio_test /dev/nbd1 0 130023424 read 0x00 00:15:28.670 12:35:10 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:15:28.670 12:35:10 -- lvol/common.sh@41 -- # local offset=0 00:15:28.670 12:35:10 -- lvol/common.sh@42 -- # local size=130023424 00:15:28.670 12:35:10 -- lvol/common.sh@43 -- # local rw=read 00:15:28.670 12:35:10 -- lvol/common.sh@44 -- # local pattern=0x00 00:15:28.670 12:35:10 -- lvol/common.sh@45 -- # local extra_params= 00:15:28.670 12:35:10 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:28.670 12:35:10 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:15:28.670 12:35:10 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:15:28.670 12:35:10 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=130023424 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:15:28.670 12:35:10 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=130023424 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:15:28.670 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:28.670 fio-3.35 00:15:28.670 Starting 1 process 00:15:31.197 00:15:31.197 fio_test: (groupid=0, jobs=1): err= 0: pid=63257: Tue Oct 1 12:35:13 2024 00:15:31.197 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(124MiB/2972msec) 00:15:31.197 clat (usec): min=75, max=608, avg=92.33, stdev=15.34 00:15:31.197 lat (usec): min=75, max=608, avg=92.45, stdev=15.35 00:15:31.197 clat percentiles (usec): 00:15:31.197 | 1.00th=[ 78], 5.00th=[ 79], 10.00th=[ 79], 20.00th=[ 81], 00:15:31.197 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 90], 00:15:31.197 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 114], 95.00th=[ 125], 00:15:31.197 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 172], 99.95th=[ 194], 00:15:31.197 | 99.99th=[ 241] 00:15:31.197 bw ( KiB/s): min=41672, max=43448, per=99.89%, avg=42675.20, stdev=821.07, samples=5 00:15:31.197 iops : min=10418, max=10862, avg=10668.80, stdev=205.27, samples=5 00:15:31.197 lat (usec) : 100=76.84%, 250=23.15%, 500=0.01%, 750=0.01% 00:15:31.197 cpu : usr=2.49%, sys=7.74%, ctx=31837, majf=0, minf=10 00:15:31.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.197 issued rwts: total=31744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.197 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:15:31.197 00:15:31.197 Run status group 0 (all jobs): 00:15:31.197 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=124MiB (130MB), run=2972-2972msec 00:15:31.197 00:15:31.197 Disk stats (read/write): 00:15:31.197 nbd1: ios=30967/0, merge=0/0, ticks=2618/0, in_queue=2617, util=96.76% 00:15:31.197 12:35:13 -- lvol/thin_provisioning.sh@95 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@51 -- # local i 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@41 -- # break 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.197 12:35:13 -- lvol/thin_provisioning.sh@96 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@51 -- # local i 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.197 12:35:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@41 -- # break 00:15:31.455 12:35:13 -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.455 12:35:13 -- lvol/thin_provisioning.sh@97 -- # rpc_cmd bdev_lvol_delete aa018d20-00b6-43a4-825c-5be7252ce11c 00:15:31.455 12:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.455 12:35:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.455 12:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.455 12:35:13 -- lvol/thin_provisioning.sh@98 -- # rpc_cmd bdev_lvol_delete f9cdfe07-245e-4096-9c8f-d965f8a5f4a4 00:15:31.455 12:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.455 12:35:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.455 12:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.455 12:35:13 -- lvol/thin_provisioning.sh@99 -- # rpc_cmd bdev_lvol_delete_lvstore -u 10a461a2-e060-40c3-97ad-146e448a21b6 00:15:31.455 12:35:13 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:15:31.455 12:35:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.455 12:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.455 12:35:13 -- lvol/thin_provisioning.sh@100 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:31.455 12:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.455 12:35:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.713 ************************************ 00:15:31.713 END TEST test_thin_lvol_check_zeroes 00:15:31.713 ************************************ 00:15:31.713 12:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.713 00:15:31.713 real 0m12.148s 00:15:31.713 user 0m1.509s 00:15:31.713 sys 0m0.919s 00:15:31.713 12:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.713 12:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:31.713 12:35:14 -- lvol/thin_provisioning.sh@234 -- # run_test test_thin_lvol_check_integrity test_thin_lvol_check_integrity 00:15:31.713 12:35:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:31.713 12:35:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:31.713 12:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:31.713 ************************************ 00:15:31.713 START TEST test_thin_lvol_check_integrity 00:15:31.713 ************************************ 00:15:31.713 12:35:14 -- common/autotest_common.sh@1104 -- # test_thin_lvol_check_integrity 00:15:31.713 12:35:14 -- lvol/thin_provisioning.sh@106 -- # rpc_cmd bdev_malloc_create 128 512 00:15:31.713 12:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.713 12:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:31.971 12:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@106 -- # malloc_name=Malloc2 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@107 -- # rpc_cmd bdev_lvol_create_lvstore Malloc2 lvs_test 00:15:31.971 12:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.971 12:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:31.971 12:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@107 -- # lvs_uuid=5a67263f-afde-4c9d-a9bd-0c724315cb6c 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@110 -- # lvol_size_mb=124 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@112 -- # lvol_size_mb=124 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@113 -- # lvol_size=130023424 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@114 -- # rpc_cmd bdev_lvol_create -u 5a67263f-afde-4c9d-a9bd-0c724315cb6c lvol_test 124 -t 00:15:31.971 12:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.971 12:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:31.971 12:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@114 -- # lvol_uuid=ba7ec2f7-0386-4f94-8a57-3cfe82d9918d 00:15:31.971 12:35:14 -- lvol/thin_provisioning.sh@116 -- # nbd_start_disks /var/tmp/spdk.sock ba7ec2f7-0386-4f94-8a57-3cfe82d9918d /dev/nbd0 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('ba7ec2f7-0386-4f94-8a57-3cfe82d9918d') 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@12 -- 
# local i 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.971 12:35:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk ba7ec2f7-0386-4f94-8a57-3cfe82d9918d /dev/nbd0 00:15:32.230 /dev/nbd0 00:15:32.230 12:35:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.230 12:35:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.230 12:35:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:32.230 12:35:14 -- common/autotest_common.sh@857 -- # local i 00:15:32.230 12:35:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:32.230 12:35:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:32.230 12:35:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:32.230 12:35:14 -- common/autotest_common.sh@861 -- # break 00:15:32.230 12:35:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:32.230 12:35:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:32.230 12:35:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:32.230 1+0 records in 00:15:32.230 1+0 records out 00:15:32.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541362 s, 7.6 MB/s 00:15:32.230 12:35:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:32.230 12:35:14 -- common/autotest_common.sh@874 -- # size=4096 00:15:32.230 12:35:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:32.230 12:35:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:32.230 12:35:14 -- common/autotest_common.sh@877 -- # return 0 00:15:32.230 12:35:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.230 12:35:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.230 12:35:14 -- lvol/thin_provisioning.sh@117 -- # run_fio_test /dev/nbd0 0 130023424 write 0xcc 00:15:32.230 12:35:14 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:32.230 12:35:14 -- lvol/common.sh@41 -- # local offset=0 00:15:32.230 12:35:14 -- lvol/common.sh@42 -- # local size=130023424 00:15:32.230 12:35:14 -- lvol/common.sh@43 -- # local rw=write 00:15:32.230 12:35:14 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:32.230 12:35:14 -- lvol/common.sh@45 -- # local extra_params= 00:15:32.230 12:35:14 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:32.230 12:35:14 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:32.230 12:35:14 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:32.230 12:35:14 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=130023424 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:32.230 12:35:14 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=130023424 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:32.230 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:32.230 fio-3.35 00:15:32.230 Starting 1 process 00:15:38.817 00:15:38.817 fio_test: (groupid=0, jobs=1): err= 0: pid=63344: Tue Oct 1 12:35:20 2024 00:15:38.817 read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(124MiB/2981msec) 00:15:38.817 clat (usec): 
min=74, max=2159, avg=92.70, stdev=24.14 00:15:38.817 lat (usec): min=74, max=2159, avg=92.79, stdev=24.15 00:15:38.817 clat percentiles (usec): 00:15:38.817 | 1.00th=[ 79], 5.00th=[ 80], 10.00th=[ 80], 20.00th=[ 82], 00:15:38.817 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 90], 00:15:38.817 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 118], 00:15:38.817 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 169], 99.95th=[ 351], 00:15:38.817 | 99.99th=[ 1156] 00:15:38.817 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(124MiB/2753msec); 0 zone resets 00:15:38.817 clat (usec): min=62, max=2188, avg=85.01, stdev=31.02 00:15:38.817 lat (usec): min=63, max=2189, avg=85.93, stdev=31.18 00:15:38.817 clat percentiles (usec): 00:15:38.817 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 73], 00:15:38.817 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 86], 00:15:38.817 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 114], 00:15:38.817 | 99.00th=[ 133], 99.50th=[ 143], 99.90th=[ 204], 99.95th=[ 265], 00:15:38.817 | 99.99th=[ 1860] 00:15:38.817 bw ( KiB/s): min=22112, max=49136, per=91.77%, avg=42325.33, stdev=10122.55, samples=6 00:15:38.817 iops : min= 5528, max=12284, avg=10581.33, stdev=2530.64, samples=6 00:15:38.817 lat (usec) : 100=81.53%, 250=18.42%, 500=0.03%, 750=0.01%, 1000=0.01% 00:15:38.817 lat (msec) : 2=0.01%, 4=0.01% 00:15:38.817 cpu : usr=3.05%, sys=6.72%, ctx=65190, majf=0, minf=781 00:15:38.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.817 issued rwts: total=31744,31744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.817 00:15:38.817 Run status group 0 (all jobs): 00:15:38.817 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=124MiB (130MB), run=2981-2981msec 00:15:38.817 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=124MiB (130MB), run=2753-2753msec 00:15:38.817 00:15:38.817 Disk stats (read/write): 00:15:38.817 nbd0: ios=30765/31744, merge=0/0, ticks=2663/2471, in_queue=5134, util=98.31% 00:15:38.817 12:35:20 -- lvol/thin_provisioning.sh@120 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@51 -- # local i 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@41 -- # break 00:15:38.817 12:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.817 12:35:20 -- lvol/thin_provisioning.sh@121 -- # rpc_cmd bdev_lvol_delete 
ba7ec2f7-0386-4f94-8a57-3cfe82d9918d 00:15:38.817 12:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.817 12:35:20 -- common/autotest_common.sh@10 -- # set +x 00:15:38.817 12:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.817 12:35:20 -- lvol/thin_provisioning.sh@122 -- # rpc_cmd bdev_lvol_delete_lvstore -u 5a67263f-afde-4c9d-a9bd-0c724315cb6c 00:15:38.817 12:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.817 12:35:20 -- common/autotest_common.sh@10 -- # set +x 00:15:38.817 12:35:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.817 12:35:20 -- lvol/thin_provisioning.sh@123 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:38.817 12:35:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.817 12:35:20 -- common/autotest_common.sh@10 -- # set +x 00:15:38.817 ************************************ 00:15:38.817 END TEST test_thin_lvol_check_integrity 00:15:38.817 ************************************ 00:15:38.817 12:35:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.817 00:15:38.817 real 0m7.065s 00:15:38.817 user 0m0.790s 00:15:38.817 sys 0m0.521s 00:15:38.817 12:35:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.817 12:35:21 -- common/autotest_common.sh@10 -- # set +x 00:15:38.817 12:35:21 -- lvol/thin_provisioning.sh@235 -- # run_test test_thin_lvol_resize test_thin_lvol_resize 00:15:38.817 12:35:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:38.817 12:35:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:38.817 12:35:21 -- common/autotest_common.sh@10 -- # set +x 00:15:38.817 ************************************ 00:15:38.817 START TEST test_thin_lvol_resize 00:15:38.817 ************************************ 00:15:38.817 12:35:21 -- common/autotest_common.sh@1104 -- # test_thin_lvol_resize 00:15:38.817 12:35:21 -- lvol/thin_provisioning.sh@128 -- # rpc_cmd bdev_malloc_create 128 512 00:15:38.817 12:35:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.817 12:35:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.076 12:35:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@128 -- # malloc_name=Malloc3 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@129 -- # rpc_cmd bdev_lvol_create_lvstore Malloc3 lvs_test 00:15:39.076 12:35:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.076 12:35:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.076 12:35:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@129 -- # lvs_uuid=de24d1ca-4a94-43df-80e0-65ec91d50833 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@133 -- # round_down 62 00:15:39.076 12:35:21 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:39.076 12:35:21 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:15:39.076 12:35:21 -- lvol/common.sh@36 -- # echo 60 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@133 -- # lvol_size_mb=60 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@134 -- # lvol_size=62914560 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@135 -- # rpc_cmd bdev_lvol_create -u de24d1ca-4a94-43df-80e0-65ec91d50833 lvol_test 60 -t 00:15:39.076 12:35:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.076 12:35:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.076 12:35:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@135 -- # 
lvol_uuid=7318e97b-4d4e-4461-a23c-df5ec7bea207 00:15:39.076 12:35:21 -- lvol/thin_provisioning.sh@138 -- # nbd_start_disks /var/tmp/spdk.sock 7318e97b-4d4e-4461-a23c-df5ec7bea207 /dev/nbd0 00:15:39.076 12:35:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.076 12:35:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('7318e97b-4d4e-4461-a23c-df5ec7bea207') 00:15:39.076 12:35:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.076 12:35:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:39.077 12:35:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.077 12:35:21 -- bdev/nbd_common.sh@12 -- # local i 00:15:39.077 12:35:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.077 12:35:21 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.077 12:35:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 7318e97b-4d4e-4461-a23c-df5ec7bea207 /dev/nbd0 00:15:39.335 /dev/nbd0 00:15:39.335 12:35:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.335 12:35:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.335 12:35:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:39.335 12:35:21 -- common/autotest_common.sh@857 -- # local i 00:15:39.335 12:35:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:39.335 12:35:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:39.335 12:35:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:39.335 12:35:21 -- common/autotest_common.sh@861 -- # break 00:15:39.335 12:35:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:39.335 12:35:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:39.335 12:35:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:39.335 1+0 records in 00:15:39.335 1+0 records out 00:15:39.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284942 s, 14.4 MB/s 00:15:39.335 12:35:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:39.335 12:35:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:39.335 12:35:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:39.335 12:35:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:39.335 12:35:21 -- common/autotest_common.sh@877 -- # return 0 00:15:39.335 12:35:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.335 12:35:21 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.335 12:35:21 -- lvol/thin_provisioning.sh@139 -- # run_fio_test /dev/nbd0 0 62914560 write 0xcc 00:15:39.335 12:35:21 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:39.335 12:35:21 -- lvol/common.sh@41 -- # local offset=0 00:15:39.335 12:35:21 -- lvol/common.sh@42 -- # local size=62914560 00:15:39.335 12:35:21 -- lvol/common.sh@43 -- # local rw=write 00:15:39.335 12:35:21 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:39.335 12:35:21 -- lvol/common.sh@45 -- # local extra_params= 00:15:39.335 12:35:21 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:39.335 12:35:21 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:39.335 12:35:21 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:39.335 12:35:21 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 
--verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:39.335 12:35:21 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:39.335 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:39.335 fio-3.35 00:15:39.335 Starting 1 process 00:15:42.645 00:15:42.645 fio_test: (groupid=0, jobs=1): err= 0: pid=63454: Tue Oct 1 12:35:24 2024 00:15:42.645 read: IOPS=12.8k, BW=49.8MiB/s (52.3MB/s)(60.0MiB/1204msec) 00:15:42.645 clat (usec): min=61, max=784, avg=77.02, stdev=17.94 00:15:42.645 lat (usec): min=61, max=784, avg=77.11, stdev=17.95 00:15:42.645 clat percentiles (usec): 00:15:42.645 | 1.00th=[ 65], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 68], 00:15:42.645 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 74], 00:15:42.645 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 103], 00:15:42.645 | 99.00th=[ 127], 99.50th=[ 143], 99.90th=[ 251], 99.95th=[ 318], 00:15:42.645 | 99.99th=[ 545] 00:15:42.645 write: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(60.0MiB/1384msec); 0 zone resets 00:15:42.645 clat (usec): min=61, max=755, avg=88.32, stdev=19.19 00:15:42.645 lat (usec): min=62, max=755, avg=89.23, stdev=19.37 00:15:42.645 clat percentiles (usec): 00:15:42.645 | 1.00th=[ 65], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 73], 00:15:42.645 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 93], 00:15:42.645 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 117], 00:15:42.645 | 99.00th=[ 139], 99.50th=[ 151], 99.90th=[ 249], 99.95th=[ 289], 00:15:42.645 | 99.99th=[ 494] 00:15:42.645 bw ( KiB/s): min=34032, max=45688, per=92.27%, avg=40960.00, stdev=6131.52, samples=3 00:15:42.645 iops : min= 8508, max=11422, avg=10240.00, stdev=1532.88, samples=3 00:15:42.645 lat (usec) : 100=86.83%, 250=13.07%, 500=0.09%, 750=0.01%, 1000=0.01% 00:15:42.645 cpu : usr=3.44%, sys=7.89%, ctx=52798, majf=0, minf=395 00:15:42.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:42.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.645 issued rwts: total=15360,15360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:42.645 00:15:42.645 Run status group 0 (all jobs): 00:15:42.645 READ: bw=49.8MiB/s (52.3MB/s), 49.8MiB/s-49.8MiB/s (52.3MB/s-52.3MB/s), io=60.0MiB (62.9MB), run=1204-1204msec 00:15:42.645 WRITE: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=60.0MiB (62.9MB), run=1384-1384msec 00:15:42.645 00:15:42.645 Disk stats (read/write): 00:15:42.645 nbd0: ios=12942/15360, merge=0/0, ticks=910/1239, in_queue=2149, util=96.02% 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@140 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@51 -- # local i 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:42.645 12:35:24 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@41 -- # break 00:15:42.645 12:35:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@143 -- # rpc_cmd bdev_lvol_get_lvstores -u de24d1ca-4a94-43df-80e0-65ec91d50833 00:15:42.645 12:35:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.645 12:35:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.645 12:35:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@143 -- # lvs='[ 00:15:42.645 { 00:15:42.645 "uuid": "de24d1ca-4a94-43df-80e0-65ec91d50833", 00:15:42.645 "name": "lvs_test", 00:15:42.645 "base_bdev": "Malloc3", 00:15:42.645 "total_data_clusters": 31, 00:15:42.645 "free_clusters": 16, 00:15:42.645 "block_size": 512, 00:15:42.645 "cluster_size": 4194304 00:15:42.645 } 00:15:42.645 ]' 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@144 -- # jq -r '.[0].free_clusters' 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@144 -- # free_clusters_start=16 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@146 -- # round_down 124 00:15:42.645 12:35:24 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:42.645 12:35:24 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:15:42.645 12:35:24 -- lvol/common.sh@36 -- # echo 124 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@146 -- # lvol_size_full_mb=124 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@147 -- # lvol_size_full=130023424 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@148 -- # rpc_cmd bdev_lvol_resize 7318e97b-4d4e-4461-a23c-df5ec7bea207 124 00:15:42.645 12:35:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.645 12:35:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.645 12:35:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@152 -- # rpc_cmd bdev_get_bdevs -b 7318e97b-4d4e-4461-a23c-df5ec7bea207 00:15:42.645 12:35:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.645 12:35:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.645 12:35:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.645 12:35:24 -- lvol/thin_provisioning.sh@152 -- # lvol='[ 00:15:42.645 { 00:15:42.645 "name": "7318e97b-4d4e-4461-a23c-df5ec7bea207", 00:15:42.645 "aliases": [ 00:15:42.645 "lvs_test/lvol_test" 00:15:42.645 ], 00:15:42.645 "product_name": "Logical Volume", 00:15:42.645 "block_size": 512, 00:15:42.645 "num_blocks": 253952, 00:15:42.645 "uuid": "7318e97b-4d4e-4461-a23c-df5ec7bea207", 00:15:42.645 "assigned_rate_limits": { 00:15:42.645 "rw_ios_per_sec": 0, 00:15:42.645 "rw_mbytes_per_sec": 0, 00:15:42.645 "r_mbytes_per_sec": 0, 00:15:42.645 "w_mbytes_per_sec": 0 00:15:42.645 }, 00:15:42.645 "claimed": false, 00:15:42.645 "zoned": false, 00:15:42.645 "supported_io_types": { 00:15:42.645 "read": true, 00:15:42.645 "write": true, 00:15:42.645 "unmap": true, 00:15:42.645 "write_zeroes": true, 00:15:42.645 "flush": false, 00:15:42.645 "reset": true, 00:15:42.645 "compare": false, 00:15:42.645 "compare_and_write": false, 00:15:42.645 "abort": false, 00:15:42.645 "nvme_admin": false, 00:15:42.645 
"nvme_io": false 00:15:42.645 }, 00:15:42.645 "memory_domains": [ 00:15:42.645 { 00:15:42.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.645 "dma_device_type": 2 00:15:42.645 } 00:15:42.646 ], 00:15:42.646 "driver_specific": { 00:15:42.646 "lvol": { 00:15:42.646 "lvol_store_uuid": "de24d1ca-4a94-43df-80e0-65ec91d50833", 00:15:42.646 "base_bdev": "Malloc3", 00:15:42.646 "thin_provision": true, 00:15:42.646 "snapshot": false, 00:15:42.646 "clone": false, 00:15:42.646 "esnap_clone": false 00:15:42.646 } 00:15:42.646 } 00:15:42.646 } 00:15:42.646 ]' 00:15:42.646 12:35:24 -- lvol/thin_provisioning.sh@153 -- # jq -r '.[0].block_size' 00:15:42.646 12:35:24 -- lvol/thin_provisioning.sh@153 -- # '[' 512 = 512 ']' 00:15:42.646 12:35:24 -- lvol/thin_provisioning.sh@154 -- # jq -r '.[0].num_blocks' 00:15:42.646 12:35:24 -- lvol/thin_provisioning.sh@154 -- # '[' 253952 = 253952 ']' 00:15:42.646 12:35:24 -- lvol/thin_provisioning.sh@157 -- # rpc_cmd bdev_lvol_get_lvstores -u de24d1ca-4a94-43df-80e0-65ec91d50833 00:15:42.646 12:35:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.646 12:35:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.646 12:35:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.646 12:35:25 -- lvol/thin_provisioning.sh@157 -- # lvs='[ 00:15:42.646 { 00:15:42.646 "uuid": "de24d1ca-4a94-43df-80e0-65ec91d50833", 00:15:42.646 "name": "lvs_test", 00:15:42.646 "base_bdev": "Malloc3", 00:15:42.646 "total_data_clusters": 31, 00:15:42.646 "free_clusters": 16, 00:15:42.646 "block_size": 512, 00:15:42.646 "cluster_size": 4194304 00:15:42.646 } 00:15:42.646 ]' 00:15:42.646 12:35:25 -- lvol/thin_provisioning.sh@158 -- # jq -r '.[0].free_clusters' 00:15:42.646 12:35:25 -- lvol/thin_provisioning.sh@158 -- # free_clusters_resize=16 00:15:42.646 12:35:25 -- lvol/thin_provisioning.sh@159 -- # '[' 16 == 16 ']' 00:15:42.646 12:35:25 -- lvol/thin_provisioning.sh@163 -- # nbd_start_disks /var/tmp/spdk.sock 7318e97b-4d4e-4461-a23c-df5ec7bea207 /dev/nbd0 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('7318e97b-4d4e-4461-a23c-df5ec7bea207') 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@12 -- # local i 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.646 12:35:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 7318e97b-4d4e-4461-a23c-df5ec7bea207 /dev/nbd0 00:15:42.904 /dev/nbd0 00:15:42.904 12:35:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.904 12:35:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.904 12:35:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:42.904 12:35:25 -- common/autotest_common.sh@857 -- # local i 00:15:42.904 12:35:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:42.904 12:35:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:42.904 12:35:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:42.904 12:35:25 -- common/autotest_common.sh@861 -- # break 00:15:42.904 12:35:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:42.904 12:35:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:42.904 
12:35:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:42.904 1+0 records in 00:15:42.904 1+0 records out 00:15:42.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334417 s, 12.2 MB/s 00:15:42.904 12:35:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:42.904 12:35:25 -- common/autotest_common.sh@874 -- # size=4096 00:15:42.904 12:35:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:42.904 12:35:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:42.904 12:35:25 -- common/autotest_common.sh@877 -- # return 0 00:15:42.904 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.904 12:35:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.905 12:35:25 -- lvol/thin_provisioning.sh@164 -- # run_fio_test /dev/nbd0 0 130023424 write 0xcc 00:15:42.905 12:35:25 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:42.905 12:35:25 -- lvol/common.sh@41 -- # local offset=0 00:15:42.905 12:35:25 -- lvol/common.sh@42 -- # local size=130023424 00:15:42.905 12:35:25 -- lvol/common.sh@43 -- # local rw=write 00:15:42.905 12:35:25 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:42.905 12:35:25 -- lvol/common.sh@45 -- # local extra_params= 00:15:42.905 12:35:25 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:42.905 12:35:25 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:42.905 12:35:25 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:42.905 12:35:25 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=130023424 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:42.905 12:35:25 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=130023424 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:42.905 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:42.905 fio-3.35 00:15:42.905 Starting 1 process 00:15:49.489 00:15:49.489 fio_test: (groupid=0, jobs=1): err= 0: pid=63526: Tue Oct 1 12:35:31 2024 00:15:49.489 read: IOPS=10.9k, BW=42.6MiB/s (44.7MB/s)(124MiB/2910msec) 00:15:49.489 clat (usec): min=58, max=3637, avg=90.37, stdev=32.25 00:15:49.489 lat (usec): min=58, max=3638, avg=90.46, stdev=32.25 00:15:49.489 clat percentiles (usec): 00:15:49.489 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 78], 00:15:49.489 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 90], 00:15:49.489 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 115], 95.00th=[ 122], 00:15:49.489 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 194], 99.95th=[ 351], 00:15:49.489 | 99.99th=[ 635] 00:15:49.489 write: IOPS=11.1k, BW=43.2MiB/s (45.4MB/s)(124MiB/2867msec); 0 zone resets 00:15:49.489 clat (usec): min=65, max=2467, avg=88.64, stdev=27.26 00:15:49.489 lat (usec): min=69, max=2467, avg=89.57, stdev=27.36 00:15:49.489 clat percentiles (usec): 00:15:49.489 | 1.00th=[ 73], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:15:49.489 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 89], 00:15:49.489 | 70.00th=[ 94], 80.00th=[ 98], 90.00th=[ 110], 95.00th=[ 117], 00:15:49.489 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 190], 99.95th=[ 245], 00:15:49.489 | 99.99th=[ 1418] 00:15:49.489 bw ( 
KiB/s): min=31192, max=46008, per=95.53%, avg=42310.33, stdev=5532.00, samples=6 00:15:49.489 iops : min= 7798, max=11502, avg=10577.50, stdev=1382.95, samples=6 00:15:49.489 lat (usec) : 100=78.88%, 250=21.07%, 500=0.03%, 750=0.01% 00:15:49.489 lat (msec) : 2=0.01%, 4=0.01% 00:15:49.489 cpu : usr=3.53%, sys=7.13%, ctx=69348, majf=0, minf=782 00:15:49.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:49.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.489 issued rwts: total=31744,31744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:49.489 00:15:49.489 Run status group 0 (all jobs): 00:15:49.489 READ: bw=42.6MiB/s (44.7MB/s), 42.6MiB/s-42.6MiB/s (44.7MB/s-44.7MB/s), io=124MiB (130MB), run=2910-2910msec 00:15:49.489 WRITE: bw=43.2MiB/s (45.4MB/s), 43.2MiB/s-43.2MiB/s (45.4MB/s-45.4MB/s), io=124MiB (130MB), run=2867-2867msec 00:15:49.489 00:15:49.489 Disk stats (read/write): 00:15:49.489 nbd0: ios=30051/31744, merge=0/0, ticks=2525/2568, in_queue=5094, util=98.43% 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@165 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@51 -- # local i 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@41 -- # break 00:15:49.489 12:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@168 -- # rpc_cmd bdev_lvol_get_lvstores -u de24d1ca-4a94-43df-80e0-65ec91d50833 00:15:49.489 12:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.489 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:49.489 12:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@168 -- # lvs='[ 00:15:49.489 { 00:15:49.489 "uuid": "de24d1ca-4a94-43df-80e0-65ec91d50833", 00:15:49.489 "name": "lvs_test", 00:15:49.489 "base_bdev": "Malloc3", 00:15:49.489 "total_data_clusters": 31, 00:15:49.489 "free_clusters": 0, 00:15:49.489 "block_size": 512, 00:15:49.489 "cluster_size": 4194304 00:15:49.489 } 00:15:49.489 ]' 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@169 -- # jq -r '.[0].free_clusters' 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@169 -- # free_clusters_start=0 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@170 -- # '[' 0 == 0 ']' 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@173 -- # round_down 31 00:15:49.489 12:35:31 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:49.489 12:35:31 -- lvol/common.sh@33 -- # '[' -n '' ']' 
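The free_clusters numbers in this test follow directly from the 4 MiB cluster size: the Malloc3 store exposes 31 data clusters, the initial 60 MiB fill pins 60/4 = 15 of them (free_clusters 16), the resize to 124 MiB allocates nothing by itself, and the full 124 MiB write that just finished pins all 31 (free_clusters 0); the shrink to 28 MiB that follows keeps 28/4 = 7 clusters and releases the rest (free_clusters 24). A minimal sketch of the same accounting check outside the harness, assuming a target on the default socket, jq on the PATH, and the UUIDs printed earlier in this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVS=de24d1ca-4a94-43df-80e0-65ec91d50833     # lvstore UUID from this run
  CLUSTER_MB=4                                 # LVS_DEFAULT_CLUSTER_SIZE_MB in lvol/common.sh
  WRITTEN_MB=124                               # how much of the thin volume has been written so far

  lvs_json=$($RPC -s /var/tmp/spdk.sock bdev_lvol_get_lvstores -u "$LVS")
  total=$(jq -r '.[0].total_data_clusters' <<< "$lvs_json")
  free=$(jq -r '.[0].free_clusters' <<< "$lvs_json")
  # 124 MiB written / 4 MiB clusters = 31 clusters pinned, so used should equal expected.
  echo "used=$(( total - free )) expected=$(( WRITTEN_MB / CLUSTER_MB ))"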
00:15:49.489 12:35:31 -- lvol/common.sh@36 -- # echo 28 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@173 -- # lvol_size_quarter_mb=28 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@174 -- # rpc_cmd bdev_lvol_resize 7318e97b-4d4e-4461-a23c-df5ec7bea207 28 00:15:49.489 12:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.489 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:49.489 12:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@177 -- # rpc_cmd bdev_lvol_get_lvstores -u de24d1ca-4a94-43df-80e0-65ec91d50833 00:15:49.489 12:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.489 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:49.489 12:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.489 12:35:31 -- lvol/thin_provisioning.sh@177 -- # lvs='[ 00:15:49.489 { 00:15:49.489 "uuid": "de24d1ca-4a94-43df-80e0-65ec91d50833", 00:15:49.489 "name": "lvs_test", 00:15:49.490 "base_bdev": "Malloc3", 00:15:49.490 "total_data_clusters": 31, 00:15:49.490 "free_clusters": 24, 00:15:49.490 "block_size": 512, 00:15:49.490 "cluster_size": 4194304 00:15:49.490 } 00:15:49.490 ]' 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@178 -- # jq -r '.[0].free_clusters' 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@178 -- # free_clusters_resize_quarter=24 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@179 -- # free_clusters_expected=24 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@180 -- # '[' 24 == 24 ']' 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@182 -- # rpc_cmd bdev_lvol_delete 7318e97b-4d4e-4461-a23c-df5ec7bea207 00:15:49.490 12:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.490 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:49.490 12:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@183 -- # rpc_cmd bdev_lvol_delete_lvstore -u de24d1ca-4a94-43df-80e0-65ec91d50833 00:15:49.490 12:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.490 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:49.490 12:35:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.490 12:35:31 -- lvol/thin_provisioning.sh@184 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:49.490 12:35:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.490 12:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 ************************************ 00:15:49.755 END TEST test_thin_lvol_resize 00:15:49.755 ************************************ 00:15:49.755 12:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.755 00:15:49.755 real 0m10.789s 00:15:49.755 user 0m1.696s 00:15:49.755 sys 0m0.878s 00:15:49.755 12:35:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.755 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@236 -- # run_test test_thin_overprovisioning test_thin_overprovisioning 00:15:49.755 12:35:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:49.755 12:35:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:49.755 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 ************************************ 00:15:49.755 START TEST test_thin_overprovisioning 00:15:49.755 ************************************ 00:15:49.755 12:35:32 -- common/autotest_common.sh@1104 -- # test_thin_overprovisioning 00:15:49.755 12:35:32 -- 
lvol/thin_provisioning.sh@188 -- # rpc_cmd bdev_malloc_create 128 512 00:15:49.755 12:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.755 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 12:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@188 -- # malloc_name=Malloc4 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@189 -- # rpc_cmd bdev_lvol_create_lvstore Malloc4 lvs_test 00:15:49.755 12:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.755 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 12:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@189 -- # lvs_uuid=27743670-3212-4370-b34a-a67bfdd1a991 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@193 -- # round_down 124 00:15:49.755 12:35:32 -- lvol/common.sh@32 -- # local CLUSTER_SIZE_MB=4 00:15:49.755 12:35:32 -- lvol/common.sh@33 -- # '[' -n '' ']' 00:15:49.755 12:35:32 -- lvol/common.sh@36 -- # echo 124 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@193 -- # lvol_size_mb=124 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@194 -- # lvol_size=130023424 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@195 -- # rpc_cmd bdev_lvol_create -u 27743670-3212-4370-b34a-a67bfdd1a991 lvol_test1 124 -t 00:15:49.755 12:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.755 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 12:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@195 -- # lvol_uuid1=61ff9554-b043-4c86-aa9b-1eb5b2df93cf 00:15:49.755 12:35:32 -- lvol/thin_provisioning.sh@196 -- # rpc_cmd bdev_lvol_create -u 27743670-3212-4370-b34a-a67bfdd1a991 lvol_test2 124 -t 00:15:49.755 12:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.755 12:35:32 -- common/autotest_common.sh@10 -- # set +x 00:15:50.014 12:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.014 12:35:32 -- lvol/thin_provisioning.sh@196 -- # lvol_uuid2=92c3b7f1-c754-40c7-82e7-9a34714657e7 00:15:50.014 12:35:32 -- lvol/thin_provisioning.sh@198 -- # nbd_start_disks /var/tmp/spdk.sock 61ff9554-b043-4c86-aa9b-1eb5b2df93cf /dev/nbd0 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('61ff9554-b043-4c86-aa9b-1eb5b2df93cf') 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@12 -- # local i 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.014 12:35:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 61ff9554-b043-4c86-aa9b-1eb5b2df93cf /dev/nbd0 00:15:50.272 /dev/nbd0 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.272 12:35:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:50.272 12:35:32 -- common/autotest_common.sh@857 -- # local i 00:15:50.272 12:35:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:50.272 12:35:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:50.272 12:35:32 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:50.272 12:35:32 -- common/autotest_common.sh@861 -- # break 00:15:50.272 12:35:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:50.272 12:35:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:50.272 12:35:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:50.272 1+0 records in 00:15:50.272 1+0 records out 00:15:50.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346729 s, 11.8 MB/s 00:15:50.272 12:35:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:50.272 12:35:32 -- common/autotest_common.sh@874 -- # size=4096 00:15:50.272 12:35:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:50.272 12:35:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:50.272 12:35:32 -- common/autotest_common.sh@877 -- # return 0 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.272 12:35:32 -- lvol/thin_provisioning.sh@199 -- # nbd_start_disks /var/tmp/spdk.sock 92c3b7f1-c754-40c7-82e7-9a34714657e7 /dev/nbd1 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('92c3b7f1-c754-40c7-82e7-9a34714657e7') 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@12 -- # local i 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.272 12:35:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 92c3b7f1-c754-40c7-82e7-9a34714657e7 /dev/nbd1 00:15:50.530 /dev/nbd1 00:15:50.530 12:35:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.530 12:35:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.530 12:35:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:50.530 12:35:32 -- common/autotest_common.sh@857 -- # local i 00:15:50.530 12:35:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:50.530 12:35:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:50.530 12:35:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:50.530 12:35:32 -- common/autotest_common.sh@861 -- # break 00:15:50.530 12:35:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:50.530 12:35:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:50.530 12:35:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/lvol/nbdtest bs=4096 count=1 iflag=direct 00:15:50.530 1+0 records in 00:15:50.530 1+0 records out 00:15:50.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538015 s, 7.6 MB/s 00:15:50.531 12:35:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:50.531 12:35:32 -- common/autotest_common.sh@874 -- # size=4096 00:15:50.531 12:35:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/nbdtest 00:15:50.531 12:35:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:50.531 12:35:32 -- common/autotest_common.sh@877 -- # return 0 
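At this point both thin volumes are mapped to host block devices; the waitfornbd polling above just confirmed that the kernel has registered nbd0 and nbd1 before fio starts driving them. A condensed sketch of the same mapping step, assuming the nbd kernel module is loaded, the target listens on /var/tmp/spdk.sock, and the lvol UUIDs are the ones created earlier in this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk.sock nbd_start_disk 61ff9554-b043-4c86-aa9b-1eb5b2df93cf /dev/nbd0   # lvs_test/lvol_test1
  $RPC -s /var/tmp/spdk.sock nbd_start_disk 92c3b7f1-c754-40c7-82e7-9a34714657e7 /dev/nbd1   # lvs_test/lvol_test2
  # Wait until both devices show up in /proc/partitions before issuing I/O.
  for d in nbd0 nbd1; do
      until grep -q -w "$d" /proc/partitions; do sleep 0.1; done
  done
  # Teardown mirrors the nbd_stop_disks calls later in this log:
  #   $RPC -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0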
00:15:50.531 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.531 12:35:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.531 12:35:32 -- lvol/thin_provisioning.sh@201 -- # fill_size=60 00:15:50.531 12:35:32 -- lvol/thin_provisioning.sh@202 -- # fill_size=62914560 00:15:50.531 12:35:32 -- lvol/thin_provisioning.sh@203 -- # run_fio_test /dev/nbd0 0 62914560 write 0xcc 00:15:50.531 12:35:32 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:50.531 12:35:32 -- lvol/common.sh@41 -- # local offset=0 00:15:50.531 12:35:32 -- lvol/common.sh@42 -- # local size=62914560 00:15:50.531 12:35:32 -- lvol/common.sh@43 -- # local rw=write 00:15:50.531 12:35:32 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:50.531 12:35:32 -- lvol/common.sh@45 -- # local extra_params= 00:15:50.531 12:35:32 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:50.531 12:35:32 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:50.531 12:35:32 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:50.531 12:35:32 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:50.531 12:35:32 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:50.531 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:50.531 fio-3.35 00:15:50.531 Starting 1 process 00:15:53.818 00:15:53.818 fio_test: (groupid=0, jobs=1): err= 0: pid=63654: Tue Oct 1 12:35:36 2024 00:15:53.818 read: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(60.0MiB/1419msec) 00:15:53.818 clat (usec): min=73, max=906, avg=91.18, stdev=17.28 00:15:53.818 lat (usec): min=73, max=906, avg=91.27, stdev=17.29 00:15:53.818 clat percentiles (usec): 00:15:53.818 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 80], 00:15:53.818 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 90], 00:15:53.818 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 121], 00:15:53.818 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 229], 99.95th=[ 265], 00:15:53.818 | 99.99th=[ 367] 00:15:53.818 write: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(60.0MiB/1491msec); 0 zone resets 00:15:53.818 clat (usec): min=73, max=652, avg=95.37, stdev=20.04 00:15:53.818 lat (usec): min=74, max=682, avg=96.23, stdev=20.36 00:15:53.818 clat percentiles (usec): 00:15:53.818 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 80], 00:15:53.818 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 94], 00:15:53.818 | 70.00th=[ 101], 80.00th=[ 111], 90.00th=[ 124], 95.00th=[ 135], 00:15:53.818 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 217], 99.95th=[ 241], 00:15:53.818 | 99.99th=[ 322] 00:15:53.818 bw ( KiB/s): min=37800, max=43280, per=99.40%, avg=40960.00, stdev=2834.93, samples=3 00:15:53.818 iops : min= 9450, max=10820, avg=10240.00, stdev=708.73, samples=3 00:15:53.818 lat (usec) : 100=73.55%, 250=26.39%, 500=0.05%, 750=0.01%, 1000=0.01% 00:15:53.818 cpu : usr=2.37%, sys=5.71%, ctx=30821, majf=0, minf=393 00:15:53.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:53.818 issued rwts: total=15360,15360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.818 00:15:53.818 Run status group 0 (all jobs): 00:15:53.818 READ: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=60.0MiB (62.9MB), run=1419-1419msec 00:15:53.818 WRITE: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=60.0MiB (62.9MB), run=1491-1491msec 00:15:53.818 00:15:53.818 Disk stats (read/write): 00:15:53.818 nbd0: ios=15252/15360, merge=0/0, ticks=1317/1376, in_queue=2693, util=96.92% 00:15:53.818 12:35:36 -- lvol/thin_provisioning.sh@206 -- # run_fio_test /dev/nbd1 0 62914560 write 0xcc 00:15:53.818 12:35:36 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:15:53.818 12:35:36 -- lvol/common.sh@41 -- # local offset=0 00:15:53.818 12:35:36 -- lvol/common.sh@42 -- # local size=62914560 00:15:53.818 12:35:36 -- lvol/common.sh@43 -- # local rw=write 00:15:53.818 12:35:36 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:53.818 12:35:36 -- lvol/common.sh@45 -- # local extra_params= 00:15:53.818 12:35:36 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:53.818 12:35:36 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:53.818 12:35:36 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:53.818 12:35:36 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:53.818 12:35:36 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd1 --offset=0 --size=62914560 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:53.818 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:53.818 fio-3.35 00:15:53.818 Starting 1 process 00:15:57.106 00:15:57.106 fio_test: (groupid=0, jobs=1): err= 0: pid=63690: Tue Oct 1 12:35:39 2024 00:15:57.106 read: IOPS=9993, BW=39.0MiB/s (40.9MB/s)(60.0MiB/1537msec) 00:15:57.106 clat (usec): min=74, max=3285, avg=98.84, stdev=31.38 00:15:57.106 lat (usec): min=74, max=3285, avg=98.92, stdev=31.38 00:15:57.106 clat percentiles (usec): 00:15:57.106 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:15:57.106 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 94], 60.00th=[ 99], 00:15:57.106 | 70.00th=[ 104], 80.00th=[ 113], 90.00th=[ 124], 95.00th=[ 133], 00:15:57.106 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 206], 99.95th=[ 253], 00:15:57.106 | 99.99th=[ 404] 00:15:57.106 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(60.0MiB/1461msec); 0 zone resets 00:15:57.106 clat (usec): min=76, max=666, avg=93.58, stdev=17.85 00:15:57.106 lat (usec): min=77, max=686, avg=94.41, stdev=18.23 00:15:57.106 clat percentiles (usec): 00:15:57.106 | 1.00th=[ 79], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 82], 00:15:57.106 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 90], 00:15:57.106 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 129], 00:15:57.106 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 221], 99.95th=[ 249], 00:15:57.106 | 99.99th=[ 510] 00:15:57.106 bw ( KiB/s): min=38952, max=42504, per=97.40%, avg=40960.00, stdev=1820.89, samples=3 00:15:57.106 iops : min= 9738, max=10626, avg=10240.00, stdev=455.22, samples=3 00:15:57.106 lat (usec) : 100=69.97%, 250=29.99%, 500=0.04%, 750=0.01% 00:15:57.106 lat (msec) : 
4=0.01% 00:15:57.106 cpu : usr=3.34%, sys=5.97%, ctx=30748, majf=0, minf=396 00:15:57.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.106 issued rwts: total=15360,15360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.106 00:15:57.106 Run status group 0 (all jobs): 00:15:57.106 READ: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=60.0MiB (62.9MB), run=1537-1537msec 00:15:57.106 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=60.0MiB (62.9MB), run=1461-1461msec 00:15:57.106 00:15:57.106 Disk stats (read/write): 00:15:57.106 nbd1: ios=14563/15360, merge=0/0, ticks=1339/1323, in_queue=2662, util=97.06% 00:15:57.106 12:35:39 -- lvol/thin_provisioning.sh@210 -- # offset=62914560 00:15:57.106 12:35:39 -- lvol/thin_provisioning.sh@211 -- # fill_size_rest=67108864 00:15:57.106 12:35:39 -- lvol/thin_provisioning.sh@212 -- # run_fio_test /dev/nbd1 62914560 67108864 write 0xcc 00:15:57.106 12:35:39 -- lvol/common.sh@40 -- # local file=/dev/nbd1 00:15:57.106 12:35:39 -- lvol/common.sh@41 -- # local offset=62914560 00:15:57.106 12:35:39 -- lvol/common.sh@42 -- # local size=67108864 00:15:57.106 12:35:39 -- lvol/common.sh@43 -- # local rw=write 00:15:57.106 12:35:39 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:57.106 12:35:39 -- lvol/common.sh@45 -- # local extra_params= 00:15:57.106 12:35:39 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:57.106 12:35:39 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:57.106 12:35:39 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:57.106 12:35:39 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd1 --offset=62914560 --size=67108864 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:57.106 12:35:39 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd1 --offset=62914560 --size=67108864 --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:57.106 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:57.106 fio-3.35 00:15:57.106 Starting 1 process 00:15:57.106 fio: io_u error on file /dev/nbd1: Input/output error: write offset=67108864, buflen=4096 00:15:57.106 fio: pid=63726, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:57.365 00:15:57.365 fio_test: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=63726: Tue Oct 1 12:35:39 2024 00:15:57.365 write: IOPS=8760, BW=34.2MiB/s (35.8MB/s)(4096KiB/117msec); 0 zone resets 00:15:57.365 clat (usec): min=86, max=1467, avg=109.36, stdev=48.01 00:15:57.365 lat (usec): min=87, max=1467, avg=110.45, stdev=48.41 00:15:57.365 clat percentiles (usec): 00:15:57.365 | 1.00th=[ 88], 5.00th=[ 89], 10.00th=[ 90], 20.00th=[ 91], 00:15:57.365 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 108], 00:15:57.365 | 70.00th=[ 114], 80.00th=[ 123], 90.00th=[ 139], 95.00th=[ 151], 00:15:57.365 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 416], 99.95th=[ 1467], 00:15:57.365 | 99.99th=[ 1467] 00:15:57.365 lat (usec) : 100=44.20%, 250=55.51%, 500=0.10% 
00:15:57.365 lat (msec) : 2=0.10% 00:15:57.365 cpu : usr=0.86%, sys=10.34%, ctx=1029, majf=0, minf=50 00:15:57.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.365 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.365 issued rwts: total=0,1025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.366 00:15:57.366 Run status group 0 (all jobs): 00:15:57.366 WRITE: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=4096KiB (4194kB), run=117-117msec 00:15:57.366 00:15:57.366 Disk stats (read/write): 00:15:57.366 nbd1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% 00:15:57.366 12:35:39 -- lvol/thin_provisioning.sh@215 -- # run_fio_test /dev/nbd0 0 62914560 read 0xcc 00:15:57.366 12:35:39 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:57.366 12:35:39 -- lvol/common.sh@41 -- # local offset=0 00:15:57.366 12:35:39 -- lvol/common.sh@42 -- # local size=62914560 00:15:57.366 12:35:39 -- lvol/common.sh@43 -- # local rw=read 00:15:57.366 12:35:39 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:57.366 12:35:39 -- lvol/common.sh@45 -- # local extra_params= 00:15:57.366 12:35:39 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:57.366 12:35:39 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:57.366 12:35:39 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:57.366 12:35:39 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:57.366 12:35:39 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=0 --size=62914560 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:57.366 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:57.366 fio-3.35 00:15:57.366 Starting 1 process 00:15:59.268 00:15:59.268 fio_test: (groupid=0, jobs=1): err= 0: pid=63735: Tue Oct 1 12:35:41 2024 00:15:59.268 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(60.0MiB/1530msec) 00:15:59.268 clat (usec): min=76, max=327, avg=98.25, stdev=20.36 00:15:59.268 lat (usec): min=76, max=327, avg=98.37, stdev=20.36 00:15:59.268 clat percentiles (usec): 00:15:59.268 | 1.00th=[ 79], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 82], 00:15:59.268 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 97], 00:15:59.268 | 70.00th=[ 104], 80.00th=[ 117], 90.00th=[ 129], 95.00th=[ 139], 00:15:59.268 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 241], 00:15:59.268 | 99.99th=[ 326] 00:15:59.269 bw ( KiB/s): min=35880, max=42480, per=99.95%, avg=40138.67, stdev=3694.20, samples=3 00:15:59.269 iops : min= 8970, max=10620, avg=10034.67, stdev=923.55, samples=3 00:15:59.269 lat (usec) : 100=65.11%, 250=34.84%, 500=0.05% 00:15:59.269 cpu : usr=2.62%, sys=6.08%, ctx=21422, majf=0, minf=11 00:15:59.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.269 issued rwts: total=15360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.269 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:15:59.269 00:15:59.269 Run status group 0 (all jobs): 00:15:59.269 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=60.0MiB (62.9MB), run=1530-1530msec 00:15:59.269 00:15:59.269 Disk stats (read/write): 00:15:59.269 nbd0: ios=13883/0, merge=0/0, ticks=1278/0, in_queue=1278, util=93.36% 00:15:59.269 12:35:41 -- lvol/thin_provisioning.sh@216 -- # run_fio_test /dev/nbd0 62914560 67108864 read 0x00 00:15:59.269 12:35:41 -- lvol/common.sh@40 -- # local file=/dev/nbd0 00:15:59.269 12:35:41 -- lvol/common.sh@41 -- # local offset=62914560 00:15:59.269 12:35:41 -- lvol/common.sh@42 -- # local size=67108864 00:15:59.269 12:35:41 -- lvol/common.sh@43 -- # local rw=read 00:15:59.269 12:35:41 -- lvol/common.sh@44 -- # local pattern=0x00 00:15:59.269 12:35:41 -- lvol/common.sh@45 -- # local extra_params= 00:15:59.269 12:35:41 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:59.269 12:35:41 -- lvol/common.sh@48 -- # [[ -n 0x00 ]] 00:15:59.269 12:35:41 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:15:59.269 12:35:41 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/nbd0 --offset=62914560 --size=67108864 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0' 00:15:59.269 12:35:41 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/nbd0 --offset=62914560 --size=67108864 --rw=read --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0x00 --verify_state_save=0 00:15:59.269 fio_test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:59.269 fio-3.35 00:15:59.269 Starting 1 process 00:16:00.645 00:16:00.645 fio_test: (groupid=0, jobs=1): err= 0: pid=63754: Tue Oct 1 12:35:43 2024 00:16:00.645 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(64.0MiB/1480msec) 00:16:00.645 clat (usec): min=60, max=1830, avg=88.95, stdev=28.60 00:16:00.645 lat (usec): min=60, max=1830, avg=89.07, stdev=28.62 00:16:00.645 clat percentiles (usec): 00:16:00.645 | 1.00th=[ 64], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 67], 00:16:00.645 | 30.00th=[ 69], 40.00th=[ 76], 50.00th=[ 87], 60.00th=[ 97], 00:16:00.645 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 129], 00:16:00.645 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 233], 99.95th=[ 297], 00:16:00.645 | 99.99th=[ 1074] 00:16:00.645 bw ( KiB/s): min=39560, max=41320, per=91.33%, avg=40440.00, stdev=1244.51, samples=2 00:16:00.645 iops : min= 9890, max=10330, avg=10110.00, stdev=311.13, samples=2 00:16:00.645 lat (usec) : 100=68.03%, 250=31.88%, 500=0.06%, 750=0.01% 00:16:00.645 lat (msec) : 2=0.02% 00:16:00.645 cpu : usr=3.04%, sys=6.09%, ctx=16740, majf=0, minf=11 00:16:00.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.645 issued rwts: total=16384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.645 00:16:00.645 Run status group 0 (all jobs): 00:16:00.645 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=64.0MiB (67.1MB), run=1480-1480msec 00:16:00.645 00:16:00.645 Disk stats (read/write): 00:16:00.645 nbd0: ios=15278/0, merge=0/0, ticks=1271/0, in_queue=1271, util=93.36% 
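The io_u error above is the expected overprovisioning outcome rather than a test failure: the store has 31 * 4 MiB = 124 MiB of data clusters (the same geometry as the Malloc3 store earlier), the two 60 MiB fills pin 15 + 15 = 30 of them, and the follow-up 64 MiB write starting at offset 62914560 only gets about one cluster's worth of 4 KiB writes done before no new cluster can be allocated, at which point the NBD device reports the failure to fio as an Input/output error at offset 67108864. The reads that follow still verify the 0xcc pattern across the allocated 60 MiB and all zeroes beyond it. A sketch of reproducing just the failing step with the same fio invocation, assuming /dev/nbd1 is still mapped as above:

  # Only one 4 MiB cluster is left unallocated, so this 64 MiB verify-write is
  # expected to abort part-way through with EIO.
  fio --name=fio_test --filename=/dev/nbd1 --offset=62914560 --size=67108864 \
      --rw=write --direct=1 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0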
00:16:00.645 12:35:43 -- lvol/thin_provisioning.sh@218 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:00.645 12:35:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.645 12:35:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.645 12:35:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.645 12:35:43 -- bdev/nbd_common.sh@51 -- # local i 00:16:00.645 12:35:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.645 12:35:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@41 -- # break 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.904 12:35:43 -- lvol/thin_provisioning.sh@219 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@51 -- # local i 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.904 12:35:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@41 -- # break 00:16:01.163 12:35:43 -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.163 12:35:43 -- lvol/thin_provisioning.sh@221 -- # rpc_cmd bdev_lvol_delete 92c3b7f1-c754-40c7-82e7-9a34714657e7 00:16:01.163 12:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.163 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 12:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.163 12:35:43 -- lvol/thin_provisioning.sh@222 -- # rpc_cmd bdev_lvol_delete 61ff9554-b043-4c86-aa9b-1eb5b2df93cf 00:16:01.163 12:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.163 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 12:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.163 12:35:43 -- lvol/thin_provisioning.sh@223 -- # rpc_cmd bdev_lvol_delete_lvstore -u 27743670-3212-4370-b34a-a67bfdd1a991 00:16:01.163 12:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.163 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 12:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.163 12:35:43 -- lvol/thin_provisioning.sh@224 -- # rpc_cmd bdev_malloc_delete Malloc4 00:16:01.163 12:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.163 12:35:43 -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.731 ************************************ 00:16:01.731 END TEST test_thin_overprovisioning 00:16:01.731 ************************************ 00:16:01.731 12:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.731 00:16:01.731 real 0m11.829s 00:16:01.731 user 0m1.669s 00:16:01.731 sys 0m0.843s 00:16:01.731 12:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.731 12:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:01.731 12:35:43 -- lvol/thin_provisioning.sh@238 -- # trap - SIGINT SIGTERM EXIT 00:16:01.731 12:35:43 -- lvol/thin_provisioning.sh@239 -- # killprocess 62960 00:16:01.731 12:35:43 -- common/autotest_common.sh@926 -- # '[' -z 62960 ']' 00:16:01.731 12:35:43 -- common/autotest_common.sh@930 -- # kill -0 62960 00:16:01.731 12:35:43 -- common/autotest_common.sh@931 -- # uname 00:16:01.731 12:35:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:01.731 12:35:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62960 00:16:01.731 killing process with pid 62960 00:16:01.731 12:35:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:01.731 12:35:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:01.731 12:35:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62960' 00:16:01.731 12:35:44 -- common/autotest_common.sh@945 -- # kill 62960 00:16:01.731 12:35:44 -- common/autotest_common.sh@950 -- # wait 62960 00:16:03.635 00:16:03.635 real 0m53.241s 00:16:03.635 user 0m45.672s 00:16:03.635 sys 0m15.342s 00:16:03.635 12:35:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.635 ************************************ 00:16:03.635 END TEST lvol_provisioning 00:16:03.635 ************************************ 00:16:03.635 12:35:45 -- common/autotest_common.sh@10 -- # set +x 00:16:03.635 12:35:45 -- lvol/lvol.sh@21 -- # run_test lvol_esnap /home/vagrant/spdk_repo/spdk/test/lvol/esnap/esnap 00:16:03.635 12:35:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:03.635 12:35:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.635 12:35:45 -- common/autotest_common.sh@10 -- # set +x 00:16:03.635 ************************************ 00:16:03.635 START TEST lvol_esnap 00:16:03.635 ************************************ 00:16:03.635 12:35:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/esnap/esnap 00:16:03.635 00:16:03.635 00:16:03.635 CUnit - A unit testing framework for C - Version 2.1-3 00:16:03.635 http://cunit.sourceforge.net/ 00:16:03.635 00:16:03.635 00:16:03.635 Suite: esnap_io 00:16:03.635 Test: esnap_clone_io ...passed 00:16:03.635 Test: esnap_hotplug ...[2024-10-01 12:35:46.106607] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev esnap_malloc already claimed: type read_many_write_none by module lvol 00:16:03.635 passed 00:16:03.893 Test: esnap_remove_degraded ...[2024-10-01 12:35:46.194724] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 640:_vbdev_lvol_destroy: *ERROR*: Cannot delete lvol 00:16:03.893 passed 00:16:03.893 Test: late_delete ...passed 00:16:03.893 00:16:03.893 Run Summary: Type Total Ran Passed Failed Inactive 00:16:03.893 suites 1 1 n/a 0 0 00:16:03.893 tests 4 4 4 0 0 00:16:03.893 asserts 590 590 590 0 n/a 00:16:03.893 00:16:03.893 Elapsed time = 0.117 seconds 00:16:03.893 ************************************ 00:16:03.893 END TEST lvol_esnap 00:16:03.893 ************************************ 
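killprocess above is the harness's standard teardown between suites: it checks that the recorded pid still belongs to the SPDK reactor, logs the kill, sends the signal, and waits for the process to exit so the next suite can start its own target. A simplified sketch of that shutdown sequence, assuming the pid variable recorded when this suite launched spdk_tgt:

  spdk_pid=62960                             # pid captured when spdk_tgt was started
  if kill -0 "$spdk_pid" 2>/dev/null; then
      echo "killing process with pid $spdk_pid"
      kill "$spdk_pid"                       # default SIGTERM lets the reactor shut down cleanly
      wait "$spdk_pid" 2>/dev/null || true   # reap it before the next target starts
  fi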
00:16:03.893 00:16:03.893 real 0m0.351s 00:16:03.893 user 0m0.116s 00:16:03.893 sys 0m0.108s 00:16:03.893 12:35:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.893 12:35:46 -- common/autotest_common.sh@10 -- # set +x 00:16:03.893 12:35:46 -- lvol/lvol.sh@22 -- # run_test lvol_external_snapshot /home/vagrant/spdk_repo/spdk/test/lvol/external_snapshot.sh 00:16:03.893 12:35:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:03.893 12:35:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.893 12:35:46 -- common/autotest_common.sh@10 -- # set +x 00:16:03.893 ************************************ 00:16:03.893 START TEST lvol_external_snapshot 00:16:03.893 ************************************ 00:16:03.893 12:35:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/lvol/external_snapshot.sh 00:16:04.151 * Looking for test storage... 00:16:04.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/lvol 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:04.151 12:35:46 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:04.151 12:35:46 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:04.151 12:35:46 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:04.151 12:35:46 -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:04.151 12:35:46 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:04.151 12:35:46 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:04.151 12:35:46 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:04.151 12:35:46 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:04.151 12:35:46 -- bdev/nbd_common.sh@6 -- # set -e 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@11 -- # set -u 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@13 -- # g_nbd_dev=INVALID 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@14 -- # g_cluster_size=INVALID 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@15 -- # g_block_size=INVALID 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@463 -- # spdk_pid=63896 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@462 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@464 -- # trap 'killprocess "$spdk_pid"; rm -f "$testdir/aio_bdev_0"; exit 1' SIGINT SIGTERM SIGPIPE EXIT 00:16:04.151 12:35:46 -- lvol/external_snapshot.sh@465 -- # waitforlisten 63896 00:16:04.151 12:35:46 -- common/autotest_common.sh@819 -- # '[' -z 63896 ']' 00:16:04.151 12:35:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.151 12:35:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:04.151 12:35:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.151 12:35:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:04.151 12:35:46 -- common/autotest_common.sh@10 -- # set +x 00:16:04.152 [2024-10-01 12:35:46.572695] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:04.152 [2024-10-01 12:35:46.572866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63896 ] 00:16:04.409 [2024-10-01 12:35:46.742506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.409 [2024-10-01 12:35:46.907281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:04.409 [2024-10-01 12:35:46.907746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.785 12:35:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:05.785 12:35:48 -- common/autotest_common.sh@852 -- # return 0 00:16:05.785 12:35:48 -- lvol/external_snapshot.sh@466 -- # modprobe nbd 00:16:05.785 12:35:48 -- lvol/external_snapshot.sh@468 -- # run_test test_esnap_reload test_esnap_reload 00:16:05.785 12:35:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:05.785 12:35:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.785 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:05.785 ************************************ 00:16:05.785 START TEST test_esnap_reload 00:16:05.785 ************************************ 00:16:05.785 12:35:48 -- common/autotest_common.sh@1104 -- # test_esnap_reload 00:16:05.785 12:35:48 -- lvol/external_snapshot.sh@18 -- # local bs_dev esnap_dev 00:16:05.785 12:35:48 -- lvol/external_snapshot.sh@19 -- # local block_size=512 00:16:05.785 12:35:48 -- lvol/external_snapshot.sh@20 -- # local esnap_size_mb=1 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@21 -- # local lvs_cluster_size=16384 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@22 -- # local lvs_uuid esnap_uuid eclone_uuid snap_uuid clone_uuid uuid 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@23 -- # local aio_bdev=test_esnap_reload_aio0 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@27 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@28 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@29 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:06.044 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.044 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.044 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@29 -- # bs_dev=test_esnap_reload_aio0 00:16:06.044 12:35:48 -- lvol/external_snapshot.sh@30 -- # rpc_cmd bdev_lvol_create_lvstore -c 16384 test_esnap_reload_aio0 lvs_test 00:16:06.044 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.044 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@30 -- # lvs_uuid=ea0c924d-685b-4883-be22-1a4f1936dede 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@33 -- # esnap_uuid=e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@34 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:06.304 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.304 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@34 -- # esnap_dev=Malloc0 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@35 -- # rpc_cmd bdev_lvol_clone_bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd lvs_test eclone1 00:16:06.304 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.304 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@35 -- # eclone_uuid=3191b7a8-b1a2-4286-bb3c-97439f3b6696 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@38 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:06.304 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.304 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 [2024-10-01 12:35:48.800683] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:06.304 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.304 12:35:48 -- lvol/external_snapshot.sh@39 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.304 12:35:48 -- common/autotest_common.sh@640 -- # local es=0 00:16:06.304 12:35:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.304 12:35:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:06.304 12:35:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:06.304 12:35:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:06.304 12:35:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:06.304 12:35:48 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.304 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.304 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 request: 00:16:06.563 { 00:16:06.563 "lvs_name": "lvs_test", 00:16:06.563 "method": "bdev_lvol_get_lvstores", 00:16:06.563 "req_id": 1 00:16:06.563 } 00:16:06.563 Got JSON-RPC error response 00:16:06.563 response: 00:16:06.563 { 00:16:06.563 "code": -19, 00:16:06.563 "message": "No such device" 00:16:06.563 } 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:06.563 12:35:48 -- common/autotest_common.sh@643 -- # es=1 00:16:06.563 12:35:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:06.563 12:35:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:06.563 12:35:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@42 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:06.563 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@42 -- # bs_dev=test_esnap_reload_aio0 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@43 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.563 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@43 -- # lvs_uuid='[ 00:16:06.563 { 00:16:06.563 "uuid": "ea0c924d-685b-4883-be22-1a4f1936dede", 00:16:06.563 "name": "lvs_test", 00:16:06.563 
"base_bdev": "test_esnap_reload_aio0", 00:16:06.563 "total_data_clusters": 19199, 00:16:06.563 "free_clusters": 19199, 00:16:06.563 "block_size": 512, 00:16:06.563 "cluster_size": 16384 00:16:06.563 } 00:16:06.563 ]' 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@44 -- # jq -r '.[].name' 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@44 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone1 00:16:06.563 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@44 -- # uuid=3191b7a8-b1a2-4286-bb3c-97439f3b6696 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@45 -- # [[ 3191b7a8-b1a2-4286-bb3c-97439f3b6696 == \3\1\9\1\b\7\a\8\-\b\1\a\2\-\4\2\8\6\-\b\b\3\c\-\9\7\4\3\9\f\3\b\6\6\9\6 ]] 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@48 -- # rpc_cmd bdev_lvol_snapshot 3191b7a8-b1a2-4286-bb3c-97439f3b6696 snap1 00:16:06.563 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@48 -- # snap_uuid=829100b5-53ca-46d2-bcd9-bd59df3784f5 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@49 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:06.563 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 [2024-10-01 12:35:48.940714] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@50 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.563 12:35:48 -- common/autotest_common.sh@640 -- # local es=0 00:16:06.563 12:35:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.563 12:35:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:06.563 12:35:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:06.563 12:35:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:06.563 12:35:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:06.563 12:35:48 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.563 12:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 request: 00:16:06.563 { 00:16:06.563 "lvs_name": "lvs_test", 00:16:06.563 "method": "bdev_lvol_get_lvstores", 00:16:06.563 "req_id": 1 00:16:06.563 } 00:16:06.563 Got JSON-RPC error response 00:16:06.563 response: 00:16:06.563 { 00:16:06.563 "code": -19, 00:16:06.563 "message": "No such device" 00:16:06.563 } 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:06.563 12:35:48 -- common/autotest_common.sh@643 -- # es=1 00:16:06.563 12:35:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:06.563 12:35:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:06.563 12:35:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:06.563 12:35:48 -- lvol/external_snapshot.sh@51 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:06.563 12:35:48 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@51 -- # bs_dev=test_esnap_reload_aio0 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@52 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.563 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@52 -- # lvs_uuid='[ 00:16:06.563 { 00:16:06.563 "uuid": "ea0c924d-685b-4883-be22-1a4f1936dede", 00:16:06.563 "name": "lvs_test", 00:16:06.563 "base_bdev": "test_esnap_reload_aio0", 00:16:06.563 "total_data_clusters": 19199, 00:16:06.563 "free_clusters": 19199, 00:16:06.563 "block_size": 512, 00:16:06.563 "cluster_size": 16384 00:16:06.563 } 00:16:06.563 ]' 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@53 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone1 00:16:06.563 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@53 -- # jq -r '.[].name' 00:16:06.563 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@53 -- # uuid=3191b7a8-b1a2-4286-bb3c-97439f3b6696 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@54 -- # [[ 3191b7a8-b1a2-4286-bb3c-97439f3b6696 == \3\1\9\1\b\7\a\8\-\b\1\a\2\-\4\2\8\6\-\b\b\3\c\-\9\7\4\3\9\f\3\b\6\6\9\6 ]] 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@55 -- # rpc_cmd bdev_get_bdevs -b lvs_test/snap1 00:16:06.563 12:35:49 -- lvol/external_snapshot.sh@55 -- # jq -r '.[].name' 00:16:06.563 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.563 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.563 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@55 -- # uuid=829100b5-53ca-46d2-bcd9-bd59df3784f5 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@56 -- # [[ 829100b5-53ca-46d2-bcd9-bd59df3784f5 == \8\2\9\1\0\0\b\5\-\5\3\c\a\-\4\6\d\2\-\b\c\d\9\-\b\d\5\9\d\f\3\7\8\4\f\5 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@59 -- # rpc_cmd bdev_lvol_clone 829100b5-53ca-46d2-bcd9-bd59df3784f5 clone1 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@59 -- # clone_uuid=f0932ed0-ab7d-41a9-9226-0c787745ce3d 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@60 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 [2024-10-01 12:35:49.127675] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@61 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.822 12:35:49 -- common/autotest_common.sh@640 -- # local es=0 00:16:06.822 12:35:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd 
bdev_lvol_get_lvstores -l lvs_test 00:16:06.822 12:35:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:06.822 12:35:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:06.822 12:35:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:06.822 12:35:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:06.822 12:35:49 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 request: 00:16:06.822 { 00:16:06.822 "lvs_name": "lvs_test", 00:16:06.822 "method": "bdev_lvol_get_lvstores", 00:16:06.822 "req_id": 1 00:16:06.822 } 00:16:06.822 Got JSON-RPC error response 00:16:06.822 response: 00:16:06.822 { 00:16:06.822 "code": -19, 00:16:06.822 "message": "No such device" 00:16:06.822 } 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:06.822 12:35:49 -- common/autotest_common.sh@643 -- # es=1 00:16:06.822 12:35:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:06.822 12:35:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:06.822 12:35:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@62 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@62 -- # bs_dev=test_esnap_reload_aio0 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@63 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@63 -- # lvs_uuid='[ 00:16:06.822 { 00:16:06.822 "uuid": "ea0c924d-685b-4883-be22-1a4f1936dede", 00:16:06.822 "name": "lvs_test", 00:16:06.822 "base_bdev": "test_esnap_reload_aio0", 00:16:06.822 "total_data_clusters": 19199, 00:16:06.822 "free_clusters": 19199, 00:16:06.822 "block_size": 512, 00:16:06.822 "cluster_size": 16384 00:16:06.822 } 00:16:06.822 ]' 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@64 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone1 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@64 -- # jq -r '.[].name' 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@64 -- # uuid=3191b7a8-b1a2-4286-bb3c-97439f3b6696 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@65 -- # [[ 3191b7a8-b1a2-4286-bb3c-97439f3b6696 == \3\1\9\1\b\7\a\8\-\b\1\a\2\-\4\2\8\6\-\b\b\3\c\-\9\7\4\3\9\f\3\b\6\6\9\6 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@66 -- # rpc_cmd bdev_get_bdevs -b lvs_test/snap1 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@66 -- # jq -r '.[].name' 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
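For reference, the reload cycle traced above reduces to the following rpc.py sequence. This is only a sketch assembled from the commands visible in the trace (the 400M backing file, 512-byte block size, 16384-byte cluster size and the lvs_test name are the values this test uses; the RPC socket is the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Back the lvstore with an AIO bdev on a plain file, as the test does.
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512
  $rpc bdev_lvol_create_lvstore -c 16384 test_esnap_reload_aio0 lvs_test
  # Dropping the backing bdev closes the lvstore ...
  $rpc bdev_aio_delete test_esnap_reload_aio0
  $rpc bdev_lvol_get_lvstores -l lvs_test || echo "expected: No such device"
  # ... and re-creating it brings the same lvstore (same UUID) back.
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512
  $rpc bdev_lvol_get_lvstores -l lvs_test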
00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@66 -- # uuid=829100b5-53ca-46d2-bcd9-bd59df3784f5 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@67 -- # [[ 829100b5-53ca-46d2-bcd9-bd59df3784f5 == \8\2\9\1\0\0\b\5\-\5\3\c\a\-\4\6\d\2\-\b\c\d\9\-\b\d\5\9\d\f\3\7\8\4\f\5 ]] 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@68 -- # jq -r '.[].name' 00:16:06.822 12:35:49 -- lvol/external_snapshot.sh@68 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone1 00:16:06.822 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.822 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.822 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@68 -- # uuid=f0932ed0-ab7d-41a9-9226-0c787745ce3d 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@69 -- # [[ f0932ed0-ab7d-41a9-9226-0c787745ce3d == \f\0\9\3\2\e\d\0\-\a\b\7\d\-\4\1\a\9\-\9\2\2\6\-\0\c\7\8\7\7\4\5\c\e\3\d ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@71 -- # rpc_cmd bdev_lvol_delete f0932ed0-ab7d-41a9-9226-0c787745ce3d 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@72 -- # rpc_cmd bdev_lvol_delete 829100b5-53ca-46d2-bcd9-bd59df3784f5 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@73 -- # rpc_cmd bdev_lvol_delete 3191b7a8-b1a2-4286-bb3c-97439f3b6696 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@74 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 [2024-10-01 12:35:49.384250] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:07.081 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@75 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 ************************************ 00:16:07.081 END TEST test_esnap_reload 00:16:07.081 ************************************ 00:16:07.081 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 00:16:07.081 real 0m1.109s 00:16:07.081 user 0m0.363s 00:16:07.081 sys 0m0.095s 00:16:07.081 12:35:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@469 -- # run_test test_esnap_reload test_esnap_reload_missing 00:16:07.081 12:35:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:07.081 12:35:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 ************************************ 00:16:07.081 START TEST test_esnap_reload 
00:16:07.081 ************************************ 00:16:07.081 12:35:49 -- common/autotest_common.sh@1104 -- # test_esnap_reload_missing 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@79 -- # local bs_dev esnap_dev 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@80 -- # local block_size=512 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@81 -- # local esnap_size_mb=1 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@82 -- # local lvs_cluster_size=16384 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@83 -- # local lvs_uuid esnap_uuid eclone_uuid snap_uuid clone_uuid uuid 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@84 -- # local aio_bdev=test_esnap_reload_aio0 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@85 -- # local lvols 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@89 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@90 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@91 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.081 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@91 -- # bs_dev=test_esnap_reload_aio0 00:16:07.081 12:35:49 -- lvol/external_snapshot.sh@92 -- # rpc_cmd bdev_lvol_create_lvstore -c 16384 test_esnap_reload_aio0 lvs_test 00:16:07.081 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.081 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.648 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@92 -- # lvs_uuid=8a1cb080-657a-4a14-a3c7-0cf71ada542b 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@97 -- # esnap_uuid=e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@98 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:07.648 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.648 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.648 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@98 -- # esnap_dev=Malloc1 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@99 -- # rpc_cmd bdev_lvol_clone_bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd lvs_test eclone 00:16:07.648 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.648 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.648 12:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@99 -- # eclone_uuid=5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:07.648 12:35:49 -- lvol/external_snapshot.sh@100 -- # rpc_cmd bdev_lvol_get_lvols 00:16:07.648 12:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.648 12:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.648 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@100 -- # lvols='[ 00:16:07.648 { 00:16:07.648 "alias": "lvs_test/eclone", 00:16:07.648 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:07.648 "name": "eclone", 00:16:07.648 "is_thin_provisioned": true, 00:16:07.648 "is_snapshot": false, 00:16:07.648 "is_clone": false, 00:16:07.648 
"is_esnap_clone": true, 00:16:07.648 "is_degraded": false, 00:16:07.648 "lvs": { 00:16:07.648 "name": "lvs_test", 00:16:07.648 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:07.648 } 00:16:07.648 } 00:16:07.648 ]' 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@101 -- # jq -r '. | length' 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@101 -- # [[ 1 == \1 ]] 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@102 -- # jq -r '.[] | select(.name == "eclone").is_esnap_clone' 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@102 -- # [[ true == \t\r\u\e ]] 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@103 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@103 -- # [[ false == \f\a\l\s\e ]] 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@106 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:07.648 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.648 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.648 [2024-10-01 12:35:50.152985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:07.648 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.648 12:35:50 -- lvol/external_snapshot.sh@107 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.908 12:35:50 -- common/autotest_common.sh@640 -- # local es=0 00:16:07.908 12:35:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.908 12:35:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.908 12:35:50 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 request: 00:16:07.908 { 00:16:07.908 "lvs_name": "lvs_test", 00:16:07.908 "method": "bdev_lvol_get_lvstores", 00:16:07.908 "req_id": 1 00:16:07.908 } 00:16:07.908 Got JSON-RPC error response 00:16:07.908 response: 00:16:07.908 { 00:16:07.908 "code": -19, 00:16:07.908 "message": "No such device" 00:16:07.908 } 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:07.908 12:35:50 -- common/autotest_common.sh@643 -- # es=1 00:16:07.908 12:35:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:07.908 12:35:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:07.908 12:35:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@108 -- # rpc_cmd bdev_malloc_delete e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@113 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 [2024-10-01 12:35:50.215085] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:07.908 [2024-10-01 12:35:50.215188] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:07.908 [2024-10-01 12:35:50.215242] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: blob is degraded: deferring bdev creation 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@113 -- # bs_dev=test_esnap_reload_aio0 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@114 -- # NOT rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:07.908 12:35:50 -- common/autotest_common.sh@640 -- # local es=0 00:16:07.908 12:35:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:07.908 12:35:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.908 12:35:50 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 [2024-10-01 12:35:50.224577] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: lvs_test/eclone 00:16:07.908 request: 00:16:07.908 { 00:16:07.908 "name": "lvs_test/eclone", 00:16:07.908 "method": "bdev_get_bdevs", 00:16:07.908 "req_id": 1 00:16:07.908 } 00:16:07.908 Got JSON-RPC error response 00:16:07.908 response: 00:16:07.908 { 00:16:07.908 "code": -19, 00:16:07.908 "message": "No such device" 00:16:07.908 } 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:07.908 12:35:50 -- common/autotest_common.sh@643 -- # es=1 00:16:07.908 12:35:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:07.908 12:35:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:07.908 12:35:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@115 -- # NOT rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:07.908 12:35:50 -- common/autotest_common.sh@640 -- # local es=0 00:16:07.908 12:35:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:07.908 12:35:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:07.908 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.908 12:35:50 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 [2024-10-01 12:35:50.236602] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:07.908 request: 00:16:07.908 { 00:16:07.908 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:07.908 "method": "bdev_get_bdevs", 00:16:07.908 "req_id": 1 00:16:07.908 } 
00:16:07.908 Got JSON-RPC error response 00:16:07.908 response: 00:16:07.908 { 00:16:07.908 "code": -19, 00:16:07.908 "message": "No such device" 00:16:07.908 } 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:07.908 12:35:50 -- common/autotest_common.sh@643 -- # es=1 00:16:07.908 12:35:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:07.908 12:35:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:07.908 12:35:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@116 -- # rpc_cmd bdev_lvol_get_lvols 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@116 -- # lvols='[ 00:16:07.908 { 00:16:07.908 "alias": "lvs_test/eclone", 00:16:07.908 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:07.908 "name": "eclone", 00:16:07.908 "is_thin_provisioned": true, 00:16:07.908 "is_snapshot": false, 00:16:07.908 "is_clone": false, 00:16:07.908 "is_esnap_clone": true, 00:16:07.908 "is_degraded": true, 00:16:07.908 "lvs": { 00:16:07.908 "name": "lvs_test", 00:16:07.908 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:07.908 } 00:16:07.908 } 00:16:07.908 ]' 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@117 -- # jq -r '. | length' 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@117 -- # [[ 1 == \1 ]] 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@118 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@118 -- # [[ true == \t\r\u\e ]] 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@123 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:07.908 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.908 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 [2024-10-01 12:35:50.360716] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:07.908 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.908 12:35:50 -- lvol/external_snapshot.sh@124 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.908 12:35:50 -- common/autotest_common.sh@640 -- # local es=0 00:16:07.908 12:35:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.908 12:35:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:07.909 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.909 12:35:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:07.909 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.909 12:35:50 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.909 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.909 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 request: 00:16:07.909 { 00:16:07.909 "lvs_name": "lvs_test", 00:16:07.909 "method": "bdev_lvol_get_lvstores", 00:16:07.909 "req_id": 1 00:16:07.909 } 00:16:07.909 Got JSON-RPC error response 00:16:07.909 response: 00:16:07.909 { 00:16:07.909 "code": -19, 00:16:07.909 "message": "No such device" 00:16:07.909 } 00:16:07.909 12:35:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:07.909 12:35:50 -- common/autotest_common.sh@643 -- # es=1 
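The degraded-eclone checks above can be reproduced by hand against the running target; a rough outline, reusing the RPCs and jq filters that appear in the trace (UUIDs are the ones reported by this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # With the external snapshot bdev missing, the lvol is still listed but degraded,
  # and its bdev has not been created yet.
  $rpc bdev_lvol_get_lvols | jq -r '.[] | select(.name == "eclone").is_degraded'   # -> true
  $rpc bdev_get_bdevs -b lvs_test/eclone || echo "expected: No such device"
  # Once a bdev with the esnap's UUID exists again and the backing AIO bdev is back,
  # the deferred lvol bdev gets created and bdev_get_bdevs succeeds.
  $rpc bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512
  $rpc bdev_get_bdevs -b lvs_test/eclone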
00:16:07.909 12:35:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:07.909 12:35:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:07.909 12:35:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:07.909 12:35:50 -- lvol/external_snapshot.sh@125 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:07.909 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.909 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.909 12:35:50 -- lvol/external_snapshot.sh@125 -- # esnap_dev=Malloc2 00:16:07.909 12:35:50 -- lvol/external_snapshot.sh@126 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:07.909 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.909 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.909 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.909 12:35:50 -- lvol/external_snapshot.sh@126 -- # bs_dev=test_esnap_reload_aio0 00:16:07.909 12:35:50 -- lvol/external_snapshot.sh@127 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:07.909 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.909 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 [ 00:16:08.168 { 00:16:08.168 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.168 "name": "lvs_test", 00:16:08.168 "base_bdev": "test_esnap_reload_aio0", 00:16:08.168 "total_data_clusters": 19199, 00:16:08.168 "free_clusters": 19199, 00:16:08.168 "block_size": 512, 00:16:08.168 "cluster_size": 16384 00:16:08.168 } 00:16:08.168 ] 00:16:08.168 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@128 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:08.168 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.168 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 [ 00:16:08.168 { 00:16:08.168 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.168 "aliases": [ 00:16:08.168 "lvs_test/eclone" 00:16:08.168 ], 00:16:08.168 "product_name": "Logical Volume", 00:16:08.168 "block_size": 512, 00:16:08.168 "num_blocks": 2048, 00:16:08.168 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.168 "assigned_rate_limits": { 00:16:08.168 "rw_ios_per_sec": 0, 00:16:08.168 "rw_mbytes_per_sec": 0, 00:16:08.168 "r_mbytes_per_sec": 0, 00:16:08.168 "w_mbytes_per_sec": 0 00:16:08.168 }, 00:16:08.168 "claimed": false, 00:16:08.168 "zoned": false, 00:16:08.168 "supported_io_types": { 00:16:08.168 "read": true, 00:16:08.168 "write": true, 00:16:08.168 "unmap": true, 00:16:08.168 "write_zeroes": true, 00:16:08.168 "flush": false, 00:16:08.168 "reset": true, 00:16:08.168 "compare": false, 00:16:08.168 "compare_and_write": false, 00:16:08.168 "abort": false, 00:16:08.168 "nvme_admin": false, 00:16:08.168 "nvme_io": false 00:16:08.168 }, 00:16:08.168 "memory_domains": [ 00:16:08.168 { 00:16:08.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.168 "dma_device_type": 2 00:16:08.168 } 00:16:08.168 ], 00:16:08.168 "driver_specific": { 00:16:08.168 "lvol": { 00:16:08.168 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.168 "base_bdev": "test_esnap_reload_aio0", 00:16:08.168 "thin_provision": true, 00:16:08.168 "snapshot": false, 00:16:08.168 "clone": false, 00:16:08.168 "esnap_clone": true, 00:16:08.168 "external_snapshot_name": 
"e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:08.168 } 00:16:08.168 } 00:16:08.168 } 00:16:08.168 ] 00:16:08.168 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@129 -- # rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.168 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.168 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 [ 00:16:08.168 { 00:16:08.168 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.168 "aliases": [ 00:16:08.168 "lvs_test/eclone" 00:16:08.168 ], 00:16:08.168 "product_name": "Logical Volume", 00:16:08.168 "block_size": 512, 00:16:08.168 "num_blocks": 2048, 00:16:08.168 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.168 "assigned_rate_limits": { 00:16:08.168 "rw_ios_per_sec": 0, 00:16:08.168 "rw_mbytes_per_sec": 0, 00:16:08.168 "r_mbytes_per_sec": 0, 00:16:08.168 "w_mbytes_per_sec": 0 00:16:08.168 }, 00:16:08.168 "claimed": false, 00:16:08.168 "zoned": false, 00:16:08.168 "supported_io_types": { 00:16:08.168 "read": true, 00:16:08.168 "write": true, 00:16:08.168 "unmap": true, 00:16:08.168 "write_zeroes": true, 00:16:08.168 "flush": false, 00:16:08.168 "reset": true, 00:16:08.168 "compare": false, 00:16:08.168 "compare_and_write": false, 00:16:08.168 "abort": false, 00:16:08.168 "nvme_admin": false, 00:16:08.168 "nvme_io": false 00:16:08.168 }, 00:16:08.168 "memory_domains": [ 00:16:08.168 { 00:16:08.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.168 "dma_device_type": 2 00:16:08.168 } 00:16:08.168 ], 00:16:08.168 "driver_specific": { 00:16:08.168 "lvol": { 00:16:08.168 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.168 "base_bdev": "test_esnap_reload_aio0", 00:16:08.168 "thin_provision": true, 00:16:08.168 "snapshot": false, 00:16:08.168 "clone": false, 00:16:08.168 "esnap_clone": true, 00:16:08.168 "external_snapshot_name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:08.168 } 00:16:08.168 } 00:16:08.168 } 00:16:08.168 ] 00:16:08.168 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@130 -- # rpc_cmd bdev_lvol_get_lvols 00:16:08.168 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.168 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@130 -- # lvols='[ 00:16:08.168 { 00:16:08.168 "alias": "lvs_test/eclone", 00:16:08.168 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.168 "name": "eclone", 00:16:08.168 "is_thin_provisioned": true, 00:16:08.168 "is_snapshot": false, 00:16:08.168 "is_clone": false, 00:16:08.168 "is_esnap_clone": true, 00:16:08.168 "is_degraded": false, 00:16:08.168 "lvs": { 00:16:08.168 "name": "lvs_test", 00:16:08.168 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:08.168 } 00:16:08.168 } 00:16:08.168 ]' 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@131 -- # jq -r '. 
| length' 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@131 -- # [[ 1 == \1 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@132 -- # jq -r '.[] | select(.name == "eclone").is_esnap_clone' 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@132 -- # [[ true == \t\r\u\e ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@133 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@133 -- # [[ false == \f\a\l\s\e ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@138 -- # rpc_cmd bdev_lvol_set_read_only 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.168 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.168 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@139 -- # rpc_cmd bdev_lvol_clone 5f37a229-cdb8-4ab5-9185-ec95cc356372 clone 00:16:08.168 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.168 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@139 -- # clone_uuid=93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.168 12:35:50 -- lvol/external_snapshot.sh@140 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:08.168 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.168 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.168 [ 00:16:08.168 { 00:16:08.168 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.168 "aliases": [ 00:16:08.168 "lvs_test/clone" 00:16:08.168 ], 00:16:08.168 "product_name": "Logical Volume", 00:16:08.168 "block_size": 512, 00:16:08.168 "num_blocks": 2048, 00:16:08.168 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.168 "assigned_rate_limits": { 00:16:08.168 "rw_ios_per_sec": 0, 00:16:08.168 "rw_mbytes_per_sec": 0, 00:16:08.168 "r_mbytes_per_sec": 0, 00:16:08.168 "w_mbytes_per_sec": 0 00:16:08.168 }, 00:16:08.168 "claimed": false, 00:16:08.168 "zoned": false, 00:16:08.168 "supported_io_types": { 00:16:08.168 "read": true, 00:16:08.168 "write": true, 00:16:08.168 "unmap": true, 00:16:08.428 "write_zeroes": true, 00:16:08.428 "flush": false, 00:16:08.428 "reset": true, 00:16:08.428 "compare": false, 00:16:08.428 "compare_and_write": false, 00:16:08.428 "abort": false, 00:16:08.428 "nvme_admin": false, 00:16:08.428 "nvme_io": false 00:16:08.428 }, 00:16:08.428 "driver_specific": { 00:16:08.428 "lvol": { 00:16:08.428 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.428 "base_bdev": "test_esnap_reload_aio0", 00:16:08.428 "thin_provision": true, 00:16:08.428 "snapshot": false, 00:16:08.428 "clone": true, 00:16:08.428 "base_snapshot": "eclone", 00:16:08.428 "esnap_clone": false 00:16:08.428 } 00:16:08.428 } 00:16:08.428 } 00:16:08.428 ] 00:16:08.428 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.428 12:35:50 -- lvol/external_snapshot.sh@141 -- # rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.428 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.428 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.428 [ 00:16:08.428 { 00:16:08.428 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.428 "aliases": [ 00:16:08.428 "lvs_test/clone" 00:16:08.428 ], 00:16:08.428 "product_name": "Logical Volume", 00:16:08.428 "block_size": 512, 00:16:08.428 "num_blocks": 2048, 00:16:08.428 "uuid": 
"93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.428 "assigned_rate_limits": { 00:16:08.428 "rw_ios_per_sec": 0, 00:16:08.428 "rw_mbytes_per_sec": 0, 00:16:08.428 "r_mbytes_per_sec": 0, 00:16:08.428 "w_mbytes_per_sec": 0 00:16:08.428 }, 00:16:08.428 "claimed": false, 00:16:08.428 "zoned": false, 00:16:08.428 "supported_io_types": { 00:16:08.428 "read": true, 00:16:08.428 "write": true, 00:16:08.428 "unmap": true, 00:16:08.428 "write_zeroes": true, 00:16:08.428 "flush": false, 00:16:08.428 "reset": true, 00:16:08.428 "compare": false, 00:16:08.428 "compare_and_write": false, 00:16:08.428 "abort": false, 00:16:08.428 "nvme_admin": false, 00:16:08.428 "nvme_io": false 00:16:08.428 }, 00:16:08.428 "driver_specific": { 00:16:08.428 "lvol": { 00:16:08.428 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.428 "base_bdev": "test_esnap_reload_aio0", 00:16:08.428 "thin_provision": true, 00:16:08.428 "snapshot": false, 00:16:08.428 "clone": true, 00:16:08.428 "base_snapshot": "eclone", 00:16:08.428 "esnap_clone": false 00:16:08.428 } 00:16:08.428 } 00:16:08.428 } 00:16:08.428 ] 00:16:08.428 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.428 12:35:50 -- lvol/external_snapshot.sh@142 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:08.428 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.428 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.428 [2024-10-01 12:35:50.717963] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:08.428 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.428 12:35:50 -- lvol/external_snapshot.sh@143 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:08.428 12:35:50 -- common/autotest_common.sh@640 -- # local es=0 00:16:08.428 12:35:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:08.428 12:35:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:08.428 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.428 12:35:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:08.428 12:35:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.428 12:35:50 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:08.428 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.428 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.428 request: 00:16:08.428 { 00:16:08.428 "lvs_name": "lvs_test", 00:16:08.428 "method": "bdev_lvol_get_lvstores", 00:16:08.428 "req_id": 1 00:16:08.428 } 00:16:08.428 Got JSON-RPC error response 00:16:08.428 response: 00:16:08.428 { 00:16:08.428 "code": -19, 00:16:08.428 "message": "No such device" 00:16:08.428 } 00:16:08.428 12:35:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:08.428 12:35:50 -- common/autotest_common.sh@643 -- # es=1 00:16:08.428 12:35:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:08.428 12:35:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:08.428 12:35:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:08.428 12:35:50 -- lvol/external_snapshot.sh@144 -- # rpc_cmd bdev_malloc_delete e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:08.428 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.428 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.428 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:16:08.428 12:35:50 -- lvol/external_snapshot.sh@145 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:08.428 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.428 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.428 [2024-10-01 12:35:50.781378] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:08.428 [2024-10-01 12:35:50.781660] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:08.428 [2024-10-01 12:35:50.781732] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: blob is degraded: deferring bdev creation 00:16:08.428 [2024-10-01 12:35:50.781837] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:08.428 [2024-10-01 12:35:50.781884] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:08.429 [2024-10-01 12:35:50.781929] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 93beb3b2-99c3-4bdd-92ce-f2355cee5c87: blob is degraded: deferring bdev creation 00:16:08.429 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@145 -- # bs_dev=test_esnap_reload_aio0 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@146 -- # rpc_cmd bdev_lvol_get_lvols 00:16:08.429 12:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.429 12:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:08.429 12:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@146 -- # lvols='[ 00:16:08.429 { 00:16:08.429 "alias": "lvs_test/eclone", 00:16:08.429 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.429 "name": "eclone", 00:16:08.429 "is_thin_provisioned": true, 00:16:08.429 "is_snapshot": true, 00:16:08.429 "is_clone": false, 00:16:08.429 "is_esnap_clone": true, 00:16:08.429 "is_degraded": true, 00:16:08.429 "lvs": { 00:16:08.429 "name": "lvs_test", 00:16:08.429 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:08.429 } 00:16:08.429 }, 00:16:08.429 { 00:16:08.429 "alias": "lvs_test/clone", 00:16:08.429 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.429 "name": "clone", 00:16:08.429 "is_thin_provisioned": true, 00:16:08.429 "is_snapshot": false, 00:16:08.429 "is_clone": true, 00:16:08.429 "is_esnap_clone": false, 00:16:08.429 "is_degraded": true, 00:16:08.429 "lvs": { 00:16:08.429 "name": "lvs_test", 00:16:08.429 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:08.429 } 00:16:08.429 } 00:16:08.429 ]' 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@147 -- # jq -r '.[] | select(.name == "eclone").is_esnap_clone' 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@147 -- # [[ true == \t\r\u\e ]] 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@148 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@148 -- # [[ true == \t\r\u\e ]] 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@149 -- # jq -r '.[] | select(.name == "clone").is_clone' 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@149 -- # [[ true == \t\r\u\e ]] 00:16:08.429 12:35:50 -- lvol/external_snapshot.sh@150 -- # jq -r '.[] | 
select(.name == "eclone").is_degraded' 00:16:08.690 12:35:50 -- lvol/external_snapshot.sh@150 -- # [[ true == \t\r\u\e ]] 00:16:08.690 12:35:50 -- lvol/external_snapshot.sh@151 -- # NOT rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:08.690 12:35:50 -- common/autotest_common.sh@640 -- # local es=0 00:16:08.690 12:35:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:08.690 12:35:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.690 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:08.690 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.690 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.690 [2024-10-01 12:35:51.007867] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: lvs_test/eclone 00:16:08.690 request: 00:16:08.690 { 00:16:08.690 "name": "lvs_test/eclone", 00:16:08.690 "method": "bdev_get_bdevs", 00:16:08.690 "req_id": 1 00:16:08.690 } 00:16:08.690 Got JSON-RPC error response 00:16:08.690 response: 00:16:08.690 { 00:16:08.690 "code": -19, 00:16:08.690 "message": "No such device" 00:16:08.690 } 00:16:08.690 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:08.690 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:08.690 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:08.690 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:08.690 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:08.690 12:35:51 -- lvol/external_snapshot.sh@152 -- # NOT rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.690 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:08.690 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.690 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.690 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.690 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.690 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.690 [2024-10-01 12:35:51.023823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.690 request: 00:16:08.690 { 00:16:08.690 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.690 "method": "bdev_get_bdevs", 00:16:08.690 "req_id": 1 00:16:08.690 } 00:16:08.690 Got JSON-RPC error response 00:16:08.690 response: 00:16:08.690 { 00:16:08.690 "code": -19, 00:16:08.690 "message": "No such device" 00:16:08.690 } 00:16:08.690 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:08.690 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:08.690 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:08.690 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:08.690 12:35:51 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:08.690 12:35:51 -- lvol/external_snapshot.sh@153 -- # NOT rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:08.690 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:08.690 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:08.690 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:08.690 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.690 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:08.690 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.690 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.690 [2024-10-01 12:35:51.039809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: lvs_test/clone 00:16:08.690 request: 00:16:08.690 { 00:16:08.690 "name": "lvs_test/clone", 00:16:08.690 "method": "bdev_get_bdevs", 00:16:08.690 "req_id": 1 00:16:08.690 } 00:16:08.690 Got JSON-RPC error response 00:16:08.690 response: 00:16:08.690 { 00:16:08.690 "code": -19, 00:16:08.690 "message": "No such device" 00:16:08.690 } 00:16:08.690 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:08.690 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:08.690 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:08.691 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:08.691 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@154 -- # NOT rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.691 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:08.691 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.691 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:08.691 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.691 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:08.691 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.691 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.691 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.691 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.691 [2024-10-01 12:35:51.055801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.691 request: 00:16:08.691 { 00:16:08.691 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.691 "method": "bdev_get_bdevs", 00:16:08.691 "req_id": 1 00:16:08.691 } 00:16:08.691 Got JSON-RPC error response 00:16:08.691 response: 00:16:08.691 { 00:16:08.691 "code": -19, 00:16:08.691 "message": "No such device" 00:16:08.691 } 00:16:08.691 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:08.691 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:08.691 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:08.691 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:08.691 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:08.691 12:35:51 -- 
lvol/external_snapshot.sh@160 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:08.691 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.691 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.691 [2024-10-01 12:35:51.067867] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:08.691 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@161 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:08.691 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:08.691 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:08.691 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:08.691 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.691 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:08.691 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:08.691 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:08.691 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.691 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.691 request: 00:16:08.691 { 00:16:08.691 "lvs_name": "lvs_test", 00:16:08.691 "method": "bdev_lvol_get_lvstores", 00:16:08.691 "req_id": 1 00:16:08.691 } 00:16:08.691 Got JSON-RPC error response 00:16:08.691 response: 00:16:08.691 { 00:16:08.691 "code": -19, 00:16:08.691 "message": "No such device" 00:16:08.691 } 00:16:08.691 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:08.691 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:08.691 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:08.691 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:08.691 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@162 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:08.691 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.691 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.691 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@162 -- # esnap_dev=Malloc3 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@163 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:08.691 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.691 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.691 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@163 -- # bs_dev=test_esnap_reload_aio0 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@164 -- # rpc_cmd bdev_lvol_get_lvols 00:16:08.691 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.691 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.691 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@164 -- # lvols='[ 00:16:08.691 { 00:16:08.691 "alias": "lvs_test/eclone", 00:16:08.691 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.691 "name": "eclone", 00:16:08.691 "is_thin_provisioned": true, 00:16:08.691 "is_snapshot": true, 00:16:08.691 "is_clone": false, 00:16:08.691 
"is_esnap_clone": true, 00:16:08.691 "is_degraded": false, 00:16:08.691 "lvs": { 00:16:08.691 "name": "lvs_test", 00:16:08.691 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:08.691 } 00:16:08.691 }, 00:16:08.691 { 00:16:08.691 "alias": "lvs_test/clone", 00:16:08.691 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.691 "name": "clone", 00:16:08.691 "is_thin_provisioned": true, 00:16:08.691 "is_snapshot": false, 00:16:08.691 "is_clone": true, 00:16:08.691 "is_esnap_clone": false, 00:16:08.691 "is_degraded": false, 00:16:08.691 "lvs": { 00:16:08.691 "name": "lvs_test", 00:16:08.691 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:08.691 } 00:16:08.691 } 00:16:08.691 ]' 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@165 -- # jq -r '.[] | select(.name == "eclone").is_esnap_clone' 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@165 -- # [[ true == \t\r\u\e ]] 00:16:08.691 12:35:51 -- lvol/external_snapshot.sh@166 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@166 -- # [[ false == \f\a\l\s\e ]] 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@167 -- # jq -r '.[] | select(.name == "clone").is_clone' 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@167 -- # [[ true == \t\r\u\e ]] 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@168 -- # jq -r '.[] | select(.name == "clone").is_degraded' 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@168 -- # [[ false == \f\a\l\s\e ]] 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@169 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:08.950 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.950 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.950 [ 00:16:08.950 { 00:16:08.950 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.950 "aliases": [ 00:16:08.950 "lvs_test/eclone" 00:16:08.950 ], 00:16:08.950 "product_name": "Logical Volume", 00:16:08.950 "block_size": 512, 00:16:08.950 "num_blocks": 2048, 00:16:08.950 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.950 "assigned_rate_limits": { 00:16:08.950 "rw_ios_per_sec": 0, 00:16:08.950 "rw_mbytes_per_sec": 0, 00:16:08.950 "r_mbytes_per_sec": 0, 00:16:08.950 "w_mbytes_per_sec": 0 00:16:08.950 }, 00:16:08.950 "claimed": false, 00:16:08.950 "zoned": false, 00:16:08.950 "supported_io_types": { 00:16:08.950 "read": true, 00:16:08.950 "write": false, 00:16:08.950 "unmap": false, 00:16:08.950 "write_zeroes": false, 00:16:08.950 "flush": false, 00:16:08.950 "reset": true, 00:16:08.950 "compare": false, 00:16:08.950 "compare_and_write": false, 00:16:08.950 "abort": false, 00:16:08.950 "nvme_admin": false, 00:16:08.950 "nvme_io": false 00:16:08.950 }, 00:16:08.950 "memory_domains": [ 00:16:08.950 { 00:16:08.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.950 "dma_device_type": 2 00:16:08.950 } 00:16:08.950 ], 00:16:08.950 "driver_specific": { 00:16:08.950 "lvol": { 00:16:08.950 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.950 "base_bdev": "test_esnap_reload_aio0", 00:16:08.950 "thin_provision": true, 00:16:08.950 "snapshot": true, 00:16:08.950 "clone": false, 00:16:08.950 "clones": [ 00:16:08.950 "clone" 00:16:08.950 ], 00:16:08.950 "esnap_clone": true, 00:16:08.950 "external_snapshot_name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:08.950 } 00:16:08.950 } 00:16:08.950 } 00:16:08.950 ] 00:16:08.950 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.950 12:35:51 -- lvol/external_snapshot.sh@170 -- # rpc_cmd bdev_get_bdevs -b 
5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:08.950 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.950 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.950 [ 00:16:08.950 { 00:16:08.950 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.950 "aliases": [ 00:16:08.950 "lvs_test/eclone" 00:16:08.950 ], 00:16:08.950 "product_name": "Logical Volume", 00:16:08.950 "block_size": 512, 00:16:08.950 "num_blocks": 2048, 00:16:08.950 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:08.950 "assigned_rate_limits": { 00:16:08.950 "rw_ios_per_sec": 0, 00:16:08.950 "rw_mbytes_per_sec": 0, 00:16:08.950 "r_mbytes_per_sec": 0, 00:16:08.950 "w_mbytes_per_sec": 0 00:16:08.950 }, 00:16:08.950 "claimed": false, 00:16:08.950 "zoned": false, 00:16:08.950 "supported_io_types": { 00:16:08.950 "read": true, 00:16:08.950 "write": false, 00:16:08.950 "unmap": false, 00:16:08.950 "write_zeroes": false, 00:16:08.950 "flush": false, 00:16:08.950 "reset": true, 00:16:08.950 "compare": false, 00:16:08.950 "compare_and_write": false, 00:16:08.950 "abort": false, 00:16:08.951 "nvme_admin": false, 00:16:08.951 "nvme_io": false 00:16:08.951 }, 00:16:08.951 "memory_domains": [ 00:16:08.951 { 00:16:08.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.951 "dma_device_type": 2 00:16:08.951 } 00:16:08.951 ], 00:16:08.951 "driver_specific": { 00:16:08.951 "lvol": { 00:16:08.951 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.951 "base_bdev": "test_esnap_reload_aio0", 00:16:08.951 "thin_provision": true, 00:16:08.951 "snapshot": true, 00:16:08.951 "clone": false, 00:16:08.951 "clones": [ 00:16:08.951 "clone" 00:16:08.951 ], 00:16:08.951 "esnap_clone": true, 00:16:08.951 "external_snapshot_name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:08.951 } 00:16:08.951 } 00:16:08.951 } 00:16:08.951 ] 00:16:08.951 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.951 12:35:51 -- lvol/external_snapshot.sh@171 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:08.951 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.951 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.951 [ 00:16:08.951 { 00:16:08.951 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.951 "aliases": [ 00:16:08.951 "lvs_test/clone" 00:16:08.951 ], 00:16:08.951 "product_name": "Logical Volume", 00:16:08.951 "block_size": 512, 00:16:08.951 "num_blocks": 2048, 00:16:08.951 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.951 "assigned_rate_limits": { 00:16:08.951 "rw_ios_per_sec": 0, 00:16:08.951 "rw_mbytes_per_sec": 0, 00:16:08.951 "r_mbytes_per_sec": 0, 00:16:08.951 "w_mbytes_per_sec": 0 00:16:08.951 }, 00:16:08.951 "claimed": false, 00:16:08.951 "zoned": false, 00:16:08.951 "supported_io_types": { 00:16:08.951 "read": true, 00:16:08.951 "write": true, 00:16:08.951 "unmap": true, 00:16:08.951 "write_zeroes": true, 00:16:08.951 "flush": false, 00:16:08.951 "reset": true, 00:16:08.951 "compare": false, 00:16:08.951 "compare_and_write": false, 00:16:08.951 "abort": false, 00:16:08.951 "nvme_admin": false, 00:16:08.951 "nvme_io": false 00:16:08.951 }, 00:16:08.951 "driver_specific": { 00:16:08.951 "lvol": { 00:16:08.951 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.951 "base_bdev": "test_esnap_reload_aio0", 00:16:08.951 "thin_provision": true, 00:16:08.951 "snapshot": false, 00:16:08.951 "clone": true, 00:16:08.951 "base_snapshot": "eclone", 00:16:08.951 "esnap_clone": false 00:16:08.951 } 00:16:08.951 } 00:16:08.951 } 
00:16:08.951 ] 00:16:08.951 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.951 12:35:51 -- lvol/external_snapshot.sh@172 -- # rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:08.951 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.951 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.951 [ 00:16:08.951 { 00:16:08.951 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.951 "aliases": [ 00:16:08.951 "lvs_test/clone" 00:16:08.951 ], 00:16:08.951 "product_name": "Logical Volume", 00:16:08.951 "block_size": 512, 00:16:08.951 "num_blocks": 2048, 00:16:08.951 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:08.951 "assigned_rate_limits": { 00:16:08.951 "rw_ios_per_sec": 0, 00:16:08.951 "rw_mbytes_per_sec": 0, 00:16:08.951 "r_mbytes_per_sec": 0, 00:16:08.951 "w_mbytes_per_sec": 0 00:16:08.951 }, 00:16:08.951 "claimed": false, 00:16:08.951 "zoned": false, 00:16:08.951 "supported_io_types": { 00:16:08.951 "read": true, 00:16:08.951 "write": true, 00:16:08.951 "unmap": true, 00:16:08.951 "write_zeroes": true, 00:16:08.951 "flush": false, 00:16:08.951 "reset": true, 00:16:08.951 "compare": false, 00:16:08.951 "compare_and_write": false, 00:16:08.951 "abort": false, 00:16:08.951 "nvme_admin": false, 00:16:08.951 "nvme_io": false 00:16:08.951 }, 00:16:08.951 "driver_specific": { 00:16:08.951 "lvol": { 00:16:08.951 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.951 "base_bdev": "test_esnap_reload_aio0", 00:16:08.951 "thin_provision": true, 00:16:08.951 "snapshot": false, 00:16:08.951 "clone": true, 00:16:08.951 "base_snapshot": "eclone", 00:16:08.951 "esnap_clone": false 00:16:08.951 } 00:16:08.951 } 00:16:08.951 } 00:16:08.951 ] 00:16:08.951 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.951 12:35:51 -- lvol/external_snapshot.sh@177 -- # rpc_cmd bdev_lvol_snapshot 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 snap 00:16:08.951 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.951 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.951 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.951 12:35:51 -- lvol/external_snapshot.sh@177 -- # snap_uuid=8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:08.951 12:35:51 -- lvol/external_snapshot.sh@178 -- # rpc_cmd bdev_get_bdevs -b lvs_test/snap 00:16:08.951 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.951 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.951 [ 00:16:08.951 { 00:16:08.951 "name": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:08.951 "aliases": [ 00:16:08.951 "lvs_test/snap" 00:16:08.951 ], 00:16:08.951 "product_name": "Logical Volume", 00:16:08.951 "block_size": 512, 00:16:08.951 "num_blocks": 2048, 00:16:08.951 "uuid": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:08.951 "assigned_rate_limits": { 00:16:08.951 "rw_ios_per_sec": 0, 00:16:08.951 "rw_mbytes_per_sec": 0, 00:16:08.951 "r_mbytes_per_sec": 0, 00:16:08.951 "w_mbytes_per_sec": 0 00:16:08.951 }, 00:16:08.951 "claimed": false, 00:16:08.951 "zoned": false, 00:16:08.951 "supported_io_types": { 00:16:08.951 "read": true, 00:16:08.951 "write": false, 00:16:08.951 "unmap": false, 00:16:08.951 "write_zeroes": false, 00:16:08.951 "flush": false, 00:16:08.951 "reset": true, 00:16:08.951 "compare": false, 00:16:08.951 "compare_and_write": false, 00:16:08.951 "abort": false, 00:16:08.951 "nvme_admin": false, 00:16:08.951 "nvme_io": false 00:16:08.951 }, 00:16:08.951 "driver_specific": { 00:16:08.951 
"lvol": { 00:16:08.951 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:08.951 "base_bdev": "test_esnap_reload_aio0", 00:16:08.951 "thin_provision": true, 00:16:08.951 "snapshot": true, 00:16:08.951 "clone": true, 00:16:08.951 "base_snapshot": "eclone", 00:16:08.951 "clones": [ 00:16:08.952 "clone" 00:16:08.952 ], 00:16:08.952 "esnap_clone": false 00:16:08.952 } 00:16:08.952 } 00:16:08.952 } 00:16:08.952 ] 00:16:08.952 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.952 12:35:51 -- lvol/external_snapshot.sh@179 -- # rpc_cmd bdev_get_bdevs -b 8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:08.952 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.952 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.952 [ 00:16:08.952 { 00:16:08.952 "name": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:08.952 "aliases": [ 00:16:08.952 "lvs_test/snap" 00:16:08.952 ], 00:16:08.952 "product_name": "Logical Volume", 00:16:08.952 "block_size": 512, 00:16:08.952 "num_blocks": 2048, 00:16:08.952 "uuid": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:08.952 "assigned_rate_limits": { 00:16:08.952 "rw_ios_per_sec": 0, 00:16:08.952 "rw_mbytes_per_sec": 0, 00:16:09.211 "r_mbytes_per_sec": 0, 00:16:09.211 "w_mbytes_per_sec": 0 00:16:09.211 }, 00:16:09.211 "claimed": false, 00:16:09.211 "zoned": false, 00:16:09.211 "supported_io_types": { 00:16:09.211 "read": true, 00:16:09.211 "write": false, 00:16:09.211 "unmap": false, 00:16:09.211 "write_zeroes": false, 00:16:09.211 "flush": false, 00:16:09.211 "reset": true, 00:16:09.211 "compare": false, 00:16:09.211 "compare_and_write": false, 00:16:09.211 "abort": false, 00:16:09.211 "nvme_admin": false, 00:16:09.211 "nvme_io": false 00:16:09.211 }, 00:16:09.211 "driver_specific": { 00:16:09.211 "lvol": { 00:16:09.211 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:09.211 "base_bdev": "test_esnap_reload_aio0", 00:16:09.211 "thin_provision": true, 00:16:09.211 "snapshot": true, 00:16:09.211 "clone": true, 00:16:09.211 "base_snapshot": "eclone", 00:16:09.211 "clones": [ 00:16:09.211 "clone" 00:16:09.211 ], 00:16:09.211 "esnap_clone": false 00:16:09.211 } 00:16:09.211 } 00:16:09.211 } 00:16:09.211 ] 00:16:09.211 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@180 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:09.211 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.211 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.211 [2024-10-01 12:35:51.486149] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:09.211 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@181 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:09.211 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.211 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:09.211 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.211 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.211 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.211 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.211 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:09.211 12:35:51 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.211 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.211 request: 00:16:09.211 { 00:16:09.211 "lvs_name": "lvs_test", 00:16:09.211 "method": "bdev_lvol_get_lvstores", 00:16:09.211 "req_id": 1 00:16:09.211 } 00:16:09.211 Got JSON-RPC error response 00:16:09.211 response: 00:16:09.211 { 00:16:09.211 "code": -19, 00:16:09.211 "message": "No such device" 00:16:09.211 } 00:16:09.211 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.211 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:09.211 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.211 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.211 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@182 -- # rpc_cmd bdev_malloc_delete e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:09.211 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.211 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.211 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@183 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:09.211 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.211 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.211 [2024-10-01 12:35:51.555594] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:09.211 [2024-10-01 12:35:51.555693] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:09.211 [2024-10-01 12:35:51.555738] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 5f37a229-cdb8-4ab5-9185-ec95cc356372: blob is degraded: deferring bdev creation 00:16:09.211 [2024-10-01 12:35:51.555900] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 8e6ae10a-4a0b-484b-8741-b2e9305ff539: blob is degraded: deferring bdev creation 00:16:09.211 [2024-10-01 12:35:51.556045] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 93beb3b2-99c3-4bdd-92ce-f2355cee5c87: blob is degraded: deferring bdev creation 00:16:09.211 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@183 -- # bs_dev=test_esnap_reload_aio0 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@184 -- # rpc_cmd bdev_lvol_get_lvols 00:16:09.211 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.211 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.211 12:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.211 12:35:51 -- lvol/external_snapshot.sh@184 -- # lvols='[ 00:16:09.211 { 00:16:09.211 "alias": "lvs_test/eclone", 00:16:09.211 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:09.211 "name": "eclone", 00:16:09.211 "is_thin_provisioned": true, 00:16:09.211 "is_snapshot": true, 00:16:09.211 "is_clone": false, 00:16:09.211 "is_esnap_clone": true, 00:16:09.211 "is_degraded": true, 00:16:09.211 "lvs": { 00:16:09.211 "name": "lvs_test", 00:16:09.211 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:09.211 } 00:16:09.211 }, 00:16:09.211 { 00:16:09.211 "alias": "lvs_test/clone", 00:16:09.211 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:09.211 "name": "clone", 00:16:09.211 "is_thin_provisioned": true, 00:16:09.211 
"is_snapshot": false, 00:16:09.211 "is_clone": true, 00:16:09.211 "is_esnap_clone": false, 00:16:09.211 "is_degraded": true, 00:16:09.211 "lvs": { 00:16:09.211 "name": "lvs_test", 00:16:09.211 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:09.211 } 00:16:09.211 }, 00:16:09.211 { 00:16:09.211 "alias": "lvs_test/snap", 00:16:09.211 "uuid": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:09.211 "name": "snap", 00:16:09.211 "is_thin_provisioned": true, 00:16:09.212 "is_snapshot": true, 00:16:09.212 "is_clone": true, 00:16:09.212 "is_esnap_clone": false, 00:16:09.212 "is_degraded": true, 00:16:09.212 "lvs": { 00:16:09.212 "name": "lvs_test", 00:16:09.212 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:09.212 } 00:16:09.212 } 00:16:09.212 ]' 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@185 -- # jq -r '.[] | select(.name == "eclone").is_esnap_clone' 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@185 -- # [[ true == \t\r\u\e ]] 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@186 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@186 -- # [[ true == \t\r\u\e ]] 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@187 -- # jq -r '.[] | select(.name == "clone").is_clone' 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@187 -- # [[ true == \t\r\u\e ]] 00:16:09.212 12:35:51 -- lvol/external_snapshot.sh@188 -- # jq -r '.[] | select(.name == "clone").is_degraded' 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@188 -- # [[ true == \t\r\u\e ]] 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@189 -- # jq -r '.[] | select(.name == "snap").is_clone' 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@189 -- # [[ true == \t\r\u\e ]] 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@190 -- # jq -r '.[] | select(.name == "snap").is_snapshot' 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@190 -- # [[ true == \t\r\u\e ]] 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@191 -- # jq -r '.[] | select(.name == "snap").is_degraded' 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@191 -- # [[ true == \t\r\u\e ]] 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@192 -- # NOT rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:09.470 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.470 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:09.470 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.470 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.470 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.470 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.470 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:09.470 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.470 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.470 [2024-10-01 12:35:51.948381] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: lvs_test/eclone 00:16:09.470 request: 00:16:09.470 { 00:16:09.470 "name": "lvs_test/eclone", 00:16:09.470 "method": "bdev_get_bdevs", 00:16:09.470 "req_id": 1 00:16:09.470 } 00:16:09.470 Got JSON-RPC error response 00:16:09.470 response: 00:16:09.470 { 00:16:09.470 "code": -19, 00:16:09.470 "message": "No such device" 00:16:09.470 } 00:16:09.470 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.470 12:35:51 -- 
common/autotest_common.sh@643 -- # es=1 00:16:09.470 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.470 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.470 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.470 12:35:51 -- lvol/external_snapshot.sh@193 -- # NOT rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:09.470 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.470 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:09.470 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.470 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.470 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.470 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.470 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:09.471 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.471 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.471 [2024-10-01 12:35:51.964346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:09.471 request: 00:16:09.471 { 00:16:09.471 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:09.471 "method": "bdev_get_bdevs", 00:16:09.471 "req_id": 1 00:16:09.471 } 00:16:09.471 Got JSON-RPC error response 00:16:09.471 response: 00:16:09.471 { 00:16:09.471 "code": -19, 00:16:09.471 "message": "No such device" 00:16:09.471 } 00:16:09.471 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.471 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:09.471 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.471 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.471 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.471 12:35:51 -- lvol/external_snapshot.sh@194 -- # NOT rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:09.471 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.471 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:09.471 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.471 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.471 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.471 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.471 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:09.471 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.471 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.471 [2024-10-01 12:35:51.976346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: lvs_test/clone 00:16:09.471 request: 00:16:09.471 { 00:16:09.471 "name": "lvs_test/clone", 00:16:09.471 "method": "bdev_get_bdevs", 00:16:09.471 "req_id": 1 00:16:09.471 } 00:16:09.471 Got JSON-RPC error response 00:16:09.471 response: 00:16:09.471 { 00:16:09.471 "code": -19, 00:16:09.471 "message": "No such device" 00:16:09.471 } 00:16:09.471 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.471 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:09.471 12:35:51 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.471 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.471 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.471 12:35:51 -- lvol/external_snapshot.sh@195 -- # NOT rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:09.471 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.471 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:09.471 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.471 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.471 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.471 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.471 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:09.471 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.471 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.471 [2024-10-01 12:35:51.988367] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:09.730 request: 00:16:09.730 { 00:16:09.730 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:09.730 "method": "bdev_get_bdevs", 00:16:09.730 "req_id": 1 00:16:09.730 } 00:16:09.730 Got JSON-RPC error response 00:16:09.730 response: 00:16:09.730 { 00:16:09.730 "code": -19, 00:16:09.730 "message": "No such device" 00:16:09.730 } 00:16:09.730 12:35:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.730 12:35:51 -- common/autotest_common.sh@643 -- # es=1 00:16:09.730 12:35:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.730 12:35:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.730 12:35:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.730 12:35:51 -- lvol/external_snapshot.sh@196 -- # NOT rpc_cmd bdev_get_bdevs -b lvs_test/snap 00:16:09.730 12:35:51 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.730 12:35:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b lvs_test/snap 00:16:09.730 12:35:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.730 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.730 12:35:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.730 12:35:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.730 12:35:51 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b lvs_test/snap 00:16:09.730 12:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.730 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:09.730 [2024-10-01 12:35:52.004368] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: lvs_test/snap 00:16:09.730 request: 00:16:09.730 { 00:16:09.730 "name": "lvs_test/snap", 00:16:09.730 "method": "bdev_get_bdevs", 00:16:09.730 "req_id": 1 00:16:09.730 } 00:16:09.730 Got JSON-RPC error response 00:16:09.730 response: 00:16:09.730 { 00:16:09.730 "code": -19, 00:16:09.730 "message": "No such device" 00:16:09.730 } 00:16:09.730 12:35:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.730 12:35:52 -- common/autotest_common.sh@643 -- # es=1 00:16:09.730 12:35:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.730 12:35:52 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.730 12:35:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@197 -- # NOT rpc_cmd bdev_get_bdevs -b 8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:09.730 12:35:52 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.730 12:35:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:09.730 12:35:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.730 12:35:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.730 12:35:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.730 12:35:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.730 12:35:52 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:09.730 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.730 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:09.730 [2024-10-01 12:35:52.020390] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:09.730 request: 00:16:09.730 { 00:16:09.730 "name": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:09.730 "method": "bdev_get_bdevs", 00:16:09.730 "req_id": 1 00:16:09.730 } 00:16:09.730 Got JSON-RPC error response 00:16:09.730 response: 00:16:09.730 { 00:16:09.730 "code": -19, 00:16:09.730 "message": "No such device" 00:16:09.730 } 00:16:09.730 12:35:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.730 12:35:52 -- common/autotest_common.sh@643 -- # es=1 00:16:09.730 12:35:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.730 12:35:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.730 12:35:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@200 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:09.730 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.730 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:09.730 [2024-10-01 12:35:52.033748] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc4 already claimed: type read_many_write_none by module lvol 00:16:09.730 [2024-10-01 12:35:52.033939] blobstore.c:9230:blob_frozen_set_back_bs_dev: *NOTICE*: blob 0x100000001: hotplugged back_bs_dev 00:16:09.730 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@200 -- # esnap_dev=Malloc4 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@201 -- # rpc_cmd bdev_wait_for_examine 00:16:09.730 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.730 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:09.730 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@202 -- # rpc_cmd bdev_lvol_get_lvols 00:16:09.730 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.730 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:09.730 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@202 -- # lvols='[ 00:16:09.730 { 00:16:09.730 "alias": "lvs_test/eclone", 00:16:09.730 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:09.730 "name": "eclone", 00:16:09.730 "is_thin_provisioned": true, 00:16:09.730 "is_snapshot": true, 00:16:09.730 "is_clone": false, 
00:16:09.730 "is_esnap_clone": true, 00:16:09.730 "is_degraded": false, 00:16:09.730 "lvs": { 00:16:09.730 "name": "lvs_test", 00:16:09.730 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:09.730 } 00:16:09.730 }, 00:16:09.730 { 00:16:09.730 "alias": "lvs_test/clone", 00:16:09.730 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:09.730 "name": "clone", 00:16:09.730 "is_thin_provisioned": true, 00:16:09.730 "is_snapshot": false, 00:16:09.730 "is_clone": true, 00:16:09.730 "is_esnap_clone": false, 00:16:09.730 "is_degraded": false, 00:16:09.730 "lvs": { 00:16:09.730 "name": "lvs_test", 00:16:09.730 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:09.730 } 00:16:09.730 }, 00:16:09.730 { 00:16:09.730 "alias": "lvs_test/snap", 00:16:09.730 "uuid": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:09.730 "name": "snap", 00:16:09.730 "is_thin_provisioned": true, 00:16:09.730 "is_snapshot": true, 00:16:09.730 "is_clone": true, 00:16:09.730 "is_esnap_clone": false, 00:16:09.730 "is_degraded": false, 00:16:09.730 "lvs": { 00:16:09.730 "name": "lvs_test", 00:16:09.730 "uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b" 00:16:09.730 } 00:16:09.730 } 00:16:09.730 ]' 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@203 -- # jq -r '.[] | select(.name == "eclone").is_esnap_clone' 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@203 -- # [[ true == \t\r\u\e ]] 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@204 -- # jq -r '.[] | select(.name == "eclone").is_degraded' 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@204 -- # [[ false == \f\a\l\s\e ]] 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@205 -- # jq -r '.[] | select(.name == "clone").is_clone' 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@205 -- # [[ true == \t\r\u\e ]] 00:16:09.730 12:35:52 -- lvol/external_snapshot.sh@206 -- # jq -r '.[] | select(.name == "clone").is_degraded' 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@206 -- # [[ false == \f\a\l\s\e ]] 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@207 -- # jq -r '.[] | select(.name == "snap").is_clone' 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@207 -- # [[ true == \t\r\u\e ]] 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@208 -- # jq -r '.[] | select(.name == "snap").is_snapshot' 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@208 -- # [[ true == \t\r\u\e ]] 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@209 -- # jq -r '.[] | select(.name == "snap").is_degraded' 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@209 -- # [[ false == \f\a\l\s\e ]] 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@210 -- # rpc_cmd bdev_get_bdevs -b lvs_test/eclone 00:16:09.989 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.989 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:09.989 [ 00:16:09.989 { 00:16:09.989 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:09.989 "aliases": [ 00:16:09.989 "lvs_test/eclone" 00:16:09.989 ], 00:16:09.989 "product_name": "Logical Volume", 00:16:09.989 "block_size": 512, 00:16:09.989 "num_blocks": 2048, 00:16:09.989 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:09.989 "assigned_rate_limits": { 00:16:09.989 "rw_ios_per_sec": 0, 00:16:09.989 "rw_mbytes_per_sec": 0, 00:16:09.989 "r_mbytes_per_sec": 0, 00:16:09.989 "w_mbytes_per_sec": 0 00:16:09.989 }, 00:16:09.989 "claimed": false, 00:16:09.989 "zoned": false, 00:16:09.989 "supported_io_types": { 00:16:09.989 "read": true, 00:16:09.989 "write": false, 00:16:09.989 "unmap": false, 00:16:09.989 "write_zeroes": false, 00:16:09.989 
"flush": false, 00:16:09.989 "reset": true, 00:16:09.989 "compare": false, 00:16:09.989 "compare_and_write": false, 00:16:09.989 "abort": false, 00:16:09.989 "nvme_admin": false, 00:16:09.989 "nvme_io": false 00:16:09.989 }, 00:16:09.989 "memory_domains": [ 00:16:09.989 { 00:16:09.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.989 "dma_device_type": 2 00:16:09.989 } 00:16:09.989 ], 00:16:09.989 "driver_specific": { 00:16:09.989 "lvol": { 00:16:09.989 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:09.989 "base_bdev": "test_esnap_reload_aio0", 00:16:09.989 "thin_provision": true, 00:16:09.989 "snapshot": true, 00:16:09.989 "clone": false, 00:16:09.989 "clones": [ 00:16:09.989 "snap" 00:16:09.989 ], 00:16:09.989 "esnap_clone": true, 00:16:09.989 "external_snapshot_name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:09.989 } 00:16:09.989 } 00:16:09.989 } 00:16:09.989 ] 00:16:09.989 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.989 12:35:52 -- lvol/external_snapshot.sh@211 -- # rpc_cmd bdev_get_bdevs -b 5f37a229-cdb8-4ab5-9185-ec95cc356372 00:16:09.989 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.989 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.249 [ 00:16:10.249 { 00:16:10.249 "name": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:10.249 "aliases": [ 00:16:10.249 "lvs_test/eclone" 00:16:10.249 ], 00:16:10.249 "product_name": "Logical Volume", 00:16:10.249 "block_size": 512, 00:16:10.249 "num_blocks": 2048, 00:16:10.249 "uuid": "5f37a229-cdb8-4ab5-9185-ec95cc356372", 00:16:10.249 "assigned_rate_limits": { 00:16:10.249 "rw_ios_per_sec": 0, 00:16:10.249 "rw_mbytes_per_sec": 0, 00:16:10.249 "r_mbytes_per_sec": 0, 00:16:10.249 "w_mbytes_per_sec": 0 00:16:10.249 }, 00:16:10.249 "claimed": false, 00:16:10.249 "zoned": false, 00:16:10.249 "supported_io_types": { 00:16:10.249 "read": true, 00:16:10.249 "write": false, 00:16:10.249 "unmap": false, 00:16:10.249 "write_zeroes": false, 00:16:10.249 "flush": false, 00:16:10.249 "reset": true, 00:16:10.249 "compare": false, 00:16:10.249 "compare_and_write": false, 00:16:10.249 "abort": false, 00:16:10.249 "nvme_admin": false, 00:16:10.249 "nvme_io": false 00:16:10.249 }, 00:16:10.249 "memory_domains": [ 00:16:10.249 { 00:16:10.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.249 "dma_device_type": 2 00:16:10.249 } 00:16:10.249 ], 00:16:10.249 "driver_specific": { 00:16:10.249 "lvol": { 00:16:10.249 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:10.249 "base_bdev": "test_esnap_reload_aio0", 00:16:10.249 "thin_provision": true, 00:16:10.249 "snapshot": true, 00:16:10.249 "clone": false, 00:16:10.249 "clones": [ 00:16:10.249 "snap" 00:16:10.249 ], 00:16:10.249 "esnap_clone": true, 00:16:10.249 "external_snapshot_name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:10.249 } 00:16:10.249 } 00:16:10.249 } 00:16:10.249 ] 00:16:10.249 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.249 12:35:52 -- lvol/external_snapshot.sh@212 -- # rpc_cmd bdev_get_bdevs -b lvs_test/clone 00:16:10.249 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.249 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.249 [ 00:16:10.249 { 00:16:10.249 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:10.249 "aliases": [ 00:16:10.249 "lvs_test/clone" 00:16:10.249 ], 00:16:10.249 "product_name": "Logical Volume", 00:16:10.249 "block_size": 512, 00:16:10.249 "num_blocks": 2048, 00:16:10.249 "uuid": 
"93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:10.249 "assigned_rate_limits": { 00:16:10.249 "rw_ios_per_sec": 0, 00:16:10.249 "rw_mbytes_per_sec": 0, 00:16:10.249 "r_mbytes_per_sec": 0, 00:16:10.249 "w_mbytes_per_sec": 0 00:16:10.249 }, 00:16:10.249 "claimed": false, 00:16:10.249 "zoned": false, 00:16:10.249 "supported_io_types": { 00:16:10.249 "read": true, 00:16:10.249 "write": true, 00:16:10.249 "unmap": true, 00:16:10.249 "write_zeroes": true, 00:16:10.249 "flush": false, 00:16:10.249 "reset": true, 00:16:10.249 "compare": false, 00:16:10.249 "compare_and_write": false, 00:16:10.249 "abort": false, 00:16:10.249 "nvme_admin": false, 00:16:10.249 "nvme_io": false 00:16:10.249 }, 00:16:10.249 "driver_specific": { 00:16:10.249 "lvol": { 00:16:10.249 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:10.249 "base_bdev": "test_esnap_reload_aio0", 00:16:10.249 "thin_provision": true, 00:16:10.249 "snapshot": false, 00:16:10.249 "clone": true, 00:16:10.249 "base_snapshot": "snap", 00:16:10.249 "esnap_clone": false 00:16:10.249 } 00:16:10.249 } 00:16:10.249 } 00:16:10.249 ] 00:16:10.249 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.249 12:35:52 -- lvol/external_snapshot.sh@213 -- # rpc_cmd bdev_get_bdevs -b 93beb3b2-99c3-4bdd-92ce-f2355cee5c87 00:16:10.249 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.249 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.249 [ 00:16:10.249 { 00:16:10.249 "name": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:10.249 "aliases": [ 00:16:10.249 "lvs_test/clone" 00:16:10.249 ], 00:16:10.249 "product_name": "Logical Volume", 00:16:10.249 "block_size": 512, 00:16:10.249 "num_blocks": 2048, 00:16:10.249 "uuid": "93beb3b2-99c3-4bdd-92ce-f2355cee5c87", 00:16:10.249 "assigned_rate_limits": { 00:16:10.249 "rw_ios_per_sec": 0, 00:16:10.249 "rw_mbytes_per_sec": 0, 00:16:10.249 "r_mbytes_per_sec": 0, 00:16:10.249 "w_mbytes_per_sec": 0 00:16:10.249 }, 00:16:10.249 "claimed": false, 00:16:10.249 "zoned": false, 00:16:10.250 "supported_io_types": { 00:16:10.250 "read": true, 00:16:10.250 "write": true, 00:16:10.250 "unmap": true, 00:16:10.250 "write_zeroes": true, 00:16:10.250 "flush": false, 00:16:10.250 "reset": true, 00:16:10.250 "compare": false, 00:16:10.250 "compare_and_write": false, 00:16:10.250 "abort": false, 00:16:10.250 "nvme_admin": false, 00:16:10.250 "nvme_io": false 00:16:10.250 }, 00:16:10.250 "driver_specific": { 00:16:10.250 "lvol": { 00:16:10.250 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:10.250 "base_bdev": "test_esnap_reload_aio0", 00:16:10.250 "thin_provision": true, 00:16:10.250 "snapshot": false, 00:16:10.250 "clone": true, 00:16:10.250 "base_snapshot": "snap", 00:16:10.250 "esnap_clone": false 00:16:10.250 } 00:16:10.250 } 00:16:10.250 } 00:16:10.250 ] 00:16:10.250 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@214 -- # rpc_cmd bdev_get_bdevs -b lvs_test/snap 00:16:10.250 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 [ 00:16:10.250 { 00:16:10.250 "name": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:10.250 "aliases": [ 00:16:10.250 "lvs_test/snap" 00:16:10.250 ], 00:16:10.250 "product_name": "Logical Volume", 00:16:10.250 "block_size": 512, 00:16:10.250 "num_blocks": 2048, 00:16:10.250 "uuid": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:10.250 "assigned_rate_limits": { 00:16:10.250 "rw_ios_per_sec": 0, 
00:16:10.250 "rw_mbytes_per_sec": 0, 00:16:10.250 "r_mbytes_per_sec": 0, 00:16:10.250 "w_mbytes_per_sec": 0 00:16:10.250 }, 00:16:10.250 "claimed": false, 00:16:10.250 "zoned": false, 00:16:10.250 "supported_io_types": { 00:16:10.250 "read": true, 00:16:10.250 "write": false, 00:16:10.250 "unmap": false, 00:16:10.250 "write_zeroes": false, 00:16:10.250 "flush": false, 00:16:10.250 "reset": true, 00:16:10.250 "compare": false, 00:16:10.250 "compare_and_write": false, 00:16:10.250 "abort": false, 00:16:10.250 "nvme_admin": false, 00:16:10.250 "nvme_io": false 00:16:10.250 }, 00:16:10.250 "driver_specific": { 00:16:10.250 "lvol": { 00:16:10.250 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:10.250 "base_bdev": "test_esnap_reload_aio0", 00:16:10.250 "thin_provision": true, 00:16:10.250 "snapshot": true, 00:16:10.250 "clone": true, 00:16:10.250 "base_snapshot": "eclone", 00:16:10.250 "clones": [ 00:16:10.250 "clone" 00:16:10.250 ], 00:16:10.250 "esnap_clone": false 00:16:10.250 } 00:16:10.250 } 00:16:10.250 } 00:16:10.250 ] 00:16:10.250 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@215 -- # rpc_cmd bdev_get_bdevs -b 8e6ae10a-4a0b-484b-8741-b2e9305ff539 00:16:10.250 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 [ 00:16:10.250 { 00:16:10.250 "name": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:10.250 "aliases": [ 00:16:10.250 "lvs_test/snap" 00:16:10.250 ], 00:16:10.250 "product_name": "Logical Volume", 00:16:10.250 "block_size": 512, 00:16:10.250 "num_blocks": 2048, 00:16:10.250 "uuid": "8e6ae10a-4a0b-484b-8741-b2e9305ff539", 00:16:10.250 "assigned_rate_limits": { 00:16:10.250 "rw_ios_per_sec": 0, 00:16:10.250 "rw_mbytes_per_sec": 0, 00:16:10.250 "r_mbytes_per_sec": 0, 00:16:10.250 "w_mbytes_per_sec": 0 00:16:10.250 }, 00:16:10.250 "claimed": false, 00:16:10.250 "zoned": false, 00:16:10.250 "supported_io_types": { 00:16:10.250 "read": true, 00:16:10.250 "write": false, 00:16:10.250 "unmap": false, 00:16:10.250 "write_zeroes": false, 00:16:10.250 "flush": false, 00:16:10.250 "reset": true, 00:16:10.250 "compare": false, 00:16:10.250 "compare_and_write": false, 00:16:10.250 "abort": false, 00:16:10.250 "nvme_admin": false, 00:16:10.250 "nvme_io": false 00:16:10.250 }, 00:16:10.250 "driver_specific": { 00:16:10.250 "lvol": { 00:16:10.250 "lvol_store_uuid": "8a1cb080-657a-4a14-a3c7-0cf71ada542b", 00:16:10.250 "base_bdev": "test_esnap_reload_aio0", 00:16:10.250 "thin_provision": true, 00:16:10.250 "snapshot": true, 00:16:10.250 "clone": true, 00:16:10.250 "base_snapshot": "eclone", 00:16:10.250 "clones": [ 00:16:10.250 "clone" 00:16:10.250 ], 00:16:10.250 "esnap_clone": false 00:16:10.250 } 00:16:10.250 } 00:16:10.250 } 00:16:10.250 ] 00:16:10.250 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@217 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:10.250 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 [2024-10-01 12:35:52.608801] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:10.250 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@218 -- # rpc_cmd bdev_malloc_delete e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:10.250 12:35:52 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.250 00:16:10.250 real 0m3.172s 00:16:10.250 user 0m1.626s 00:16:10.250 sys 0m0.303s 00:16:10.250 12:35:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 ************************************ 00:16:10.250 END TEST test_esnap_reload 00:16:10.250 ************************************ 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@470 -- # run_test test_esnap_clones test_esnap_clones 00:16:10.250 12:35:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.250 12:35:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 ************************************ 00:16:10.250 START TEST test_esnap_clones 00:16:10.250 ************************************ 00:16:10.250 12:35:52 -- common/autotest_common.sh@1104 -- # test_esnap_clones 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@263 -- # local bs_dev esnap_dev 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@264 -- # local block_size=512 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@265 -- # local lvs_size_mb=100 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@266 -- # local esnap_size_mb=1 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@267 -- # local lvs_cluster_size=16384 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@268 -- # local lvs_uuid esnap_uuid 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@269 -- # local vol1_uuid vol2_uuid vol3_uuid vol3_uuid vol4_uuid vol5_uuid 00:16:10.250 12:35:52 -- lvol/external_snapshot.sh@272 -- # rpc_cmd bdev_malloc_create 100 512 00:16:10.250 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.250 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.511 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@272 -- # bs_dev=Malloc5 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@273 -- # rpc_cmd bdev_lvol_create_lvstore -c 16384 Malloc5 lvs_test 00:16:10.511 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.511 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.511 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@273 -- # lvs_uuid=6dbb7000-c83c-4a01-a9eb-1065d2ab77a1 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@278 -- # esnap_uuid=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@280 -- # rpc_cmd bdev_malloc_create -b esnap1 -u 2abddd12-c08d-40ad-bccf-ab131586ee4c 1 512 00:16:10.511 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.511 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.511 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@280 -- # esnap_dev=esnap1 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@285 -- # rpc_cmd bdev_lvol_clone_bdev 2abddd12-c08d-40ad-bccf-ab131586ee4c lvs_test vol1 00:16:10.511 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.511 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.511 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@285 -- # 
vol1_uuid=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@286 -- # verify_esnap_clone 25b4e0f6-7b88-478c-b029-9da6d4ec997d 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@249 -- # local bdev=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@251 -- # local writable=true 00:16:10.511 12:35:52 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.511 12:35:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:10.511 12:35:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:10.511 12:35:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:10.511 12:35:52 -- common/autotest_common.sh@586 -- # local jq val 00:16:10.511 12:35:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:10.511 12:35:52 -- common/autotest_common.sh@596 -- # local lvs 00:16:10.511 12:35:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:10.511 12:35:52 -- common/autotest_common.sh@611 -- # local bdev 00:16:10.511 12:35:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," 
",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:10.511 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.511 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:10.511 12:35:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:10.511 12:35:52 -- common/autotest_common.sh@620 -- # shift 00:16:10.511 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 
00:16:10.511 12:35:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.511 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.511 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 12:35:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:10.512 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol1 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@222 -- # local key 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:10.512 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 aliases[0] = lvs_test/vol1 00:16:10.512 block_size = 512 00:16:10.512 
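
    The property dump for vol1 continues below. For orientation, here is a minimal sketch of the
    esnap-clone lifecycle this trace exercises, written as direct rpc.py calls; the ./scripts/rpc.py
    path, the lvs_test/volN aliases and the jq pipelines are assumptions, since the test itself
    drives the same RPCs through its rpc_cmd and rpc_cmd_simple_data_json helpers:

        rpc=./scripts/rpc.py                                              # assumed client path
        $rpc bdev_malloc_create 100 512                                   # backing bdev for the lvstore (Malloc5 above)
        $rpc bdev_lvol_create_lvstore -c 16384 Malloc5 lvs_test           # 16 KiB clusters, as in the trace
        $rpc bdev_malloc_create -b esnap1 -u 2abddd12-c08d-40ad-bccf-ab131586ee4c 1 512   # external snapshot device
        $rpc bdev_lvol_clone_bdev 2abddd12-c08d-40ad-bccf-ab131586ee4c lvs_test vol1      # vol1: esnap clone of esnap1
        # verify_esnap_clone boils down to this kind of check:
        $rpc bdev_get_bdevs -b lvs_test/vol1 \
          | jq '.[0].driver_specific.lvol | {clone, base_snapshot, esnap_clone, external_snapshot_name}'
        #   expected: clone=false, base_snapshot=null, esnap_clone=true,
        #             external_snapshot_name="2abddd12-c08d-40ad-bccf-ab131586ee4c"
        # Snapshotting vol1 hands the esnap relationship to the snapshot (vol2 becomes a
        # read-only esnap clone, vol1 a plain clone of vol2); deleting vol2 later reverts
        # vol1 to an esnap clone, which is what the rest of the trace asserts:
        $rpc bdev_lvol_snapshot lvs_test/vol1 vol2
        $rpc bdev_lvol_delete lvs_test/vol2

    The jq field dump printed by rpc_cmd_simple_data_json in the trace is the test's equivalent of
    this check: each field is read back into jq_out[] and asserted individually.
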
driver_specific.lvol.base_snapshot = null 00:16:10.512 driver_specific.lvol.clone = false 00:16:10.512 driver_specific.lvol.esnap_clone = true 00:16:10.512 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.512 name = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.512 num_blocks = 2048 00:16:10.512 product_name = Logical Volume 00:16:10.512 supported_io_types.read = true 00:16:10.512 supported_io_types.write = true 00:16:10.512 uuid = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@257 -- # [[ true == \t\r\u\e ]] 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@291 -- # rpc_cmd bdev_lvol_snapshot 25b4e0f6-7b88-478c-b029-9da6d4ec997d vol2 00:16:10.512 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.512 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@291 -- # vol2_uuid=efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@292 -- # verify_esnap_clone efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 2abddd12-c08d-40ad-bccf-ab131586ee4c false 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@249 -- # local bdev=efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@251 -- # local writable=false 00:16:10.512 12:35:52 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.512 12:35:52 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:10.512 12:35:52 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:10.512 12:35:52 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:10.512 12:35:52 -- common/autotest_common.sh@586 -- # local jq val 00:16:10.512 12:35:52 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:10.512 12:35:52 -- common/autotest_common.sh@596 -- # local lvs 00:16:10.512 12:35:52 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:10.512 12:35:52 -- common/autotest_common.sh@611 -- # local bdev 00:16:10.512 12:35:52 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," 
",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:10.512 12:35:52 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.512 12:35:52 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:10.512 12:35:52 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:10.512 12:35:52 -- common/autotest_common.sh@620 -- # shift 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.512 12:35:52 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:10.512 12:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.512 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 12:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol2 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:10.512 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.512 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # 
jq_out["$elem"]=true 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.513 12:35:52 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:52 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:10.513 12:35:52 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:10.513 12:35:52 -- lvol/external_snapshot.sh@222 -- # local key 00:16:10.513 12:35:52 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:10.513 12:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:10.513 aliases[0] = lvs_test/vol2 00:16:10.513 block_size = 512 00:16:10.513 driver_specific.lvol.base_snapshot = null 00:16:10.513 driver_specific.lvol.clone = false 00:16:10.513 driver_specific.lvol.esnap_clone = true 00:16:10.513 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.513 name = efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.513 num_blocks = 2048 00:16:10.513 product_name = Logical Volume 00:16:10.513 supported_io_types.read = true 00:16:10.513 supported_io_types.write = false 00:16:10.513 uuid = efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ false == \f\a\l\s\e ]] 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@293 -- # verify_clone 25b4e0f6-7b88-478c-b029-9da6d4ec997d vol2 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@234 -- # local bdev=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@235 -- # local parent=vol2 00:16:10.513 12:35:53 -- lvol/external_snapshot.sh@237 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.513 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:10.513 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:10.513 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:10.513 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:10.513 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:10.513 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:10.513 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 
'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:10.513 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:10.513 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," 
",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:10.513 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.513 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:10.513 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:10.513 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:10.513 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.513 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.513 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:10.513 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.513 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.513 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.773 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol1 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 
00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=vol2 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:10.774 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.774 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@238 -- # log_jq_out 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:10.774 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.774 aliases[0] = lvs_test/vol1 00:16:10.774 block_size = 512 00:16:10.774 driver_specific.lvol.base_snapshot = vol2 00:16:10.774 driver_specific.lvol.clone = true 00:16:10.774 driver_specific.lvol.esnap_clone = false 00:16:10.774 driver_specific.lvol.external_snapshot_name = null 00:16:10.774 name = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.774 num_blocks = 2048 00:16:10.774 product_name = Logical Volume 00:16:10.774 supported_io_types.read = true 00:16:10.774 supported_io_types.write = true 00:16:10.774 uuid = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@240 -- # [[ true == true ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@241 -- # [[ true == true ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@242 -- # [[ true == true ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@243 -- # [[ vol2 == \v\o\l\2 ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@244 -- # [[ false == false ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@245 -- # [[ null == null ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@298 -- # rpc_cmd bdev_lvol_delete efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.774 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.774 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.774 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@299 -- # NOT rpc_cmd bdev_get_bdevs -b efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.774 12:35:53 -- common/autotest_common.sh@640 -- # local 
es=0 00:16:10.774 12:35:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.774 12:35:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:10.774 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.774 12:35:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:10.774 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.774 12:35:53 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.774 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.774 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.774 [2024-10-01 12:35:53.090143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc 00:16:10.774 request: 00:16:10.774 { 00:16:10.774 "name": "efc61df9-ffdc-47fe-a8b1-a7ea9b3f2edc", 00:16:10.774 "method": "bdev_get_bdevs", 00:16:10.774 "req_id": 1 00:16:10.774 } 00:16:10.774 Got JSON-RPC error response 00:16:10.774 response: 00:16:10.774 { 00:16:10.774 "code": -19, 00:16:10.774 "message": "No such device" 00:16:10.774 } 00:16:10.774 12:35:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:10.774 12:35:53 -- common/autotest_common.sh@643 -- # es=1 00:16:10.774 12:35:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:10.774 12:35:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:10.774 12:35:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@300 -- # verify_esnap_clone 25b4e0f6-7b88-478c-b029-9da6d4ec997d 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@249 -- # local bdev=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@251 -- # local writable=true 00:16:10.774 12:35:53 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.774 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:10.774 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:10.774 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:10.774 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:10.774 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:10.774 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:10.774 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:10.774 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:10.774 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.774 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," 
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:10.774 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:10.775 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:10.775 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.775 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:10.775 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.775 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.775 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol1 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- 
common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:10.775 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.775 aliases[0] = lvs_test/vol1 00:16:10.775 block_size = 512 00:16:10.775 driver_specific.lvol.base_snapshot = null 00:16:10.775 driver_specific.lvol.clone = false 00:16:10.775 driver_specific.lvol.esnap_clone = true 00:16:10.775 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.775 name = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.775 num_blocks = 2048 00:16:10.775 product_name = Logical Volume 00:16:10.775 supported_io_types.read = true 00:16:10.775 supported_io_types.write = true 00:16:10.775 uuid = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ true == \t\r\u\e ]] 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@301 -- # vol2_uuid= 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@306 -- # rpc_cmd bdev_lvol_snapshot 25b4e0f6-7b88-478c-b029-9da6d4ec997d vol3 00:16:10.775 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.775 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.775 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@306 -- # vol3_uuid=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@307 -- # verify_esnap_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a 2abddd12-c08d-40ad-bccf-ab131586ee4c false 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@249 -- # local bdev=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@251 -- # local writable=false 00:16:10.775 12:35:53 -- lvol/external_snapshot.sh@253 -- # 
rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.775 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:10.775 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:10.775 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:10.775 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:10.775 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:10.775 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:10.775 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:10.775 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:10.775 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," 
",.[0].supported_io_types.write' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:10.775 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.775 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:10.775 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:10.775 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:10.775 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.775 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.776 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:10.776 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.776 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.776 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol3 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:10.776 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.776 aliases[0] = lvs_test/vol3 00:16:10.776 block_size = 512 00:16:10.776 driver_specific.lvol.base_snapshot = null 00:16:10.776 driver_specific.lvol.clone = false 00:16:10.776 driver_specific.lvol.esnap_clone = true 00:16:10.776 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:10.776 name = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.776 num_blocks = 2048 00:16:10.776 product_name = Logical Volume 00:16:10.776 supported_io_types.read = true 00:16:10.776 supported_io_types.write = false 00:16:10.776 uuid = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@256 -- # 
[[ true == true ]] 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ false == \f\a\l\s\e ]] 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@308 -- # verify_clone 25b4e0f6-7b88-478c-b029-9da6d4ec997d vol3 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@234 -- # local bdev=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@235 -- # local parent=vol3 00:16:10.776 12:35:53 -- lvol/external_snapshot.sh@237 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.776 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:10.776 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:10.776 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:10.776 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:10.776 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:10.776 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:10.776 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:10.776 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:10.776 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:10.776 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:10.776 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:10.776 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 
00:16:10.776 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:10.776 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:10.776 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:10.776 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:10.776 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.776 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:10.776 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol1 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=vol3 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.037 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.037 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@238 -- # log_jq_out 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.037 12:35:53 -- common/autotest_common.sh@10 
-- # set +x 00:16:11.037 aliases[0] = lvs_test/vol1 00:16:11.037 block_size = 512 00:16:11.037 driver_specific.lvol.base_snapshot = vol3 00:16:11.037 driver_specific.lvol.clone = true 00:16:11.037 driver_specific.lvol.esnap_clone = false 00:16:11.037 driver_specific.lvol.external_snapshot_name = null 00:16:11.037 name = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 num_blocks = 2048 00:16:11.037 product_name = Logical Volume 00:16:11.037 supported_io_types.read = true 00:16:11.037 supported_io_types.write = true 00:16:11.037 uuid = 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@240 -- # [[ true == true ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@241 -- # [[ true == true ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@242 -- # [[ true == true ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@243 -- # [[ vol3 == \v\o\l\3 ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@244 -- # [[ false == false ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@245 -- # [[ null == null ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@313 -- # rpc_cmd bdev_lvol_delete 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.037 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@314 -- # NOT rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.037 12:35:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:11.037 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.037 12:35:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:11.037 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.037 12:35:53 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.037 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 [2024-10-01 12:35:53.350251] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 25b4e0f6-7b88-478c-b029-9da6d4ec997d 00:16:11.037 request: 00:16:11.037 { 00:16:11.037 "name": "25b4e0f6-7b88-478c-b029-9da6d4ec997d", 00:16:11.037 "method": "bdev_get_bdevs", 00:16:11.037 "req_id": 1 00:16:11.037 } 00:16:11.037 Got JSON-RPC error response 00:16:11.037 response: 00:16:11.037 { 00:16:11.037 "code": -19, 00:16:11.037 "message": "No such device" 00:16:11.037 } 00:16:11.037 12:35:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:11.037 12:35:53 -- common/autotest_common.sh@643 -- # es=1 00:16:11.037 12:35:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.037 12:35:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.037 12:35:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@315 -- # verify_esnap_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a 2abddd12-c08d-40ad-bccf-ab131586ee4c false 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@249 -- # local bdev=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@250 -- # local 
parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@251 -- # local writable=false 00:16:11.037 12:35:53 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.037 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.037 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.037 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.037 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.037 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.037 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.037 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.037 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.037 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," 
",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.037 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.037 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.038 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.038 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," 
",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.038 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.038 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.038 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol3 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.038 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.038 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.038 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.038 aliases[0] = lvs_test/vol3 00:16:11.038 block_size = 512 00:16:11.038 driver_specific.lvol.base_snapshot = null 00:16:11.038 driver_specific.lvol.clone = false 00:16:11.038 driver_specific.lvol.esnap_clone = true 00:16:11.038 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.038 name = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 num_blocks = 2048 00:16:11.038 product_name = Logical Volume 00:16:11.038 
supported_io_types.read = true 00:16:11.038 supported_io_types.write = false 00:16:11.038 uuid = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ false == \f\a\l\s\e ]] 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@316 -- # vol1_uuid= 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@322 -- # rpc_cmd bdev_lvol_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a vol4 00:16:11.038 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.038 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.038 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@322 -- # vol4_uuid=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@323 -- # rpc_cmd bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.038 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.038 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.038 [ 00:16:11.038 { 00:16:11.038 "name": "731b2e43-fb4a-4c3a-9451-761180117d9c", 00:16:11.038 "aliases": [ 00:16:11.038 "lvs_test/vol4" 00:16:11.038 ], 00:16:11.038 "product_name": "Logical Volume", 00:16:11.038 "block_size": 512, 00:16:11.038 "num_blocks": 2048, 00:16:11.038 "uuid": "731b2e43-fb4a-4c3a-9451-761180117d9c", 00:16:11.038 "assigned_rate_limits": { 00:16:11.038 "rw_ios_per_sec": 0, 00:16:11.038 "rw_mbytes_per_sec": 0, 00:16:11.038 "r_mbytes_per_sec": 0, 00:16:11.038 "w_mbytes_per_sec": 0 00:16:11.038 }, 00:16:11.038 "claimed": false, 00:16:11.038 "zoned": false, 00:16:11.038 "supported_io_types": { 00:16:11.038 "read": true, 00:16:11.038 "write": true, 00:16:11.038 "unmap": true, 00:16:11.038 "write_zeroes": true, 00:16:11.038 "flush": false, 00:16:11.038 "reset": true, 00:16:11.038 "compare": false, 00:16:11.038 "compare_and_write": false, 00:16:11.038 "abort": false, 00:16:11.038 "nvme_admin": false, 00:16:11.038 "nvme_io": false 00:16:11.038 }, 00:16:11.038 "memory_domains": [ 00:16:11.038 { 00:16:11.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.038 "dma_device_type": 2 00:16:11.038 } 00:16:11.038 ], 00:16:11.038 "driver_specific": { 00:16:11.038 "lvol": { 00:16:11.038 "lvol_store_uuid": "6dbb7000-c83c-4a01-a9eb-1065d2ab77a1", 00:16:11.038 "base_bdev": "Malloc5", 00:16:11.038 "thin_provision": true, 00:16:11.038 "snapshot": false, 00:16:11.038 "clone": true, 00:16:11.038 "base_snapshot": "vol3", 00:16:11.038 "esnap_clone": false 00:16:11.038 } 00:16:11.038 } 00:16:11.038 } 00:16:11.038 ] 00:16:11.038 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@324 -- # verify_esnap_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a 2abddd12-c08d-40ad-bccf-ab131586ee4c false 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@249 -- # local bdev=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@251 -- # local writable=false 00:16:11.038 12:35:53 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 
3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.038 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.038 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.038 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.038 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.038 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.038 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.038 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.038 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.038 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.038 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.038 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.039 12:35:53 -- 
common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.039 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.039 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.039 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.039 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.039 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.039 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol3 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.039 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.039 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.039 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.039 aliases[0] = lvs_test/vol3 00:16:11.039 block_size = 512 00:16:11.039 driver_specific.lvol.base_snapshot = null 00:16:11.039 driver_specific.lvol.clone = false 00:16:11.039 driver_specific.lvol.esnap_clone = true 00:16:11.039 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.039 name = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.039 num_blocks = 2048 00:16:11.039 product_name = Logical Volume 00:16:11.039 supported_io_types.read = true 00:16:11.039 supported_io_types.write = false 00:16:11.039 uuid = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@256 -- # 
[[ true == true ]] 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ false == \f\a\l\s\e ]] 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@325 -- # verify_clone 731b2e43-fb4a-4c3a-9451-761180117d9c vol3 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@234 -- # local bdev=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@235 -- # local parent=vol3 00:16:11.039 12:35:53 -- lvol/external_snapshot.sh@237 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.039 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.039 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.039 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.039 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.039 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.039 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.039 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.039 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.039 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.039 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.039 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.040 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.040 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.040 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.040 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.040 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.040 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.040 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.040 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 
00:16:11.040 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.040 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.040 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.040 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.040 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.040 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.300 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol4 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=vol3 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.300 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.300 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@238 -- # log_jq_out 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.300 12:35:53 -- common/autotest_common.sh@10 
-- # set +x 00:16:11.300 aliases[0] = lvs_test/vol4 00:16:11.300 block_size = 512 00:16:11.300 driver_specific.lvol.base_snapshot = vol3 00:16:11.300 driver_specific.lvol.clone = true 00:16:11.300 driver_specific.lvol.esnap_clone = false 00:16:11.300 driver_specific.lvol.external_snapshot_name = null 00:16:11.300 name = 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.300 num_blocks = 2048 00:16:11.300 product_name = Logical Volume 00:16:11.300 supported_io_types.read = true 00:16:11.300 supported_io_types.write = true 00:16:11.300 uuid = 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@240 -- # [[ true == true ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@241 -- # [[ true == true ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@242 -- # [[ true == true ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@243 -- # [[ vol3 == \v\o\l\3 ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@244 -- # [[ false == false ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@245 -- # [[ null == null ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@331 -- # rpc_cmd bdev_lvol_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a vol5 00:16:11.300 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.300 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.300 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@331 -- # vol5_uuid=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@332 -- # verify_esnap_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a 2abddd12-c08d-40ad-bccf-ab131586ee4c false 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@249 -- # local bdev=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@251 -- # local writable=false 00:16:11.300 12:35:53 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.300 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.300 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.300 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.300 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.300 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.300 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.300 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.300 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.300 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.300 12:35:53 -- 
common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.300 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.300 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," 
",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.301 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.301 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.301 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.301 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.301 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.301 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol3 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.301 12:35:53 -- 
common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.301 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.301 aliases[0] = lvs_test/vol3 00:16:11.301 block_size = 512 00:16:11.301 driver_specific.lvol.base_snapshot = null 00:16:11.301 driver_specific.lvol.clone = false 00:16:11.301 driver_specific.lvol.esnap_clone = true 00:16:11.301 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.301 name = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.301 num_blocks = 2048 00:16:11.301 product_name = Logical Volume 00:16:11.301 supported_io_types.read = true 00:16:11.301 supported_io_types.write = false 00:16:11.301 uuid = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ false == \f\a\l\s\e ]] 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@333 -- # verify_clone 731b2e43-fb4a-4c3a-9451-761180117d9c vol3 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@234 -- # local bdev=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@235 -- # local parent=vol3 00:16:11.301 12:35:53 -- lvol/external_snapshot.sh@237 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.301 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.301 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.301 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.301 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.301 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.301 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.301 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 
'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.301 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.301 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," 
",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.301 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.301 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.301 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.301 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.301 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.301 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.302 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.302 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.302 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.302 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # 
jq_out["$elem"]=lvs_test/vol4 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=vol3 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@238 -- # log_jq_out 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.302 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.302 aliases[0] = lvs_test/vol4 00:16:11.302 block_size = 512 00:16:11.302 driver_specific.lvol.base_snapshot = vol3 00:16:11.302 driver_specific.lvol.clone = true 00:16:11.302 driver_specific.lvol.esnap_clone = false 00:16:11.302 driver_specific.lvol.external_snapshot_name = null 00:16:11.302 name = 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.302 num_blocks = 2048 00:16:11.302 product_name = Logical Volume 00:16:11.302 supported_io_types.read = true 00:16:11.302 supported_io_types.write = true 00:16:11.302 uuid = 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@240 -- # [[ true == true ]] 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@241 -- # [[ true == true ]] 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@242 -- # [[ true == true ]] 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@243 -- # [[ vol3 == \v\o\l\3 ]] 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@244 -- # [[ false == false ]] 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@245 -- # [[ null == null ]] 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@334 -- # verify_clone f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 vol3 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@234 -- # local bdev=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@235 -- # local parent=vol3 00:16:11.302 12:35:53 -- lvol/external_snapshot.sh@237 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 
f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.302 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.302 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.302 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.302 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.302 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.302 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.302 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.302 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.302 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.302 12:35:53 -- 
common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.302 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.302 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.302 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.302 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.302 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.302 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.302 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.302 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.302 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.302 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.562 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.562 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.562 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol5 00:16:11.562 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.562 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.562 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.562 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.562 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.562 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.562 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=vol3 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.563 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.563 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@238 -- # log_jq_out 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.563 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.563 aliases[0] = lvs_test/vol5 00:16:11.563 block_size = 512 00:16:11.563 driver_specific.lvol.base_snapshot = vol3 00:16:11.563 driver_specific.lvol.clone = true 00:16:11.563 driver_specific.lvol.esnap_clone = false 00:16:11.563 driver_specific.lvol.external_snapshot_name = null 00:16:11.563 name = f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.563 num_blocks = 2048 00:16:11.563 product_name = Logical Volume 00:16:11.563 supported_io_types.read = true 00:16:11.563 supported_io_types.write = true 00:16:11.563 uuid = f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@240 -- # [[ true == true ]] 00:16:11.563 12:35:53 -- 
lvol/external_snapshot.sh@241 -- # [[ true == true ]] 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@242 -- # [[ true == true ]] 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@243 -- # [[ vol3 == \v\o\l\3 ]] 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@244 -- # [[ false == false ]] 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@245 -- # [[ null == null ]] 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@337 -- # NOT rpc_cmd bdev_lvol_delete 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.563 12:35:53 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.563 12:35:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_delete 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.563 12:35:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:11.563 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.563 12:35:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:11.563 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.563 12:35:53 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_delete 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.563 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.563 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.563 [2024-10-01 12:35:53.862547] vbdev_lvol.c: 640:_vbdev_lvol_destroy: *ERROR*: Cannot delete lvol 00:16:11.563 request: 00:16:11.563 { 00:16:11.563 "name": "3908e8bf-1e62-48de-a8e4-6dcebb45015a", 00:16:11.563 "method": "bdev_lvol_delete", 00:16:11.563 "req_id": 1 00:16:11.563 } 00:16:11.563 Got JSON-RPC error response 00:16:11.563 response: 00:16:11.563 { 00:16:11.563 "code": -32603, 00:16:11.563 "message": "Operation not permitted" 00:16:11.563 } 00:16:11.563 12:35:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:11.563 12:35:53 -- common/autotest_common.sh@643 -- # es=1 00:16:11.563 12:35:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.563 12:35:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.563 12:35:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@342 -- # rpc_cmd bdev_lvol_delete 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.563 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.563 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.563 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@343 -- # NOT rpc_cmd bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.563 12:35:53 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.563 12:35:53 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.563 12:35:53 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:11.563 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.563 12:35:53 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:11.563 12:35:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.563 12:35:53 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 731b2e43-fb4a-4c3a-9451-761180117d9c 00:16:11.563 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.563 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.563 [2024-10-01 12:35:53.886583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 731b2e43-fb4a-4c3a-9451-761180117d9c 
00:16:11.563 request: 00:16:11.563 { 00:16:11.563 "name": "731b2e43-fb4a-4c3a-9451-761180117d9c", 00:16:11.563 "method": "bdev_get_bdevs", 00:16:11.563 "req_id": 1 00:16:11.563 } 00:16:11.563 Got JSON-RPC error response 00:16:11.563 response: 00:16:11.563 { 00:16:11.563 "code": -19, 00:16:11.563 "message": "No such device" 00:16:11.563 } 00:16:11.563 12:35:53 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:11.563 12:35:53 -- common/autotest_common.sh@643 -- # es=1 00:16:11.563 12:35:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.563 12:35:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.563 12:35:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@344 -- # verify_esnap_clone 3908e8bf-1e62-48de-a8e4-6dcebb45015a 2abddd12-c08d-40ad-bccf-ab131586ee4c false 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@249 -- # local bdev=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@251 -- # local writable=false 00:16:11.563 12:35:53 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.563 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.563 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.563 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.563 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.563 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.563 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.563 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.563 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.563 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.563 
12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.563 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.563 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," 
",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.564 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.564 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.564 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.564 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.564 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.564 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol3 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read 
-r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.564 12:35:53 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.564 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.564 aliases[0] = lvs_test/vol3 00:16:11.564 block_size = 512 00:16:11.564 driver_specific.lvol.base_snapshot = null 00:16:11.564 driver_specific.lvol.clone = false 00:16:11.564 driver_specific.lvol.esnap_clone = true 00:16:11.564 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.564 name = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.564 num_blocks = 2048 00:16:11.564 product_name = Logical Volume 00:16:11.564 supported_io_types.read = true 00:16:11.564 supported_io_types.write = false 00:16:11.564 uuid = 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@257 -- # [[ false == \f\a\l\s\e ]] 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@345 -- # verify_clone f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 vol3 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@234 -- # local bdev=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@235 -- # local parent=vol3 00:16:11.564 12:35:53 -- lvol/external_snapshot.sh@237 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.564 12:35:53 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.564 12:35:53 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.564 12:35:53 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.564 12:35:53 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.564 12:35:53 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.564 12:35:53 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.564 12:35:53 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.564 12:35:53 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.564 12:35:53 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- 
common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," 
",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.564 12:35:53 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.564 12:35:53 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.564 12:35:53 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.564 12:35:53 -- common/autotest_common.sh@620 -- # shift 00:16:11.564 12:35:53 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:53 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.565 12:35:53 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 12:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.565 12:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:11.565 12:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol5 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # 
jq_out["$elem"]=true 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=vol3 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.565 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.565 12:35:54 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@238 -- # log_jq_out 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.565 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.565 aliases[0] = lvs_test/vol5 00:16:11.565 block_size = 512 00:16:11.565 driver_specific.lvol.base_snapshot = vol3 00:16:11.565 driver_specific.lvol.clone = true 00:16:11.565 driver_specific.lvol.esnap_clone = false 00:16:11.565 driver_specific.lvol.external_snapshot_name = null 00:16:11.565 name = f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 num_blocks = 2048 00:16:11.565 product_name = Logical Volume 00:16:11.565 supported_io_types.read = true 00:16:11.565 supported_io_types.write = true 00:16:11.565 uuid = f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@240 -- # [[ true == true ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@241 -- # [[ true == true ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@242 -- # [[ true == true ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@243 -- # [[ vol3 == \v\o\l\3 ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@244 -- # [[ false == false ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@245 -- # [[ null == null ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@350 -- # rpc_cmd bdev_lvol_delete 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.565 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.565 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.565 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@351 -- # NOT rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.565 12:35:54 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.565 12:35:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.565 12:35:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:11.565 12:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.565 12:35:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:11.565 12:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.565 12:35:54 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.565 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.565 12:35:54 -- common/autotest_common.sh@10 -- # set 
+x 00:16:11.565 [2024-10-01 12:35:54.054698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 3908e8bf-1e62-48de-a8e4-6dcebb45015a 00:16:11.565 request: 00:16:11.565 { 00:16:11.565 "name": "3908e8bf-1e62-48de-a8e4-6dcebb45015a", 00:16:11.565 "method": "bdev_get_bdevs", 00:16:11.565 "req_id": 1 00:16:11.565 } 00:16:11.565 Got JSON-RPC error response 00:16:11.565 response: 00:16:11.565 { 00:16:11.565 "code": -19, 00:16:11.565 "message": "No such device" 00:16:11.565 } 00:16:11.565 12:35:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:11.565 12:35:54 -- common/autotest_common.sh@643 -- # es=1 00:16:11.565 12:35:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.565 12:35:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.565 12:35:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@352 -- # verify_esnap_clone f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@249 -- # local bdev=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@250 -- # local parent=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@251 -- # local writable=true 00:16:11.565 12:35:54 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.565 12:35:54 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:11.565 12:35:54 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:11.565 12:35:54 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:11.565 12:35:54 -- common/autotest_common.sh@586 -- # local jq val 00:16:11.565 12:35:54 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:11.565 12:35:54 -- common/autotest_common.sh@596 -- # local lvs 00:16:11.565 12:35:54 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:11.565 12:35:54 -- common/autotest_common.sh@611 -- # local bdev 00:16:11.565 12:35:54 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," 
",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.565 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:11.565 12:35:54 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:11.566 12:35:54 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," 
",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:11.566 12:35:54 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:11.566 12:35:54 -- common/autotest_common.sh@620 -- # shift 00:16:11.566 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.566 12:35:54 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.566 12:35:54 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:11.566 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.566 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.825 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/vol5 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r 
elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.825 12:35:54 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:11.825 12:35:54 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@222 -- # local key 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:11.825 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.825 aliases[0] = lvs_test/vol5 00:16:11.825 block_size = 512 00:16:11.825 driver_specific.lvol.base_snapshot = null 00:16:11.825 driver_specific.lvol.clone = false 00:16:11.825 driver_specific.lvol.esnap_clone = true 00:16:11.825 driver_specific.lvol.external_snapshot_name = 2abddd12-c08d-40ad-bccf-ab131586ee4c 00:16:11.825 name = f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 num_blocks = 2048 00:16:11.825 product_name = Logical Volume 00:16:11.825 supported_io_types.read = true 00:16:11.825 supported_io_types.write = true 00:16:11.825 uuid = f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@257 -- # [[ true == \t\r\u\e ]] 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@259 -- # [[ 2abddd12-c08d-40ad-bccf-ab131586ee4c == \2\a\b\d\d\d\1\2\-\c\0\8\d\-\4\0\a\d\-\b\c\c\f\-\a\b\1\3\1\5\8\6\e\e\4\c ]] 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@357 -- # rpc_cmd bdev_lvol_delete f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.825 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.825 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.825 12:35:54 -- lvol/external_snapshot.sh@358 -- # NOT rpc_cmd bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 12:35:54 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.825 12:35:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.825 12:35:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:11.825 12:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.825 12:35:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:11.825 12:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.826 12:35:54 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.826 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.826 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.826 [2024-10-01 12:35:54.150788] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: f2a4fded-624c-49dd-9d6f-d13d4d1b3e68 00:16:11.826 request: 00:16:11.826 { 00:16:11.826 "name": "f2a4fded-624c-49dd-9d6f-d13d4d1b3e68", 00:16:11.826 "method": "bdev_get_bdevs", 00:16:11.826 "req_id": 1 00:16:11.826 } 00:16:11.826 Got JSON-RPC error response 00:16:11.826 response: 00:16:11.826 { 00:16:11.826 "code": -19, 00:16:11.826 "message": "No such device" 00:16:11.826 } 00:16:11.826 12:35:54 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:16:11.826 12:35:54 -- common/autotest_common.sh@643 -- # es=1 00:16:11.826 12:35:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.826 12:35:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.826 12:35:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.826 12:35:54 -- lvol/external_snapshot.sh@360 -- # rpc_cmd bdev_malloc_delete Malloc5 00:16:11.826 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.826 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.826 [2024-10-01 12:35:54.162849] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Malloc5 being removed: closing lvstore lvs_test 00:16:12.085 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.085 12:35:54 -- lvol/external_snapshot.sh@361 -- # rpc_cmd bdev_malloc_delete esnap1 00:16:12.085 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.085 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.085 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.085 00:16:12.085 real 0m1.750s 00:16:12.085 user 0m1.063s 00:16:12.085 sys 0m0.283s 00:16:12.085 12:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.085 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.085 ************************************ 00:16:12.085 END TEST test_esnap_clones 00:16:12.085 ************************************ 00:16:12.085 12:35:54 -- lvol/external_snapshot.sh@471 -- # run_test test_esnap_late_arrival test_esnap_late_arrival 00:16:12.086 12:35:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:12.086 12:35:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.086 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.086 ************************************ 00:16:12.086 START TEST test_esnap_late_arrival 00:16:12.086 ************************************ 00:16:12.086 12:35:54 -- common/autotest_common.sh@1104 -- # test_esnap_late_arrival 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@365 -- # local bs_dev esnap_dev 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@366 -- # local block_size=512 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@367 -- # local esnap_size_mb=1 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@368 -- # local lvs_cluster_size=16384 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@369 -- # local lvs_uuid esnap_uuid eclone_uuid snap_uuid clone_uuid uuid 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@370 -- # local aio_bdev=test_esnap_reload_aio0 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@371 -- # local lvols 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@375 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@376 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@377 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:12.086 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.086 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.086 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@377 -- # bs_dev=test_esnap_reload_aio0 00:16:12.086 12:35:54 -- lvol/external_snapshot.sh@378 -- # rpc_cmd bdev_lvol_create_lvstore -c 16384 test_esnap_reload_aio0 lvs_test 00:16:12.086 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:16:12.086 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.653 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:54 -- lvol/external_snapshot.sh@378 -- # lvs_uuid=5aa3cbdc-d937-4715-ab7b-753550d2322f 00:16:12.654 12:35:54 -- lvol/external_snapshot.sh@381 -- # esnap_uuid=e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.654 12:35:54 -- lvol/external_snapshot.sh@382 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:12.654 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 12:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:54 -- lvol/external_snapshot.sh@382 -- # esnap_dev=Malloc6 00:16:12.654 12:35:54 -- lvol/external_snapshot.sh@383 -- # rpc_cmd bdev_lvol_clone_bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd lvs_test eclone1 00:16:12.654 12:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@383 -- # eclone_uuid=52400d09-e617-4997-94e5-f4a593886678 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@386 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 [2024-10-01 12:35:55.011986] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@387 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:12.654 12:35:55 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.654 12:35:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:12.654 12:35:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.654 12:35:55 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 request: 00:16:12.654 { 00:16:12.654 "lvs_name": "lvs_test", 00:16:12.654 "method": "bdev_lvol_get_lvstores", 00:16:12.654 "req_id": 1 00:16:12.654 } 00:16:12.654 Got JSON-RPC error response 00:16:12.654 response: 00:16:12.654 { 00:16:12.654 "code": -19, 00:16:12.654 "message": "No such device" 00:16:12.654 } 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:12.654 12:35:55 -- common/autotest_common.sh@643 -- # es=1 00:16:12.654 12:35:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.654 12:35:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.654 12:35:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@390 -- # rpc_cmd bdev_malloc_delete Malloc6 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 12:35:55 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@391 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 [2024-10-01 12:35:55.085525] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.654 [2024-10-01 12:35:55.085596] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 52400d09-e617-4997-94e5-f4a593886678: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:12.654 [2024-10-01 12:35:55.085695] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 52400d09-e617-4997-94e5-f4a593886678: blob is degraded: deferring bdev creation 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@391 -- # bs_dev=test_esnap_reload_aio0 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@392 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@392 -- # lvs_uuid='[ 00:16:12.654 { 00:16:12.654 "uuid": "5aa3cbdc-d937-4715-ab7b-753550d2322f", 00:16:12.654 "name": "lvs_test", 00:16:12.654 "base_bdev": "test_esnap_reload_aio0", 00:16:12.654 "total_data_clusters": 19199, 00:16:12.654 "free_clusters": 19199, 00:16:12.654 "block_size": 512, 00:16:12.654 "cluster_size": 16384 00:16:12.654 } 00:16:12.654 ]' 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@395 -- # NOT rpc_cmd bdev_get_bdevs -b e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.654 12:35:55 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.654 12:35:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.654 12:35:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.654 12:35:55 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 [2024-10-01 12:35:55.112156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.654 request: 00:16:12.654 { 00:16:12.654 "name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd", 00:16:12.654 "method": "bdev_get_bdevs", 00:16:12.654 "req_id": 1 00:16:12.654 } 00:16:12.654 Got JSON-RPC error response 00:16:12.654 response: 00:16:12.654 { 00:16:12.654 "code": -19, 00:16:12.654 "message": "No such device" 00:16:12.654 } 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:12.654 12:35:55 -- common/autotest_common.sh@643 -- # es=1 00:16:12.654 12:35:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.654 12:35:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.654 12:35:55 -- common/autotest_common.sh@667 
-- # (( !es == 0 )) 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@396 -- # NOT rpc_cmd bdev_get_bdevs -b 52400d09-e617-4997-94e5-f4a593886678 00:16:12.654 12:35:55 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.654 12:35:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 52400d09-e617-4997-94e5-f4a593886678 00:16:12.654 12:35:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:12.654 12:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.654 12:35:55 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 52400d09-e617-4997-94e5-f4a593886678 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 [2024-10-01 12:35:55.124185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 52400d09-e617-4997-94e5-f4a593886678 00:16:12.654 request: 00:16:12.654 { 00:16:12.654 "name": "52400d09-e617-4997-94e5-f4a593886678", 00:16:12.654 "method": "bdev_get_bdevs", 00:16:12.654 "req_id": 1 00:16:12.654 } 00:16:12.654 Got JSON-RPC error response 00:16:12.654 response: 00:16:12.654 { 00:16:12.654 "code": -19, 00:16:12.654 "message": "No such device" 00:16:12.654 } 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:12.654 12:35:55 -- common/autotest_common.sh@643 -- # es=1 00:16:12.654 12:35:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.654 12:35:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.654 12:35:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@397 -- # rpc_cmd bdev_lvol_get_lvols 00:16:12.654 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.654 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.654 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@397 -- # lvols='[ 00:16:12.654 { 00:16:12.654 "alias": "lvs_test/eclone1", 00:16:12.654 "uuid": "52400d09-e617-4997-94e5-f4a593886678", 00:16:12.654 "name": "eclone1", 00:16:12.654 "is_thin_provisioned": true, 00:16:12.654 "is_snapshot": false, 00:16:12.654 "is_clone": false, 00:16:12.654 "is_esnap_clone": true, 00:16:12.654 "is_degraded": true, 00:16:12.654 "lvs": { 00:16:12.654 "name": "lvs_test", 00:16:12.654 "uuid": "5aa3cbdc-d937-4715-ab7b-753550d2322f" 00:16:12.654 } 00:16:12.654 } 00:16:12.654 ]' 00:16:12.654 12:35:55 -- lvol/external_snapshot.sh@398 -- # jq -r '.[] | select(.uuid == "52400d09-e617-4997-94e5-f4a593886678").is_esnap_clone' 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@398 -- # [[ true == \t\r\u\e ]] 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@399 -- # jq -r '.[] | select(.uuid == "52400d09-e617-4997-94e5-f4a593886678").is_degraded' 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@399 -- # [[ true == \t\r\u\e ]] 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@402 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:12.914 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.914 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.914 [2024-10-01 12:35:55.257586] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc7 already claimed: type read_many_write_none by module lvol 00:16:12.914 
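The exchange above is the heart of the late-arrival case: the lvstore was reloaded while its external snapshot bdev was absent, so eclone1 came up degraded with its bdev creation deferred, and the moment a malloc bdev is registered with the UUID the blob expects (e4b40d8b-f623-416d-8234-baf5a4c83cbd) the lvol module claims it and hot-plugs it as the blob's back_bs_dev. A minimal sketch of that flow, assuming rpc_cmd is the suite's wrapper around the SPDK RPC client and reusing only commands, paths and UUIDs that appear in this log (not a verbatim excerpt of external_snapshot.sh):

    rpc_cmd bdev_aio_delete test_esnap_reload_aio0        # unload the lvstore's base bdev
    rpc_cmd bdev_lvol_get_lvstores -l lvs_test || true    # fails with "No such device" while unloaded
    rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512
    rpc_cmd bdev_lvol_get_lvols                           # eclone1 listed with is_degraded: true
    rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512    # esnap arrives late
    rpc_cmd bdev_wait_for_examine                         # lvol hot-plugs it as the back_bs_dev
    rpc_cmd bdev_get_bdevs -b 52400d09-e617-4997-94e5-f4a593886678              # eclone1 is usable again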
[2024-10-01 12:35:55.257784] blobstore.c:9230:blob_frozen_set_back_bs_dev: *NOTICE*: blob 0x100000001: hotplugged back_bs_dev 00:16:12.914 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@402 -- # esnap_dev=Malloc7 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@403 -- # rpc_cmd bdev_wait_for_examine 00:16:12.914 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.914 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.914 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@404 -- # verify_esnap_clone 52400d09-e617-4997-94e5-f4a593886678 e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@249 -- # local bdev=52400d09-e617-4997-94e5-f4a593886678 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@250 -- # local parent=e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@251 -- # local writable=true 00:16:12.914 12:35:55 -- lvol/external_snapshot.sh@253 -- # rpc_cmd_simple_data_json bdev bdev_get_bdevs -b 52400d09-e617-4997-94e5-f4a593886678 00:16:12.914 12:35:55 -- common/autotest_common.sh@584 -- # local 'elems=bdev[@]' elem 00:16:12.914 12:35:55 -- common/autotest_common.sh@585 -- # jq_out=() 00:16:12.914 12:35:55 -- common/autotest_common.sh@585 -- # local -gA jq_out 00:16:12.914 12:35:55 -- common/autotest_common.sh@586 -- # local jq val 00:16:12.914 12:35:55 -- common/autotest_common.sh@596 -- # lvs=('uuid' 'name' 'base_bdev' 'total_data_clusters' 'free_clusters' 'block_size' 'cluster_size') 00:16:12.914 12:35:55 -- common/autotest_common.sh@596 -- # local lvs 00:16:12.914 12:35:55 -- common/autotest_common.sh@611 -- # bdev=('name' 'aliases[0]' 'block_size' 'num_blocks' 'uuid' 'product_name' 'supported_io_types.read' 'supported_io_types.write' 'driver_specific.lvol.clone' 'driver_specific.lvol.base_snapshot' 'driver_specific.lvol.esnap_clone' 'driver_specific.lvol.external_snapshot_name') 00:16:12.914 12:35:55 -- common/autotest_common.sh@611 -- # local bdev 00:16:12.914 12:35:55 -- common/autotest_common.sh@613 -- # [[ -v bdev[@] ]] 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0]' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- 
# jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.914 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read' 00:16:12.914 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.915 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write' 00:16:12.915 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.915 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone' 00:16:12.915 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.915 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot' 00:16:12.915 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.915 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone' 00:16:12.915 12:35:55 -- common/autotest_common.sh@615 -- # for elem in "${!elems}" 00:16:12.915 12:35:55 -- common/autotest_common.sh@616 -- # jq='"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," 
",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name' 00:16:12.915 12:35:55 -- common/autotest_common.sh@618 -- # jq+=',"\n"' 00:16:12.915 12:35:55 -- common/autotest_common.sh@620 -- # shift 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@582 -- # rpc_cmd bdev_get_bdevs -b 52400d09-e617-4997-94e5-f4a593886678 00:16:12.915 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.915 12:35:55 -- common/autotest_common.sh@582 -- # jq -jr '"name"," ",.[0].name,"\n","aliases[0]"," ",.[0].aliases[0],"\n","block_size"," ",.[0].block_size,"\n","num_blocks"," ",.[0].num_blocks,"\n","uuid"," ",.[0].uuid,"\n","product_name"," ",.[0].product_name,"\n","supported_io_types.read"," ",.[0].supported_io_types.read,"\n","supported_io_types.write"," ",.[0].supported_io_types.write,"\n","driver_specific.lvol.clone"," ",.[0].driver_specific.lvol.clone,"\n","driver_specific.lvol.base_snapshot"," ",.[0].driver_specific.lvol.base_snapshot,"\n","driver_specific.lvol.esnap_clone"," ",.[0].driver_specific.lvol.esnap_clone,"\n","driver_specific.lvol.external_snapshot_name"," ",.[0].driver_specific.lvol.external_snapshot_name,"\n"' 00:16:12.915 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.915 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=52400d09-e617-4997-94e5-f4a593886678 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=lvs_test/eclone1 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=512 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=2048 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=52400d09-e617-4997-94e5-f4a593886678 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]='Logical Volume' 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=false 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=null 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=true 00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@622 -- # jq_out["$elem"]=e4b40d8b-f623-416d-8234-baf5a4c83cbd 
00:16:12.915 12:35:55 -- common/autotest_common.sh@621 -- # read -r elem val 00:16:12.915 12:35:55 -- common/autotest_common.sh@624 -- # (( 12 > 0 )) 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@254 -- # log_jq_out 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@222 -- # local key 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@224 -- # xtrace_disable 00:16:12.915 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.915 aliases[0] = lvs_test/eclone1 00:16:12.915 block_size = 512 00:16:12.915 driver_specific.lvol.base_snapshot = null 00:16:12.915 driver_specific.lvol.clone = false 00:16:12.915 driver_specific.lvol.esnap_clone = true 00:16:12.915 driver_specific.lvol.external_snapshot_name = e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:12.915 name = 52400d09-e617-4997-94e5-f4a593886678 00:16:12.915 num_blocks = 2048 00:16:12.915 product_name = Logical Volume 00:16:12.915 supported_io_types.read = true 00:16:12.915 supported_io_types.write = true 00:16:12.915 uuid = 52400d09-e617-4997-94e5-f4a593886678 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@256 -- # [[ true == true ]] 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@257 -- # [[ true == \t\r\u\e ]] 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@258 -- # [[ true == true ]] 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@259 -- # [[ e4b40d8b-f623-416d-8234-baf5a4c83cbd == \e\4\b\4\0\d\8\b\-\f\6\2\3\-\4\1\6\d\-\8\2\3\4\-\b\a\f\5\a\4\c\8\3\c\b\d ]] 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@406 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:12.915 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.915 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.915 [2024-10-01 12:35:55.360343] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:12.915 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.915 12:35:55 -- lvol/external_snapshot.sh@407 -- # rpc_cmd bdev_malloc_delete Malloc7 00:16:12.915 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.915 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.915 ************************************ 00:16:12.915 END TEST test_esnap_late_arrival 00:16:12.915 ************************************ 00:16:12.915 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.915 00:16:12.915 real 0m0.894s 00:16:12.915 user 0m0.201s 00:16:12.915 sys 0m0.073s 00:16:12.915 12:35:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.915 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@472 -- # run_test test_esnap_remove_degraded test_esnap_remove_degraded 00:16:13.175 12:35:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:13.175 12:35:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:13.175 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.175 ************************************ 00:16:13.175 START TEST test_esnap_remove_degraded 00:16:13.175 ************************************ 00:16:13.175 12:35:55 -- common/autotest_common.sh@1104 -- # test_esnap_remove_degraded 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@411 -- # local bs_dev esnap_dev 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@412 -- # local block_size=512 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@413 -- # local esnap_size_mb=1 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@414 -- # local lvs_cluster_size=16384 
00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@415 -- # local lvs_uuid esnap_uuid eclone_uuid snap_uuid clone_uuid uuid 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@416 -- # local aio_bdev=test_esnap_reload_aio0 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@417 -- # local lvols 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@421 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@422 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@423 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:13.175 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.175 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.175 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@423 -- # bs_dev=test_esnap_reload_aio0 00:16:13.175 12:35:55 -- lvol/external_snapshot.sh@424 -- # rpc_cmd bdev_lvol_create_lvstore -c 16384 test_esnap_reload_aio0 lvs_test 00:16:13.175 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.175 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.434 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.434 12:35:55 -- lvol/external_snapshot.sh@424 -- # lvs_uuid=38f582c7-3a32-4dfb-9deb-88de60769354 00:16:13.434 12:35:55 -- lvol/external_snapshot.sh@427 -- # esnap_uuid=e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:13.434 12:35:55 -- lvol/external_snapshot.sh@428 -- # rpc_cmd bdev_malloc_create -u e4b40d8b-f623-416d-8234-baf5a4c83cbd 1 512 00:16:13.434 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.435 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.435 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.435 12:35:55 -- lvol/external_snapshot.sh@428 -- # esnap_dev=Malloc8 00:16:13.435 12:35:55 -- lvol/external_snapshot.sh@429 -- # rpc_cmd bdev_lvol_clone_bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd lvs_test eclone 00:16:13.435 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.435 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.435 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.435 12:35:55 -- lvol/external_snapshot.sh@429 -- # eclone_uuid=95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.435 12:35:55 -- lvol/external_snapshot.sh@430 -- # rpc_cmd bdev_get_bdevs -b 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.435 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.435 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.435 [ 00:16:13.435 { 00:16:13.435 "name": "95bc9ceb-7650-4075-ac41-dc2e5c002714", 00:16:13.435 "aliases": [ 00:16:13.435 "lvs_test/eclone" 00:16:13.435 ], 00:16:13.435 "product_name": "Logical Volume", 00:16:13.435 "block_size": 512, 00:16:13.435 "num_blocks": 2048, 00:16:13.435 "uuid": "95bc9ceb-7650-4075-ac41-dc2e5c002714", 00:16:13.435 "assigned_rate_limits": { 00:16:13.435 "rw_ios_per_sec": 0, 00:16:13.435 "rw_mbytes_per_sec": 0, 00:16:13.435 "r_mbytes_per_sec": 0, 00:16:13.435 "w_mbytes_per_sec": 0 00:16:13.435 }, 00:16:13.435 "claimed": false, 00:16:13.435 "zoned": false, 00:16:13.435 "supported_io_types": { 00:16:13.435 "read": true, 00:16:13.435 "write": true, 00:16:13.435 "unmap": true, 00:16:13.435 "write_zeroes": true, 00:16:13.435 "flush": false, 00:16:13.435 "reset": true, 00:16:13.435 "compare": false, 
00:16:13.435 "compare_and_write": false, 00:16:13.435 "abort": false, 00:16:13.435 "nvme_admin": false, 00:16:13.435 "nvme_io": false 00:16:13.435 }, 00:16:13.435 "memory_domains": [ 00:16:13.435 { 00:16:13.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.435 "dma_device_type": 2 00:16:13.435 } 00:16:13.435 ], 00:16:13.435 "driver_specific": { 00:16:13.435 "lvol": { 00:16:13.435 "lvol_store_uuid": "38f582c7-3a32-4dfb-9deb-88de60769354", 00:16:13.435 "base_bdev": "test_esnap_reload_aio0", 00:16:13.435 "thin_provision": true, 00:16:13.435 "snapshot": false, 00:16:13.435 "clone": false, 00:16:13.435 "esnap_clone": true, 00:16:13.435 "external_snapshot_name": "e4b40d8b-f623-416d-8234-baf5a4c83cbd" 00:16:13.435 } 00:16:13.435 } 00:16:13.435 } 00:16:13.435 ] 00:16:13.435 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.435 12:35:55 -- lvol/external_snapshot.sh@433 -- # rpc_cmd bdev_lvol_set_read_only 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.435 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.435 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.435 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.435 12:35:55 -- lvol/external_snapshot.sh@434 -- # rpc_cmd bdev_lvol_clone 95bc9ceb-7650-4075-ac41-dc2e5c002714 clone 00:16:13.435 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.435 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.694 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.694 12:35:55 -- lvol/external_snapshot.sh@434 -- # clone_uuid=7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.694 12:35:55 -- lvol/external_snapshot.sh@435 -- # rpc_cmd bdev_get_bdevs -b 7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.694 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.694 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.694 [ 00:16:13.694 { 00:16:13.694 "name": "7e99edb3-a955-412c-8af2-efb363a09870", 00:16:13.694 "aliases": [ 00:16:13.694 "lvs_test/clone" 00:16:13.694 ], 00:16:13.694 "product_name": "Logical Volume", 00:16:13.694 "block_size": 512, 00:16:13.694 "num_blocks": 2048, 00:16:13.695 "uuid": "7e99edb3-a955-412c-8af2-efb363a09870", 00:16:13.695 "assigned_rate_limits": { 00:16:13.695 "rw_ios_per_sec": 0, 00:16:13.695 "rw_mbytes_per_sec": 0, 00:16:13.695 "r_mbytes_per_sec": 0, 00:16:13.695 "w_mbytes_per_sec": 0 00:16:13.695 }, 00:16:13.695 "claimed": false, 00:16:13.695 "zoned": false, 00:16:13.695 "supported_io_types": { 00:16:13.695 "read": true, 00:16:13.695 "write": true, 00:16:13.695 "unmap": true, 00:16:13.695 "write_zeroes": true, 00:16:13.695 "flush": false, 00:16:13.695 "reset": true, 00:16:13.695 "compare": false, 00:16:13.695 "compare_and_write": false, 00:16:13.695 "abort": false, 00:16:13.695 "nvme_admin": false, 00:16:13.695 "nvme_io": false 00:16:13.695 }, 00:16:13.695 "driver_specific": { 00:16:13.695 "lvol": { 00:16:13.695 "lvol_store_uuid": "38f582c7-3a32-4dfb-9deb-88de60769354", 00:16:13.695 "base_bdev": "test_esnap_reload_aio0", 00:16:13.695 "thin_provision": true, 00:16:13.695 "snapshot": false, 00:16:13.695 "clone": true, 00:16:13.695 "base_snapshot": "eclone", 00:16:13.695 "esnap_clone": false 00:16:13.695 } 00:16:13.695 } 00:16:13.695 } 00:16:13.695 ] 00:16:13.695 12:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.695 12:35:55 -- lvol/external_snapshot.sh@438 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:13.695 12:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:16:13.695 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 [2024-10-01 12:35:55.988433] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:13.695 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@439 -- # NOT rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:13.695 12:35:56 -- common/autotest_common.sh@640 -- # local es=0 00:16:13.695 12:35:56 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:13.695 12:35:56 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:13.695 12:35:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.695 12:35:56 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:13.695 12:35:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.695 12:35:56 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:13.695 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.695 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 request: 00:16:13.695 { 00:16:13.695 "lvs_name": "lvs_test", 00:16:13.695 "method": "bdev_lvol_get_lvstores", 00:16:13.695 "req_id": 1 00:16:13.695 } 00:16:13.695 Got JSON-RPC error response 00:16:13.695 response: 00:16:13.695 { 00:16:13.695 "code": -19, 00:16:13.695 "message": "No such device" 00:16:13.695 } 00:16:13.695 12:35:56 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:13.695 12:35:56 -- common/autotest_common.sh@643 -- # es=1 00:16:13.695 12:35:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:13.695 12:35:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:13.695 12:35:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@440 -- # rpc_cmd bdev_malloc_delete Malloc8 00:16:13.695 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.695 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@441 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 test_esnap_reload_aio0 512 00:16:13.695 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.695 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 [2024-10-01 12:35:56.065751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:13.695 [2024-10-01 12:35:56.065817] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 95bc9ceb-7650-4075-ac41-dc2e5c002714: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:13.695 [2024-10-01 12:35:56.065853] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 95bc9ceb-7650-4075-ac41-dc2e5c002714: blob is degraded: deferring bdev creation 00:16:13.695 [2024-10-01 12:35:56.065920] vbdev_lvol.c:1112:_create_lvol_disk: *NOTICE*: lvol 7e99edb3-a955-412c-8af2-efb363a09870: blob is degraded: deferring bdev creation 00:16:13.695 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@441 -- # bs_dev=test_esnap_reload_aio0 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@442 -- # rpc_cmd bdev_lvol_get_lvstores -l lvs_test 00:16:13.695 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.695 12:35:56 -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.695 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@442 -- # lvs_uuid='[ 00:16:13.695 { 00:16:13.695 "uuid": "38f582c7-3a32-4dfb-9deb-88de60769354", 00:16:13.695 "name": "lvs_test", 00:16:13.695 "base_bdev": "test_esnap_reload_aio0", 00:16:13.695 "total_data_clusters": 19199, 00:16:13.695 "free_clusters": 19199, 00:16:13.695 "block_size": 512, 00:16:13.695 "cluster_size": 16384 00:16:13.695 } 00:16:13.695 ]' 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@445 -- # rpc_cmd bdev_lvol_get_lvols 00:16:13.695 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.695 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@445 -- # lvols='[ 00:16:13.695 { 00:16:13.695 "alias": "lvs_test/eclone", 00:16:13.695 "uuid": "95bc9ceb-7650-4075-ac41-dc2e5c002714", 00:16:13.695 "name": "eclone", 00:16:13.695 "is_thin_provisioned": true, 00:16:13.695 "is_snapshot": true, 00:16:13.695 "is_clone": false, 00:16:13.695 "is_esnap_clone": true, 00:16:13.695 "is_degraded": true, 00:16:13.695 "lvs": { 00:16:13.695 "name": "lvs_test", 00:16:13.695 "uuid": "38f582c7-3a32-4dfb-9deb-88de60769354" 00:16:13.695 } 00:16:13.695 }, 00:16:13.695 { 00:16:13.695 "alias": "lvs_test/clone", 00:16:13.695 "uuid": "7e99edb3-a955-412c-8af2-efb363a09870", 00:16:13.695 "name": "clone", 00:16:13.695 "is_thin_provisioned": true, 00:16:13.695 "is_snapshot": false, 00:16:13.695 "is_clone": true, 00:16:13.695 "is_esnap_clone": false, 00:16:13.695 "is_degraded": true, 00:16:13.695 "lvs": { 00:16:13.695 "name": "lvs_test", 00:16:13.695 "uuid": "38f582c7-3a32-4dfb-9deb-88de60769354" 00:16:13.695 } 00:16:13.695 } 00:16:13.695 ]' 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@446 -- # jq -r '.[] | select(.uuid == "95bc9ceb-7650-4075-ac41-dc2e5c002714").is_degraded' 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@446 -- # [[ true == \t\r\u\e ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@447 -- # jq -r '.[] | select(.uuid == "7e99edb3-a955-412c-8af2-efb363a09870").is_degraded' 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@447 -- # [[ true == \t\r\u\e ]] 00:16:13.695 12:35:56 -- lvol/external_snapshot.sh@448 -- # NOT rpc_cmd bdev_get_bdevs -b 7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.695 12:35:56 -- common/autotest_common.sh@640 -- # local es=0 00:16:13.695 12:35:56 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.695 12:35:56 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:13.695 12:35:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.695 12:35:56 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:13.695 12:35:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.695 12:35:56 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.695 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.695 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 [2024-10-01 12:35:56.216071] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.955 request: 00:16:13.955 { 00:16:13.955 "name": "7e99edb3-a955-412c-8af2-efb363a09870", 00:16:13.955 "method": "bdev_get_bdevs", 00:16:13.955 "req_id": 
1 00:16:13.955 } 00:16:13.955 Got JSON-RPC error response 00:16:13.955 response: 00:16:13.955 { 00:16:13.955 "code": -19, 00:16:13.955 "message": "No such device" 00:16:13.955 } 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:13.955 12:35:56 -- common/autotest_common.sh@643 -- # es=1 00:16:13.955 12:35:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:13.955 12:35:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:13.955 12:35:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@449 -- # NOT rpc_cmd bdev_get_bdevs -b 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.955 12:35:56 -- common/autotest_common.sh@640 -- # local es=0 00:16:13.955 12:35:56 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd bdev_get_bdevs -b 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.955 12:35:56 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:13.955 12:35:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.955 12:35:56 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:13.955 12:35:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:13.955 12:35:56 -- common/autotest_common.sh@643 -- # rpc_cmd bdev_get_bdevs -b 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.955 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 [2024-10-01 12:35:56.232081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.955 request: 00:16:13.955 { 00:16:13.955 "name": "95bc9ceb-7650-4075-ac41-dc2e5c002714", 00:16:13.955 "method": "bdev_get_bdevs", 00:16:13.955 "req_id": 1 00:16:13.955 } 00:16:13.955 Got JSON-RPC error response 00:16:13.955 response: 00:16:13.955 { 00:16:13.955 "code": -19, 00:16:13.955 "message": "No such device" 00:16:13.955 } 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:13.955 12:35:56 -- common/autotest_common.sh@643 -- # es=1 00:16:13.955 12:35:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:13.955 12:35:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:13.955 12:35:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@452 -- # rpc_cmd bdev_lvol_delete 7e99edb3-a955-412c-8af2-efb363a09870 00:16:13.955 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@453 -- # rpc_cmd bdev_lvol_get_lvols 00:16:13.955 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@453 -- # lvols='[ 00:16:13.955 { 00:16:13.955 "alias": "lvs_test/eclone", 00:16:13.955 "uuid": "95bc9ceb-7650-4075-ac41-dc2e5c002714", 00:16:13.955 "name": "eclone", 00:16:13.955 "is_thin_provisioned": true, 00:16:13.955 "is_snapshot": true, 00:16:13.955 "is_clone": false, 00:16:13.955 "is_esnap_clone": true, 00:16:13.955 "is_degraded": true, 00:16:13.955 "lvs": { 00:16:13.955 "name": "lvs_test", 00:16:13.955 "uuid": "38f582c7-3a32-4dfb-9deb-88de60769354" 00:16:13.955 } 00:16:13.955 } 00:16:13.955 ]' 00:16:13.955 12:35:56 -- 
lvol/external_snapshot.sh@454 -- # jq -r '. | length' 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@454 -- # [[ 1 == \1 ]] 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@455 -- # rpc_cmd bdev_lvol_delete 95bc9ceb-7650-4075-ac41-dc2e5c002714 00:16:13.955 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 [2024-10-01 12:35:56.324508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e4b40d8b-f623-416d-8234-baf5a4c83cbd 00:16:13.955 [2024-10-01 12:35:56.324575] vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *NOTICE*: lvol 95bc9ceb-7650-4075-ac41-dc2e5c002714: bdev e4b40d8b-f623-416d-8234-baf5a4c83cbd not available: lvol is degraded 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@456 -- # rpc_cmd bdev_lvol_get_lvols 00:16:13.955 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@456 -- # lvols='[]' 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@457 -- # jq -r '. | length' 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@457 -- # [[ 0 == \0 ]] 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@459 -- # rpc_cmd bdev_aio_delete test_esnap_reload_aio0 00:16:13.955 12:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 [2024-10-01 12:35:56.396169] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev test_esnap_reload_aio0 being removed: closing lvstore lvs_test 00:16:13.955 ************************************ 00:16:13.955 END TEST test_esnap_remove_degraded 00:16:13.955 ************************************ 00:16:13.955 12:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.955 00:16:13.955 real 0m0.978s 00:16:13.955 user 0m0.254s 00:16:13.955 sys 0m0.068s 00:16:13.955 12:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.955 12:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@474 -- # trap - SIGINT SIGTERM SIGPIPE EXIT 00:16:13.955 12:35:56 -- lvol/external_snapshot.sh@475 -- # killprocess 63896 00:16:13.955 12:35:56 -- common/autotest_common.sh@926 -- # '[' -z 63896 ']' 00:16:13.955 12:35:56 -- common/autotest_common.sh@930 -- # kill -0 63896 00:16:13.955 12:35:56 -- common/autotest_common.sh@931 -- # uname 00:16:13.955 12:35:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:13.955 12:35:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63896 00:16:14.215 killing process with pid 63896 00:16:14.215 12:35:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:14.215 12:35:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:14.215 12:35:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63896' 00:16:14.215 12:35:56 -- common/autotest_common.sh@945 -- # kill 63896 00:16:14.215 12:35:56 -- common/autotest_common.sh@950 -- # wait 63896 00:16:16.121 12:35:58 -- lvol/external_snapshot.sh@476 -- # rm -f /home/vagrant/spdk_repo/spdk/test/lvol/aio_bdev_0 00:16:16.121 ************************************ 00:16:16.121 END TEST lvol_external_snapshot 00:16:16.121 ************************************ 00:16:16.121 00:16:16.121 real 0m12.001s 
00:16:16.121 user 0m13.729s 00:16:16.121 sys 0m1.911s 00:16:16.121 12:35:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.121 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:16.121 12:35:58 -- lvol/lvol.sh@23 -- # timing_exit basic 00:16:16.121 12:35:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:16.121 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:16.121 12:35:58 -- lvol/lvol.sh@25 -- # timing_exit lvol 00:16:16.121 12:35:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:16.121 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:16.121 ************************************ 00:16:16.121 END TEST lvol 00:16:16.121 ************************************ 00:16:16.121 00:16:16.121 real 3m52.861s 00:16:16.121 user 4m11.784s 00:16:16.121 sys 0m53.809s 00:16:16.121 12:35:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.121 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:16.121 12:35:58 -- spdk/autotest.sh@321 -- # run_test blob_io_wait /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/blob_io_wait.sh 00:16:16.121 12:35:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:16.121 12:35:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.121 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:16.121 ************************************ 00:16:16.122 START TEST blob_io_wait 00:16:16.122 ************************************ 00:16:16.122 12:35:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/blob_io_wait.sh 00:16:16.122 * Looking for test storage... 00:16:16.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait 00:16:16.122 12:35:58 -- blob_io_wait/blob_io_wait.sh@11 -- # truncate -s 64M /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/aio.bdev 00:16:16.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.122 12:35:58 -- blob_io_wait/blob_io_wait.sh@14 -- # bdev_svc_pid=64494 00:16:16.122 12:35:58 -- blob_io_wait/blob_io_wait.sh@13 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc --wait-for-rpc 00:16:16.122 12:35:58 -- blob_io_wait/blob_io_wait.sh@16 -- # trap 'rm -f $testdir/bdevperf.json; rm -f $testdir/aio.bdev; killprocess $bdev_svc_pid; exit 1' SIGINT SIGTERM EXIT 00:16:16.122 12:35:58 -- blob_io_wait/blob_io_wait.sh@18 -- # waitforlisten 64494 00:16:16.122 12:35:58 -- common/autotest_common.sh@819 -- # '[' -z 64494 ']' 00:16:16.122 12:35:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.122 12:35:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.122 12:35:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.122 12:35:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.122 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:16.381 [2024-10-01 12:35:58.747323] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:16.381 [2024-10-01 12:35:58.747803] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64494 ] 00:16:16.646 [2024-10-01 12:35:58.918957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.646 [2024-10-01 12:35:59.111476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.215 12:35:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.215 12:35:59 -- common/autotest_common.sh@852 -- # return 0 00:16:17.215 12:35:59 -- blob_io_wait/blob_io_wait.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 8192 --large-pool-count 1024 00:16:17.215 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.215 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.215 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.215 12:35:59 -- blob_io_wait/blob_io_wait.sh@21 -- # rpc_cmd bdev_set_options --bdev-io-pool-size 128 --bdev-io-cache-size 1 00:16:17.215 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.215 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.215 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.215 12:35:59 -- blob_io_wait/blob_io_wait.sh@22 -- # rpc_cmd framework_start_init 00:16:17.215 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.215 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.515 12:35:59 -- blob_io_wait/blob_io_wait.sh@23 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/aio.bdev aio0 4096 00:16:17.515 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.515 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 aio0 00:16:17.515 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.515 12:35:59 -- blob_io_wait/blob_io_wait.sh@24 -- # rpc_cmd bdev_lvol_create_lvstore aio0 lvs0 00:16:17.515 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.515 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 e82b680d-778c-49cc-9da5-5445a07fdf70 00:16:17.515 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.515 12:35:59 -- blob_io_wait/blob_io_wait.sh@25 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 32 00:16:17.515 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.515 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 99b5b156-d2e0-4393-998c-0132b81ef9fa 00:16:17.515 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.515 12:35:59 -- blob_io_wait/blob_io_wait.sh@26 -- # rpc_cmd save_config 00:16:17.515 12:35:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.515 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:17.515 12:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.515 12:35:59 -- blob_io_wait/blob_io_wait.sh@28 -- # killprocess 64494 00:16:17.515 12:35:59 -- common/autotest_common.sh@926 -- # '[' -z 64494 ']' 00:16:17.515 12:35:59 -- common/autotest_common.sh@930 -- # kill -0 64494 00:16:17.515 12:35:59 -- common/autotest_common.sh@931 -- # uname 00:16:17.515 12:35:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:17.515 12:35:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64494 00:16:17.515 
killing process with pid 64494 00:16:17.515 12:35:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:17.515 12:35:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:17.515 12:35:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64494' 00:16:17.515 12:35:59 -- common/autotest_common.sh@945 -- # kill 64494 00:16:17.515 12:35:59 -- common/autotest_common.sh@950 -- # wait 64494 00:16:18.452 12:36:00 -- blob_io_wait/blob_io_wait.sh@31 -- # bdev_perf_pid=64526 00:16:18.452 12:36:00 -- blob_io_wait/blob_io_wait.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/bdevperf.json -q 128 -o 4096 -w write -t 5 -r /var/tmp/spdk.sock 00:16:18.452 12:36:00 -- blob_io_wait/blob_io_wait.sh@32 -- # waitforlisten 64526 00:16:18.452 12:36:00 -- common/autotest_common.sh@819 -- # '[' -z 64526 ']' 00:16:18.452 12:36:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.452 12:36:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.452 12:36:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.452 12:36:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.452 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:18.452 [2024-10-01 12:36:00.961677] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:18.452 [2024-10-01 12:36:00.961839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64526 ] 00:16:18.712 [2024-10-01 12:36:01.118120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.971 [2024-10-01 12:36:01.293075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.230 Running I/O for 5 seconds... 
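The fixture logged above reduces to a short RPC sequence; the following is a minimal sketch reconstructed from the commands visible in this log, not the test script itself (rpc_cmd in the log is the autotest wrapper around scripts/rpc.py, and redirecting save_config into bdevperf.json is an assumption inferred from the --json path bdevperf is given below):

  # Sketch only -- reconstructed from the log above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  TESTDIR=$SPDK_DIR/test/blobstore/blob_io_wait
  truncate -s 64M "$TESTDIR/aio.bdev"                                  # backing file for the AIO bdev
  "$SPDK_DIR/test/app/bdev_svc/bdev_svc" --wait-for-rpc &              # RPC-only helper app
  svc_pid=$!
  # (the real script waits for /var/tmp/spdk.sock before issuing RPCs)
  "$SPDK_DIR/scripts/rpc.py" iobuf_set_options --small-pool-count 8192 --large-pool-count 1024
  "$SPDK_DIR/scripts/rpc.py" bdev_set_options --bdev-io-pool-size 128 --bdev-io-cache-size 1
  "$SPDK_DIR/scripts/rpc.py" framework_start_init                      # finish startup once options are set
  "$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$TESTDIR/aio.bdev" aio0 4096
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_create_lvstore aio0 lvs0
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_create -l lvs0 lvol0 32         # small lvol on the 64M store
  "$SPDK_DIR/scripts/rpc.py" save_config > "$TESTDIR/bdevperf.json"    # assumption: config reused by bdevperf below
  kill "$svc_pid"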
00:16:19.488 12:36:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.488 12:36:01 -- common/autotest_common.sh@852 -- # return 0 00:16:19.488 12:36:01 -- blob_io_wait/blob_io_wait.sh@33 -- # rpc_cmd bdev_enable_histogram aio0 -e 00:16:19.488 12:36:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.488 12:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 12:36:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.488 12:36:01 -- blob_io_wait/blob_io_wait.sh@34 -- # sleep 2 00:16:22.025 12:36:03 -- blob_io_wait/blob_io_wait.sh@35 -- # rpc_cmd bdev_get_histogram aio0 00:16:22.025 12:36:03 -- blob_io_wait/blob_io_wait.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/histogram.py 00:16:22.025 12:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.025 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:16:22.025 12:36:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.025 Latency histogram 00:16:22.025 ============================================================================== 00:16:22.025 Range in us Cumulative IO count 00:16:22.025 1854.371 - 1861.818: 0.0047% ( 1) 00:16:22.025 1906.502 - 1921.396: 0.0141% ( 2) 00:16:22.025 1921.396 - 1936.291: 0.0188% ( 1) 00:16:22.025 1936.291 - 1951.185: 0.0330% ( 3) 00:16:22.025 1951.185 - 1966.080: 0.0471% ( 3) 00:16:22.025 1966.080 - 1980.975: 0.0612% ( 3) 00:16:22.025 1980.975 - 1995.869: 0.0707% ( 2) 00:16:22.025 1995.869 - 2010.764: 0.1036% ( 7) 00:16:22.025 2010.764 - 2025.658: 0.1178% ( 3) 00:16:22.025 2040.553 - 2055.447: 0.1366% ( 4) 00:16:22.025 2055.447 - 2070.342: 0.1413% ( 1) 00:16:22.025 2070.342 - 2085.236: 0.1649% ( 5) 00:16:22.025 2085.236 - 2100.131: 0.1884% ( 5) 00:16:22.025 2115.025 - 2129.920: 0.2073% ( 4) 00:16:22.025 2129.920 - 2144.815: 0.2449% ( 8) 00:16:22.025 2144.815 - 2159.709: 0.2779% ( 7) 00:16:22.025 2159.709 - 2174.604: 0.3109% ( 7) 00:16:22.025 2174.604 - 2189.498: 0.3439% ( 7) 00:16:22.025 2189.498 - 2204.393: 0.3533% ( 2) 00:16:22.025 2204.393 - 2219.287: 0.3768% ( 5) 00:16:22.025 2219.287 - 2234.182: 0.3863% ( 2) 00:16:22.025 2234.182 - 2249.076: 0.3910% ( 1) 00:16:22.025 2249.076 - 2263.971: 0.4051% ( 3) 00:16:22.025 2263.971 - 2278.865: 0.4239% ( 4) 00:16:22.025 2278.865 - 2293.760: 0.4475% ( 5) 00:16:22.025 2293.760 - 2308.655: 0.4569% ( 2) 00:16:22.025 2308.655 - 2323.549: 0.4805% ( 5) 00:16:22.025 2323.549 - 2338.444: 0.5040% ( 5) 00:16:22.025 2338.444 - 2353.338: 0.5370% ( 7) 00:16:22.025 2353.338 - 2368.233: 0.5511% ( 3) 00:16:22.025 2368.233 - 2383.127: 0.5935% ( 9) 00:16:22.025 2383.127 - 2398.022: 0.6406% ( 10) 00:16:22.025 2398.022 - 2412.916: 0.6736% ( 7) 00:16:22.025 2412.916 - 2427.811: 0.6972% ( 5) 00:16:22.025 2427.811 - 2442.705: 0.7490% ( 11) 00:16:22.025 2442.705 - 2457.600: 0.7725% ( 5) 00:16:22.025 2457.600 - 2472.495: 0.8291% ( 12) 00:16:22.025 2472.495 - 2487.389: 0.8714% ( 9) 00:16:22.025 2487.389 - 2502.284: 0.9515% ( 17) 00:16:22.025 2502.284 - 2517.178: 0.9845% ( 7) 00:16:22.025 2517.178 - 2532.073: 1.0175% ( 7) 00:16:22.025 2532.073 - 2546.967: 1.0928% ( 16) 00:16:22.025 2546.967 - 2561.862: 1.1258% ( 7) 00:16:22.025 2561.862 - 2576.756: 1.1918% ( 14) 00:16:22.025 2576.756 - 2591.651: 1.2483% ( 12) 00:16:22.025 2591.651 - 2606.545: 1.3472% ( 21) 00:16:22.025 2606.545 - 2621.440: 1.4085% ( 13) 00:16:22.025 2621.440 - 2636.335: 1.5215% ( 24) 00:16:22.025 2636.335 - 2651.229: 1.6204% ( 21) 00:16:22.025 2651.229 - 2666.124: 1.7429% ( 26) 00:16:22.025 2666.124 - 2681.018: 1.8795% ( 29) 00:16:22.025 2681.018 - 
2695.913: 2.0255% ( 31) 00:16:22.025 2695.913 - 2710.807: 2.2469% ( 47) 00:16:22.025 2710.807 - 2725.702: 2.5107% ( 56) 00:16:22.025 2725.702 - 2740.596: 2.7933% ( 60) 00:16:22.025 2740.596 - 2755.491: 3.0807% ( 61) 00:16:22.025 2755.491 - 2770.385: 3.3727% ( 62) 00:16:22.025 2770.385 - 2785.280: 3.7684% ( 84) 00:16:22.025 2785.280 - 2800.175: 4.2395% ( 100) 00:16:22.025 2800.175 - 2815.069: 4.7624% ( 111) 00:16:22.025 2815.069 - 2829.964: 5.3465% ( 124) 00:16:22.025 2829.964 - 2844.858: 5.9494% ( 128) 00:16:22.025 2844.858 - 2859.753: 6.4958% ( 116) 00:16:22.025 2859.753 - 2874.647: 7.1035% ( 129) 00:16:22.025 2874.647 - 2889.542: 7.8430% ( 157) 00:16:22.025 2889.542 - 2904.436: 8.5072% ( 141) 00:16:22.025 2904.436 - 2919.331: 9.1761% ( 142) 00:16:22.025 2919.331 - 2934.225: 9.8544% ( 144) 00:16:22.025 2934.225 - 2949.120: 10.5704% ( 152) 00:16:22.025 2949.120 - 2964.015: 11.3430% ( 164) 00:16:22.025 2964.015 - 2978.909: 12.0307% ( 146) 00:16:22.025 2978.909 - 2993.804: 12.6902% ( 140) 00:16:22.025 2993.804 - 3008.698: 13.3544% ( 141) 00:16:22.025 3008.698 - 3023.593: 14.1175% ( 162) 00:16:22.025 3023.593 - 3038.487: 14.8476% ( 155) 00:16:22.025 3038.487 - 3053.382: 15.5589% ( 151) 00:16:22.025 3053.382 - 3068.276: 16.1430% ( 124) 00:16:22.025 3068.276 - 3083.171: 16.8025% ( 140) 00:16:22.025 3083.171 - 3098.065: 17.4290% ( 133) 00:16:22.025 3098.065 - 3112.960: 18.1073% ( 144) 00:16:22.025 3112.960 - 3127.855: 18.6302% ( 111) 00:16:22.025 3127.855 - 3142.749: 19.2991% ( 142) 00:16:22.025 3142.749 - 3157.644: 19.8314% ( 113) 00:16:22.025 3157.644 - 3172.538: 20.4108% ( 123) 00:16:22.025 3172.538 - 3187.433: 21.0749% ( 141) 00:16:22.025 3187.433 - 3202.327: 21.5931% ( 110) 00:16:22.025 3202.327 - 3217.222: 22.1018% ( 108) 00:16:22.025 3217.222 - 3232.116: 22.6624% ( 119) 00:16:22.025 3232.116 - 3247.011: 23.1429% ( 102) 00:16:22.025 3247.011 - 3261.905: 23.6563% ( 109) 00:16:22.025 3261.905 - 3276.800: 24.2216% ( 120) 00:16:22.025 3276.800 - 3291.695: 24.7445% ( 111) 00:16:22.025 3291.695 - 3306.589: 25.2155% ( 100) 00:16:22.025 3306.589 - 3321.484: 25.7384% ( 111) 00:16:22.025 3321.484 - 3336.378: 26.2141% ( 101) 00:16:22.025 3336.378 - 3351.273: 26.6004% ( 82) 00:16:22.025 3351.273 - 3366.167: 27.0856% ( 103) 00:16:22.025 3366.167 - 3381.062: 27.5142% ( 91) 00:16:22.025 3381.062 - 3395.956: 27.8911% ( 80) 00:16:22.025 3395.956 - 3410.851: 28.3103% ( 89) 00:16:22.025 3410.851 - 3425.745: 28.7249% ( 88) 00:16:22.025 3425.745 - 3440.640: 29.2195% ( 105) 00:16:22.025 3440.640 - 3455.535: 29.7659% ( 116) 00:16:22.025 3455.535 - 3470.429: 30.2464% ( 102) 00:16:22.025 3470.429 - 3485.324: 30.7315% ( 103) 00:16:22.025 3485.324 - 3500.218: 31.3486% ( 131) 00:16:22.025 3500.218 - 3515.113: 32.1306% ( 166) 00:16:22.025 3515.113 - 3530.007: 32.7665% ( 135) 00:16:22.025 3530.007 - 3544.902: 33.4589% ( 147) 00:16:22.025 3544.902 - 3559.796: 34.2032% ( 158) 00:16:22.025 3559.796 - 3574.691: 35.0323% ( 176) 00:16:22.025 3574.691 - 3589.585: 35.8660% ( 177) 00:16:22.025 3589.585 - 3604.480: 36.7657% ( 191) 00:16:22.025 3604.480 - 3619.375: 37.6325% ( 184) 00:16:22.025 3619.375 - 3634.269: 38.4851% ( 181) 00:16:22.025 3634.269 - 3649.164: 39.3660% ( 187) 00:16:22.025 3649.164 - 3664.058: 40.1196% ( 160) 00:16:22.025 3664.058 - 3678.953: 41.1041% ( 209) 00:16:22.025 3678.953 - 3693.847: 42.1687% ( 226) 00:16:22.025 3693.847 - 3708.742: 43.1768% ( 214) 00:16:22.025 3708.742 - 3723.636: 44.1001% ( 196) 00:16:22.025 3723.636 - 3738.531: 45.0516% ( 202) 00:16:22.025 3738.531 - 3753.425: 46.0078% ( 203) 
00:16:22.025 3753.425 - 3768.320: 46.9499% ( 200) 00:16:22.025 3768.320 - 3783.215: 47.9203% ( 206) 00:16:22.025 3783.215 - 3798.109: 48.8954% ( 207) 00:16:22.025 3798.109 - 3813.004: 49.9505% ( 224) 00:16:22.025 3813.004 - 3842.793: 51.8065% ( 394) 00:16:22.025 3842.793 - 3872.582: 53.7661% ( 416) 00:16:22.025 3872.582 - 3902.371: 55.6786% ( 406) 00:16:22.026 3902.371 - 3932.160: 57.6099% ( 410) 00:16:22.026 3932.160 - 3961.949: 59.4941% ( 400) 00:16:22.026 3961.949 - 3991.738: 61.4772% ( 421) 00:16:22.026 3991.738 - 4021.527: 63.2343% ( 373) 00:16:22.026 4021.527 - 4051.316: 64.9913% ( 373) 00:16:22.026 4051.316 - 4081.105: 66.7153% ( 366) 00:16:22.026 4081.105 - 4110.895: 68.3311% ( 343) 00:16:22.026 4110.895 - 4140.684: 69.9326% ( 340) 00:16:22.026 4140.684 - 4170.473: 71.4730% ( 327) 00:16:22.026 4170.473 - 4200.262: 73.0416% ( 333) 00:16:22.026 4200.262 - 4230.051: 74.3935% ( 287) 00:16:22.026 4230.051 - 4259.840: 75.7078% ( 279) 00:16:22.026 4259.840 - 4289.629: 76.9890% ( 272) 00:16:22.026 4289.629 - 4319.418: 78.3268% ( 284) 00:16:22.026 4319.418 - 4349.207: 79.5374% ( 257) 00:16:22.026 4349.207 - 4378.996: 80.6680% ( 240) 00:16:22.026 4378.996 - 4408.785: 81.7514% ( 230) 00:16:22.026 4408.785 - 4438.575: 82.8301% ( 229) 00:16:22.026 4438.575 - 4468.364: 83.8900% ( 225) 00:16:22.026 4468.364 - 4498.153: 84.6719% ( 166) 00:16:22.026 4498.153 - 4527.942: 85.5528% ( 187) 00:16:22.026 4527.942 - 4557.731: 86.3300% ( 165) 00:16:22.026 4557.731 - 4587.520: 87.2674% ( 199) 00:16:22.026 4587.520 - 4617.309: 88.1200% ( 181) 00:16:22.026 4617.309 - 4647.098: 88.9585% ( 178) 00:16:22.026 4647.098 - 4676.887: 89.6509% ( 147) 00:16:22.026 4676.887 - 4706.676: 90.3670% ( 152) 00:16:22.026 4706.676 - 4736.465: 90.9652% ( 127) 00:16:22.026 4736.465 - 4766.255: 91.4833% ( 110) 00:16:22.026 4766.255 - 4796.044: 91.9450% ( 98) 00:16:22.026 4796.044 - 4825.833: 92.4396% ( 105) 00:16:22.026 4825.833 - 4855.622: 92.9295% ( 104) 00:16:22.026 4855.622 - 4885.411: 93.2734% ( 73) 00:16:22.026 4885.411 - 4915.200: 93.7256% ( 96) 00:16:22.026 4915.200 - 4944.989: 94.1684% ( 94) 00:16:22.026 4944.989 - 4974.778: 94.5311% ( 77) 00:16:22.026 4974.778 - 5004.567: 94.9927% ( 98) 00:16:22.026 5004.567 - 5034.356: 95.3554% ( 77) 00:16:22.026 5034.356 - 5064.145: 95.6522% ( 63) 00:16:22.026 5064.145 - 5093.935: 95.9254% ( 58) 00:16:22.026 5093.935 - 5123.724: 96.1798% ( 54) 00:16:22.026 5123.724 - 5153.513: 96.3540% ( 37) 00:16:22.026 5153.513 - 5183.302: 96.5236% ( 36) 00:16:22.026 5183.302 - 5213.091: 96.6084% ( 18) 00:16:22.026 5213.091 - 5242.880: 96.7073% ( 21) 00:16:22.026 5242.880 - 5272.669: 96.8157% ( 23) 00:16:22.026 5272.669 - 5302.458: 96.9287% ( 24) 00:16:22.026 5302.458 - 5332.247: 97.0371% ( 23) 00:16:22.026 5332.247 - 5362.036: 97.2302% ( 41) 00:16:22.026 5362.036 - 5391.825: 97.3998% ( 36) 00:16:22.026 5391.825 - 5421.615: 97.5317% ( 28) 00:16:22.026 5421.615 - 5451.404: 97.7013% ( 36) 00:16:22.026 5451.404 - 5481.193: 97.8473% ( 31) 00:16:22.026 5481.193 - 5510.982: 97.9462% ( 21) 00:16:22.026 5510.982 - 5540.771: 98.0922% ( 31) 00:16:22.026 5540.771 - 5570.560: 98.1723% ( 17) 00:16:22.026 5570.560 - 5600.349: 98.2807% ( 23) 00:16:22.026 5600.349 - 5630.138: 98.3513% ( 15) 00:16:22.026 5630.138 - 5659.927: 98.4502% ( 21) 00:16:22.026 5659.927 - 5689.716: 98.5162% ( 14) 00:16:22.026 5689.716 - 5719.505: 98.6292% ( 24) 00:16:22.026 5719.505 - 5749.295: 98.6763% ( 10) 00:16:22.026 5749.295 - 5779.084: 98.7376% ( 13) 00:16:22.026 5779.084 - 5808.873: 98.7988% ( 13) 00:16:22.026 5808.873 - 5838.662: 
98.8836% ( 18) 00:16:22.026 5838.662 - 5868.451: 98.9496% ( 14) 00:16:22.026 5868.451 - 5898.240: 98.9731% ( 5) 00:16:22.026 5898.240 - 5928.029: 99.0343% ( 13) 00:16:22.026 5928.029 - 5957.818: 99.0720% ( 8) 00:16:22.026 5957.818 - 5987.607: 99.1144% ( 9) 00:16:22.026 5987.607 - 6017.396: 99.1474% ( 7) 00:16:22.026 6017.396 - 6047.185: 99.1945% ( 10) 00:16:22.026 6047.185 - 6076.975: 99.2322% ( 8) 00:16:22.026 6076.975 - 6106.764: 99.2699% ( 8) 00:16:22.026 6106.764 - 6136.553: 99.2934% ( 5) 00:16:22.026 6136.553 - 6166.342: 99.3076% ( 3) 00:16:22.026 6166.342 - 6196.131: 99.3358% ( 6) 00:16:22.026 6196.131 - 6225.920: 99.3499% ( 3) 00:16:22.026 6225.920 - 6255.709: 99.3594% ( 2) 00:16:22.026 6255.709 - 6285.498: 99.3782% ( 4) 00:16:22.026 6285.498 - 6315.287: 99.4159% ( 8) 00:16:22.026 6315.287 - 6345.076: 99.4206% ( 1) 00:16:22.026 6345.076 - 6374.865: 99.4253% ( 1) 00:16:22.026 6374.865 - 6404.655: 99.4347% ( 2) 00:16:22.026 6404.655 - 6434.444: 99.4442% ( 2) 00:16:22.026 6434.444 - 6464.233: 99.4724% ( 6) 00:16:22.026 6464.233 - 6494.022: 99.4818% ( 2) 00:16:22.026 6494.022 - 6523.811: 99.4866% ( 1) 00:16:22.026 6523.811 - 6553.600: 99.5054% ( 4) 00:16:22.026 6553.600 - 6583.389: 99.5478% ( 9) 00:16:22.026 6583.389 - 6613.178: 99.5572% ( 2) 00:16:22.026 6613.178 - 6642.967: 99.5713% ( 3) 00:16:22.026 6642.967 - 6672.756: 99.5761% ( 1) 00:16:22.026 7208.960 - 7238.749: 99.5808% ( 1) 00:16:22.026 7268.538 - 7298.327: 99.5855% ( 1) 00:16:22.026 7328.116 - 7357.905: 99.5902% ( 1) 00:16:22.026 7357.905 - 7387.695: 99.5949% ( 1) 00:16:22.026 7387.695 - 7417.484: 99.5996% ( 1) 00:16:22.026 35270.284 - 35508.596: 99.6090% ( 2) 00:16:22.026 35508.596 - 35746.909: 99.6232% ( 3) 00:16:22.026 35746.909 - 35985.222: 99.6420% ( 4) 00:16:22.026 35985.222 - 36223.535: 99.6467% ( 1) 00:16:22.026 36223.535 - 36461.847: 99.6608% ( 3) 00:16:22.026 36461.847 - 36700.160: 99.6750% ( 3) 00:16:22.026 36700.160 - 36938.473: 99.6844% ( 2) 00:16:22.026 36938.473 - 37176.785: 99.6891% ( 1) 00:16:22.026 37176.785 - 37415.098: 99.6938% ( 1) 00:16:22.026 37415.098 - 37653.411: 99.7174% ( 5) 00:16:22.026 37653.411 - 37891.724: 99.7315% ( 3) 00:16:22.026 37891.724 - 38130.036: 99.7409% ( 2) 00:16:22.026 38130.036 - 38368.349: 99.7551% ( 3) 00:16:22.026 38368.349 - 38606.662: 99.7645% ( 2) 00:16:22.026 38606.662 - 38844.975: 99.7692% ( 1) 00:16:22.026 39083.287 - 39321.600: 99.7739% ( 1) 00:16:22.026 39559.913 - 39798.225: 99.7833% ( 2) 00:16:22.026 39798.225 - 40036.538: 99.7880% ( 1) 00:16:22.026 40989.789 - 41228.102: 99.7927% ( 1) 00:16:22.026 41228.102 - 41466.415: 99.7974% ( 1) 00:16:22.026 43611.229 - 43849.542: 99.8022% ( 1) 00:16:22.026 43849.542 - 44087.855: 99.8163% ( 3) 00:16:22.026 44087.855 - 44326.167: 99.8446% ( 6) 00:16:22.026 44326.167 - 44564.480: 99.8587% ( 3) 00:16:22.026 44564.480 - 44802.793: 99.8822% ( 5) 00:16:22.026 44802.793 - 45041.105: 99.8869% ( 1) 00:16:22.026 45041.105 - 45279.418: 99.9011% ( 3) 00:16:22.026 45279.418 - 45517.731: 99.9152% ( 3) 00:16:22.026 45517.731 - 45756.044: 99.9246% ( 2) 00:16:22.026 45756.044 - 45994.356: 99.9341% ( 2) 00:16:22.026 45994.356 - 46232.669: 99.9435% ( 2) 00:16:22.026 46232.669 - 46470.982: 99.9670% ( 5) 00:16:22.026 46470.982 - 46709.295: 99.9859% ( 4) 00:16:22.026 46709.295 - 46947.607: 99.9953% ( 2) 00:16:22.026 12:36:04 -- blob_io_wait/blob_io_wait.sh@36 -- # rpc_cmd bdev_enable_histogram aio0 -d 00:16:22.026 12:36:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.026 12:36:04 -- common/autotest_common.sh@10 -- # set +x 
00:16:22.026 12:36:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.026 12:36:04 -- blob_io_wait/blob_io_wait.sh@37 -- # wait 64526 00:16:24.558 00:16:24.558 Latency(us) 00:16:24.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.558 Job: 99b5b156-d2e0-4393-998c-0132b81ef9fa (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:24.558 99b5b156-d2e0-4393-998c-0132b81ef9fa: 5.01 10498.43 41.01 0.00 0.00 8053.20 1966.08 49569.05 00:16:24.558 =================================================================================================================== 00:16:24.558 Total : 10498.43 41.01 0.00 0.00 8053.20 1966.08 49569.05 00:16:25.125 12:36:07 -- blob_io_wait/blob_io_wait.sh@40 -- # bdev_perf_pid=64612 00:16:25.125 12:36:07 -- blob_io_wait/blob_io_wait.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/bdevperf.json -q 128 -o 4096 -w read -t 5 -r /var/tmp/spdk.sock 00:16:25.125 12:36:07 -- blob_io_wait/blob_io_wait.sh@41 -- # waitforlisten 64612 00:16:25.125 12:36:07 -- common/autotest_common.sh@819 -- # '[' -z 64612 ']' 00:16:25.125 12:36:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.125 12:36:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:25.125 12:36:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.125 12:36:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:25.125 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:16:25.125 [2024-10-01 12:36:07.499338] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:25.125 [2024-10-01 12:36:07.499497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64612 ] 00:16:25.384 [2024-10-01 12:36:07.666392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.384 [2024-10-01 12:36:07.825710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.642 Running I/O for 5 seconds... 
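Each bdevperf pass in this test follows the same pattern: launch bdevperf against the saved JSON config, then bracket the run with histogram RPCs so histogram.py can print the latency table. A condensed sketch using the flags and RPC names exactly as they appear in the log (write pass shown; the read and unmap passes only change -w and -t, and the waitforlisten handshake from the real script is omitted):

  # Sketch only -- one perf pass with its latency-histogram bracket.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  TESTDIR=$SPDK_DIR/test/blobstore/blob_io_wait
  "$SPDK_DIR/build/examples/bdevperf" --json "$TESTDIR/bdevperf.json" \
      -q 128 -o 4096 -w write -t 5 -r /var/tmp/spdk.sock &             # queue depth 128, 4 KiB I/O, 5 s run
  perf_pid=$!
  "$SPDK_DIR/scripts/rpc.py" bdev_enable_histogram aio0 -e             # start collecting latencies on aio0
  sleep 2                                                              # let samples accumulate mid-run
  "$SPDK_DIR/scripts/rpc.py" bdev_get_histogram aio0 \
      | "$SPDK_DIR/scripts/histogram.py"                               # prints the bucket/count table seen above
  "$SPDK_DIR/scripts/rpc.py" bdev_enable_histogram aio0 -d             # stop collecting
  wait "$perf_pid"                                                     # bdevperf prints its own IOPS/latency summary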
00:16:26.209 12:36:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:26.209 12:36:08 -- common/autotest_common.sh@852 -- # return 0 00:16:26.209 12:36:08 -- blob_io_wait/blob_io_wait.sh@42 -- # rpc_cmd bdev_enable_histogram aio0 -e 00:16:26.209 12:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.209 12:36:08 -- common/autotest_common.sh@10 -- # set +x 00:16:26.209 12:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.209 12:36:08 -- blob_io_wait/blob_io_wait.sh@43 -- # sleep 2 00:16:28.148 12:36:10 -- blob_io_wait/blob_io_wait.sh@44 -- # rpc_cmd bdev_get_histogram aio0 00:16:28.148 12:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.148 12:36:10 -- blob_io_wait/blob_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/histogram.py 00:16:28.148 12:36:10 -- common/autotest_common.sh@10 -- # set +x 00:16:28.407 12:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.407 Latency histogram 00:16:28.407 ============================================================================== 00:16:28.407 Range in us Cumulative IO count 00:16:28.407 1385.193 - 1392.640: 0.0027% ( 1) 00:16:28.407 1616.058 - 1623.505: 0.0080% ( 2) 00:16:28.407 1623.505 - 1630.953: 0.0134% ( 2) 00:16:28.407 1630.953 - 1638.400: 0.0188% ( 2) 00:16:28.407 1638.400 - 1645.847: 0.0322% ( 5) 00:16:28.407 1645.847 - 1653.295: 0.0536% ( 8) 00:16:28.407 1653.295 - 1660.742: 0.0644% ( 4) 00:16:28.407 1660.742 - 1668.189: 0.0805% ( 6) 00:16:28.407 1675.636 - 1683.084: 0.0912% ( 4) 00:16:28.407 1683.084 - 1690.531: 0.1126% ( 8) 00:16:28.407 1690.531 - 1697.978: 0.1368% ( 9) 00:16:28.407 1697.978 - 1705.425: 0.1636% ( 10) 00:16:28.407 1705.425 - 1712.873: 0.1931% ( 11) 00:16:28.407 1712.873 - 1720.320: 0.2199% ( 10) 00:16:28.407 1720.320 - 1727.767: 0.2655% ( 17) 00:16:28.407 1727.767 - 1735.215: 0.3111% ( 17) 00:16:28.407 1735.215 - 1742.662: 0.3674% ( 21) 00:16:28.407 1742.662 - 1750.109: 0.4210% ( 20) 00:16:28.407 1750.109 - 1757.556: 0.4666% ( 17) 00:16:28.407 1757.556 - 1765.004: 0.5364% ( 26) 00:16:28.407 1765.004 - 1772.451: 0.5846% ( 18) 00:16:28.407 1772.451 - 1779.898: 0.6409% ( 21) 00:16:28.407 1779.898 - 1787.345: 0.7026% ( 23) 00:16:28.407 1787.345 - 1794.793: 0.7992% ( 36) 00:16:28.407 1794.793 - 1802.240: 0.8474% ( 18) 00:16:28.407 1802.240 - 1809.687: 0.9359% ( 33) 00:16:28.407 1809.687 - 1817.135: 1.0030% ( 25) 00:16:28.407 1817.135 - 1824.582: 1.1049% ( 38) 00:16:28.408 1824.582 - 1832.029: 1.2417% ( 51) 00:16:28.408 1832.029 - 1839.476: 1.3811% ( 52) 00:16:28.408 1839.476 - 1846.924: 1.5393% ( 59) 00:16:28.408 1846.924 - 1854.371: 1.6868% ( 55) 00:16:28.408 1854.371 - 1861.818: 1.8531% ( 62) 00:16:28.408 1861.818 - 1869.265: 2.0971% ( 91) 00:16:28.408 1869.265 - 1876.713: 2.2849% ( 70) 00:16:28.408 1876.713 - 1884.160: 2.5209% ( 88) 00:16:28.408 1884.160 - 1891.607: 2.8105% ( 108) 00:16:28.408 1891.607 - 1899.055: 3.1591% ( 130) 00:16:28.408 1899.055 - 1906.502: 3.5614% ( 150) 00:16:28.408 1906.502 - 1921.396: 4.3739% ( 303) 00:16:28.408 1921.396 - 1936.291: 5.4359% ( 396) 00:16:28.408 1936.291 - 1951.185: 6.8707% ( 535) 00:16:28.408 1951.185 - 1966.080: 8.5119% ( 612) 00:16:28.408 1966.080 - 1980.975: 10.2792% ( 659) 00:16:28.408 1980.975 - 1995.869: 12.1993% ( 716) 00:16:28.408 1995.869 - 2010.764: 14.2562% ( 767) 00:16:28.408 2010.764 - 2025.658: 16.2434% ( 741) 00:16:28.408 2025.658 - 2040.553: 18.5524% ( 861) 00:16:28.408 2040.553 - 2055.447: 20.8507% ( 857) 00:16:28.408 2055.447 - 2070.342: 23.3769% ( 942) 00:16:28.408 2070.342 - 
2085.236: 26.0425% ( 994) 00:16:28.408 2085.236 - 2100.131: 29.1936% ( 1175) 00:16:28.408 2100.131 - 2115.025: 32.4680% ( 1221) 00:16:28.408 2115.025 - 2129.920: 35.7317% ( 1217) 00:16:28.408 2129.920 - 2144.815: 39.0464% ( 1236) 00:16:28.408 2144.815 - 2159.709: 42.4307% ( 1262) 00:16:28.408 2159.709 - 2174.604: 45.8983% ( 1293) 00:16:28.408 2174.604 - 2189.498: 49.0547% ( 1177) 00:16:28.408 2189.498 - 2204.393: 52.0261% ( 1108) 00:16:28.408 2204.393 - 2219.287: 54.9680% ( 1097) 00:16:28.408 2219.287 - 2234.182: 57.7248% ( 1028) 00:16:28.408 2234.182 - 2249.076: 60.2483% ( 941) 00:16:28.408 2249.076 - 2263.971: 62.8335% ( 964) 00:16:28.408 2263.971 - 2278.865: 65.2069% ( 885) 00:16:28.408 2278.865 - 2293.760: 67.4220% ( 826) 00:16:28.408 2293.760 - 2308.655: 69.5621% ( 798) 00:16:28.408 2308.655 - 2323.549: 71.6109% ( 764) 00:16:28.408 2323.549 - 2338.444: 73.4613% ( 690) 00:16:28.408 2338.444 - 2353.338: 75.2152% ( 654) 00:16:28.408 2353.338 - 2368.233: 76.8404% ( 606) 00:16:28.408 2368.233 - 2383.127: 78.4735% ( 609) 00:16:28.408 2383.127 - 2398.022: 79.9968% ( 568) 00:16:28.408 2398.022 - 2412.916: 81.3886% ( 519) 00:16:28.408 2412.916 - 2427.811: 82.7885% ( 522) 00:16:28.408 2427.811 - 2442.705: 83.9416% ( 430) 00:16:28.408 2442.705 - 2457.600: 85.0358% ( 408) 00:16:28.408 2457.600 - 2472.495: 86.0870% ( 392) 00:16:28.408 2472.495 - 2487.389: 87.0391% ( 355) 00:16:28.408 2487.389 - 2502.284: 87.8811% ( 314) 00:16:28.408 2502.284 - 2517.178: 88.6749% ( 296) 00:16:28.408 2517.178 - 2532.073: 89.5036% ( 309) 00:16:28.408 2532.073 - 2546.967: 90.2089% ( 263) 00:16:28.408 2546.967 - 2561.862: 90.9491% ( 276) 00:16:28.408 2561.862 - 2576.756: 91.6544% ( 263) 00:16:28.408 2576.756 - 2591.651: 92.1988% ( 203) 00:16:28.408 2591.651 - 2606.545: 92.7673% ( 212) 00:16:28.408 2606.545 - 2621.440: 93.2849% ( 193) 00:16:28.408 2621.440 - 2636.335: 93.7649% ( 179) 00:16:28.408 2636.335 - 2651.229: 94.1833% ( 156) 00:16:28.408 2651.229 - 2666.124: 94.5641% ( 142) 00:16:28.408 2666.124 - 2681.018: 94.8859% ( 120) 00:16:28.408 2681.018 - 2695.913: 95.1272% ( 90) 00:16:28.408 2695.913 - 2710.807: 95.3177% ( 71) 00:16:28.408 2710.807 - 2725.702: 95.5081% ( 71) 00:16:28.408 2725.702 - 2740.596: 95.6663% ( 59) 00:16:28.408 2740.596 - 2755.491: 95.7977% ( 49) 00:16:28.408 2755.491 - 2770.385: 95.8996% ( 38) 00:16:28.408 2770.385 - 2785.280: 95.9935% ( 35) 00:16:28.408 2785.280 - 2800.175: 96.1034% ( 41) 00:16:28.408 2800.175 - 2815.069: 96.1946% ( 34) 00:16:28.408 2815.069 - 2829.964: 96.2804% ( 32) 00:16:28.408 2829.964 - 2844.858: 96.3689% ( 33) 00:16:28.408 2844.858 - 2859.753: 96.4333% ( 24) 00:16:28.408 2859.753 - 2874.647: 96.5057% ( 27) 00:16:28.408 2874.647 - 2889.542: 96.5754% ( 26) 00:16:28.408 2889.542 - 2904.436: 96.6264% ( 19) 00:16:28.408 2904.436 - 2919.331: 96.7095% ( 31) 00:16:28.408 2919.331 - 2934.225: 96.7578% ( 18) 00:16:28.408 2934.225 - 2949.120: 96.8060% ( 18) 00:16:28.408 2949.120 - 2964.015: 96.8543% ( 18) 00:16:28.408 2964.015 - 2978.909: 96.8945% ( 15) 00:16:28.408 2978.909 - 2993.804: 96.9374% ( 16) 00:16:28.408 2993.804 - 3008.698: 97.0018% ( 24) 00:16:28.408 3008.698 - 3023.593: 97.0769% ( 28) 00:16:28.408 3023.593 - 3038.487: 97.1171% ( 15) 00:16:28.408 3038.487 - 3053.382: 97.1600% ( 16) 00:16:28.408 3053.382 - 3068.276: 97.2512% ( 34) 00:16:28.408 3068.276 - 3083.171: 97.3182% ( 25) 00:16:28.408 3083.171 - 3098.065: 97.3772% ( 22) 00:16:28.408 3098.065 - 3112.960: 97.4255% ( 18) 00:16:28.408 3112.960 - 3127.855: 97.4845% ( 22) 00:16:28.408 3127.855 - 3142.749: 97.5542% ( 26) 
00:16:28.408 3142.749 - 3157.644: 97.6213% ( 25) 00:16:28.408 3157.644 - 3172.538: 97.6776% ( 21) 00:16:28.408 3172.538 - 3187.433: 97.7741% ( 36) 00:16:28.408 3187.433 - 3202.327: 97.8412% ( 25) 00:16:28.408 3202.327 - 3217.222: 97.9216% ( 30) 00:16:28.408 3217.222 - 3232.116: 98.0101% ( 33) 00:16:28.408 3232.116 - 3247.011: 98.1013% ( 34) 00:16:28.408 3247.011 - 3261.905: 98.1845% ( 31) 00:16:28.408 3261.905 - 3276.800: 98.2971% ( 42) 00:16:28.408 3276.800 - 3291.695: 98.3936% ( 36) 00:16:28.408 3291.695 - 3306.589: 98.4982% ( 39) 00:16:28.408 3306.589 - 3321.484: 98.5599% ( 23) 00:16:28.408 3321.484 - 3336.378: 98.5948% ( 13) 00:16:28.408 3336.378 - 3351.273: 98.6377% ( 16) 00:16:28.408 3351.273 - 3366.167: 98.6940% ( 21) 00:16:28.408 3366.167 - 3381.062: 98.7342% ( 15) 00:16:28.408 3381.062 - 3395.956: 98.7771% ( 16) 00:16:28.408 3395.956 - 3410.851: 98.8227% ( 17) 00:16:28.408 3410.851 - 3425.745: 98.8442% ( 8) 00:16:28.408 3425.745 - 3440.640: 98.8763% ( 12) 00:16:28.408 3440.640 - 3455.535: 98.9112% ( 13) 00:16:28.408 3455.535 - 3470.429: 98.9568% ( 17) 00:16:28.408 3470.429 - 3485.324: 98.9997% ( 16) 00:16:28.408 3485.324 - 3500.218: 99.0480% ( 18) 00:16:28.408 3500.218 - 3515.113: 99.1016% ( 20) 00:16:28.408 3515.113 - 3530.007: 99.1257% ( 9) 00:16:28.408 3530.007 - 3544.902: 99.1445% ( 7) 00:16:28.408 3544.902 - 3559.796: 99.1874% ( 16) 00:16:28.408 3559.796 - 3574.691: 99.2196% ( 12) 00:16:28.408 3574.691 - 3589.585: 99.2437% ( 9) 00:16:28.408 3589.585 - 3604.480: 99.2706% ( 10) 00:16:28.408 3604.480 - 3619.375: 99.2840% ( 5) 00:16:28.408 3619.375 - 3634.269: 99.3081% ( 9) 00:16:28.408 3634.269 - 3649.164: 99.3349% ( 10) 00:16:28.408 3649.164 - 3664.058: 99.3457% ( 4) 00:16:28.408 3664.058 - 3678.953: 99.3671% ( 8) 00:16:28.408 3678.953 - 3693.847: 99.3778% ( 4) 00:16:28.408 3693.847 - 3708.742: 99.3966% ( 7) 00:16:28.408 3708.742 - 3723.636: 99.4100% ( 5) 00:16:28.408 3738.531 - 3753.425: 99.4154% ( 2) 00:16:28.408 3753.425 - 3768.320: 99.4234% ( 3) 00:16:28.408 3768.320 - 3783.215: 99.4341% ( 4) 00:16:28.408 3783.215 - 3798.109: 99.4395% ( 2) 00:16:28.408 3798.109 - 3813.004: 99.4476% ( 3) 00:16:28.408 3813.004 - 3842.793: 99.4529% ( 2) 00:16:28.408 3842.793 - 3872.582: 99.4931% ( 15) 00:16:28.408 3872.582 - 3902.371: 99.5226% ( 11) 00:16:28.408 3902.371 - 3932.160: 99.5763% ( 20) 00:16:28.408 3932.160 - 3961.949: 99.6299% ( 20) 00:16:28.408 3961.949 - 3991.738: 99.6943% ( 24) 00:16:28.408 3991.738 - 4021.527: 99.7479% ( 20) 00:16:28.408 4021.527 - 4051.316: 99.7908% ( 16) 00:16:28.408 4051.316 - 4081.105: 99.8150% ( 9) 00:16:28.408 4081.105 - 4110.895: 99.8471% ( 12) 00:16:28.408 4110.895 - 4140.684: 99.8900% ( 16) 00:16:28.408 4140.684 - 4170.473: 99.9356% ( 17) 00:16:28.408 4170.473 - 4200.262: 99.9651% ( 11) 00:16:28.408 4200.262 - 4230.051: 99.9839% ( 7) 00:16:28.408 4230.051 - 4259.840: 99.9893% ( 2) 00:16:28.408 4259.840 - 4289.629: 99.9920% ( 1) 00:16:28.408 4319.418 - 4349.207: 99.9973% ( 2) 00:16:28.408 12:36:10 -- blob_io_wait/blob_io_wait.sh@45 -- # rpc_cmd bdev_enable_histogram aio0 -d 00:16:28.408 12:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.408 12:36:10 -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 12:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.408 12:36:10 -- blob_io_wait/blob_io_wait.sh@46 -- # wait 64612 00:16:30.941 00:16:30.941 Latency(us) 00:16:30.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.941 Job: 99b5b156-d2e0-4393-998c-0132b81ef9fa (Core Mask 0x1, 
workload: read, depth: 128, IO size: 4096) 00:16:30.941 99b5b156-d2e0-4393-998c-0132b81ef9fa: 5.01 18461.73 72.12 0.00 0.00 4580.14 1616.06 46947.61 00:16:30.941 =================================================================================================================== 00:16:30.941 Total : 18461.73 72.12 0.00 0.00 4580.14 1616.06 46947.61 00:16:31.509 12:36:13 -- blob_io_wait/blob_io_wait.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/bdevperf.json -q 128 -o 4096 -w unmap -t 1 00:16:31.509 [2024-10-01 12:36:14.009861] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:31.509 [2024-10-01 12:36:14.010280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64687 ] 00:16:31.767 [2024-10-01 12:36:14.178383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.026 [2024-10-01 12:36:14.345183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.284 Running I/O for 1 seconds... 00:16:33.218 00:16:33.218 Latency(us) 00:16:33.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.218 Job: 99b5b156-d2e0-4393-998c-0132b81ef9fa (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:16:33.218 99b5b156-d2e0-4393-998c-0132b81ef9fa: 1.00 118164.06 461.58 0.00 0.00 1071.48 396.57 1534.14 00:16:33.218 =================================================================================================================== 00:16:33.218 Total : 118164.06 461.58 0.00 0.00 1071.48 396.57 1534.14 00:16:34.153 12:36:16 -- blob_io_wait/blob_io_wait.sh@50 -- # sync 00:16:34.153 12:36:16 -- blob_io_wait/blob_io_wait.sh@51 -- # rm -f /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/bdevperf.json 00:16:34.153 12:36:16 -- blob_io_wait/blob_io_wait.sh@52 -- # rm -f /home/vagrant/spdk_repo/spdk/test/blobstore/blob_io_wait/aio.bdev 00:16:34.153 12:36:16 -- blob_io_wait/blob_io_wait.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:16:34.153 ************************************ 00:16:34.153 END TEST blob_io_wait 00:16:34.153 ************************************ 00:16:34.153 00:16:34.153 real 0m18.120s 00:16:34.153 user 0m11.914s 00:16:34.153 sys 0m5.269s 00:16:34.154 12:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.154 12:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.412 12:36:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:16:34.412 12:36:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:34.412 12:36:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:34.412 12:36:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:16:34.412 12:36:16 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:16:34.412 12:36:16 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:16:34.412 12:36:16 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:16:34.412 12:36:16 -- 
common/autotest_common.sh@712 -- # xtrace_disable 00:16:34.412 12:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.412 12:36:16 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:16:34.412 12:36:16 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:16:34.412 12:36:16 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:16:34.412 12:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:35.788 INFO: APP EXITING 00:16:35.788 INFO: killing all VMs 00:16:35.788 INFO: killing vhost app 00:16:35.788 INFO: EXIT DONE 00:16:36.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.047 Waiting for block devices as requested 00:16:36.047 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:16:36.305 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:16:36.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.873 Cleaning 00:16:36.873 Removing: /dev/shm/spdk_tgt_trace.pid53437 00:16:36.873 Removing: /var/run/dpdk/spdk_pid53222 00:16:36.873 Removing: /var/run/dpdk/spdk_pid53437 00:16:36.873 Removing: /var/run/dpdk/spdk_pid53731 00:16:36.873 Removing: /var/run/dpdk/spdk_pid53835 00:16:36.873 Removing: /var/run/dpdk/spdk_pid53940 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54055 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54156 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54201 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54238 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54305 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54411 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54871 00:16:36.873 Removing: /var/run/dpdk/spdk_pid54948 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55030 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55053 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55198 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55227 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55371 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55395 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55470 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55490 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55556 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55587 00:16:36.873 Removing: /var/run/dpdk/spdk_pid55772 00:16:37.132 Removing: /var/run/dpdk/spdk_pid55815 00:16:37.132 Removing: /var/run/dpdk/spdk_pid55889 00:16:37.132 Removing: /var/run/dpdk/spdk_pid55978 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56011 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56089 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56115 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56167 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56193 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56240 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56271 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56318 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56344 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56390 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56422 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56468 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56500 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56541 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56578 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56619 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56645 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56697 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56723 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56770 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56796 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56842 00:16:37.132 
Removing: /var/run/dpdk/spdk_pid56874 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56915 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56952 00:16:37.132 Removing: /var/run/dpdk/spdk_pid56993 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57019 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57071 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57097 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57144 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57170 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57216 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57248 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57289 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57329 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57373 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57402 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57457 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57483 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57530 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57561 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57609 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57690 00:16:37.132 Removing: /var/run/dpdk/spdk_pid57805 00:16:37.132 Removing: /var/run/dpdk/spdk_pid58025 00:16:37.132 Removing: /var/run/dpdk/spdk_pid59315 00:16:37.132 Removing: /var/run/dpdk/spdk_pid59687 00:16:37.132 Removing: /var/run/dpdk/spdk_pid60050 00:16:37.132 Removing: /var/run/dpdk/spdk_pid60289 00:16:37.132 Removing: /var/run/dpdk/spdk_pid60576 00:16:37.132 Removing: /var/run/dpdk/spdk_pid62305 00:16:37.132 Removing: /var/run/dpdk/spdk_pid62960 00:16:37.132 Removing: /var/run/dpdk/spdk_pid63896 00:16:37.132 Removing: /var/run/dpdk/spdk_pid64494 00:16:37.132 Removing: /var/run/dpdk/spdk_pid64526 00:16:37.132 Removing: /var/run/dpdk/spdk_pid64612 00:16:37.132 Removing: /var/run/dpdk/spdk_pid64687 00:16:37.132 Clean 00:16:37.391 killing process with pid 47544 00:16:37.391 killing process with pid 47547 00:16:37.391 12:36:19 -- common/autotest_common.sh@1436 -- # return 0 00:16:37.391 12:36:19 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:16:37.391 12:36:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:37.391 12:36:19 -- common/autotest_common.sh@10 -- # set +x 00:16:37.391 12:36:19 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:16:37.391 12:36:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:37.391 12:36:19 -- common/autotest_common.sh@10 -- # set +x 00:16:37.391 12:36:19 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:37.391 12:36:19 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:37.391 12:36:19 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:37.391 12:36:19 -- spdk/autotest.sh@394 -- # hash lcov 00:16:37.391 12:36:19 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:16:37.391 12:36:19 -- spdk/autotest.sh@396 -- # hostname 00:16:37.391 12:36:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:37.675 geninfo: WARNING: invalid characters removed from testname! 
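The coverage post-processing that starts here is the standard lcov flow: capture what the run exercised (above), merge it with the baseline taken before the tests, then strip third-party and system paths (the filtering commands follow below). A minimal sketch with the same flags, abbreviating the genhtml --rc options and using the output directory from the log:

  # Sketch only -- lcov capture/merge/filter as recorded in this log.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  OUT=$SPDK_DIR/../output
  LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  # capture coverage produced by this run, tagged with the host name
  $LCOV -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
  # merge with the pre-test baseline, then drop DPDK, system and example/app paths
  $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" '*/dpdk/*'          -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" '/usr/*'            -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*'  -o "$OUT/cov_total.info"
  # (the run below also filters */app/spdk_lspci/* and */app/spdk_top/* the same way)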
00:16:59.610 12:36:41 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:02.895 12:36:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:05.429 12:36:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:07.334 12:36:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:09.905 12:36:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:11.809 12:36:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:14.342 12:36:56 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:14.342 12:36:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.342 12:36:56 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:14.342 12:36:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.342 12:36:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.342 12:36:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.342 12:36:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.342 12:36:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.342 12:36:56 -- paths/export.sh@5 -- $ export PATH 00:17:14.342 12:36:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.342 12:36:56 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:14.342 12:36:56 -- common/autobuild_common.sh@440 -- $ date +%s 00:17:14.342 12:36:56 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727786216.XXXXXX 00:17:14.342 12:36:56 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727786216.WWdnan 00:17:14.342 12:36:56 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:17:14.342 12:36:56 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:17:14.342 12:36:56 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:17:14.342 12:36:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:14.342 12:36:56 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:14.342 12:36:56 -- common/autobuild_common.sh@456 -- $ get_config_params 00:17:14.342 12:36:56 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:17:14.342 12:36:56 -- common/autotest_common.sh@10 -- $ set +x 00:17:14.342 12:36:56 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:17:14.342 12:36:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:17:14.342 12:36:56 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:14.342 12:36:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:17:14.342 12:36:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:17:14.342 12:36:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:17:14.342 12:36:56 -- spdk/autopackage.sh@19 -- $ timing_finish 00:17:14.342 12:36:56 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:14.342 12:36:56 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:17:14.342 12:36:56 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:14.342 12:36:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:17:14.342 + [[ -n 5229 ]] 00:17:14.342 + sudo kill 5229 00:17:14.609 [Pipeline] } 00:17:14.625 [Pipeline] // timeout 00:17:14.630 [Pipeline] } 00:17:14.644 [Pipeline] // stage 00:17:14.650 [Pipeline] } 00:17:14.662 [Pipeline] // catchError 00:17:14.668 [Pipeline] stage 00:17:14.670 [Pipeline] { (Stop VM) 00:17:14.679 [Pipeline] sh 00:17:14.984 + vagrant halt 00:17:18.272 ==> default: Halting domain... 00:17:23.550 [Pipeline] sh 00:17:23.825 + vagrant destroy -f 00:17:27.112 ==> default: Removing domain... 00:17:27.125 [Pipeline] sh 00:17:27.410 + mv output /var/jenkins/workspace/lvol-vg-autotest/output 00:17:27.419 [Pipeline] } 00:17:27.434 [Pipeline] // stage 00:17:27.439 [Pipeline] } 00:17:27.453 [Pipeline] // dir 00:17:27.458 [Pipeline] } 00:17:27.472 [Pipeline] // wrap 00:17:27.478 [Pipeline] } 00:17:27.491 [Pipeline] // catchError 00:17:27.501 [Pipeline] stage 00:17:27.503 [Pipeline] { (Epilogue) 00:17:27.517 [Pipeline] sh 00:17:27.800 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:33.103 [Pipeline] catchError 00:17:33.105 [Pipeline] { 00:17:33.118 [Pipeline] sh 00:17:33.399 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:33.658 Artifacts sizes are good 00:17:33.668 [Pipeline] } 00:17:33.682 [Pipeline] // catchError 00:17:33.694 [Pipeline] archiveArtifacts 00:17:33.702 Archiving artifacts 00:17:33.829 [Pipeline] cleanWs 00:17:33.841 [WS-CLEANUP] Deleting project workspace... 00:17:33.841 [WS-CLEANUP] Deferred wipeout is used... 00:17:33.847 [WS-CLEANUP] done 00:17:33.848 [Pipeline] } 00:17:33.864 [Pipeline] // stage 00:17:33.869 [Pipeline] } 00:17:33.883 [Pipeline] // node 00:17:33.888 [Pipeline] End of Pipeline 00:17:33.932 Finished: SUCCESS