00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1001 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3668 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.009 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.010 The recommended git tool is: git 00:00:00.011 using credential 00000000-0000-0000-0000-000000000002 00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.032 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.049 Using shallow fetch with depth 1 00:00:00.049 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.049 > git --version # timeout=10 00:00:00.066 > git --version # 'git version 2.39.2' 00:00:00.066 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.091 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.091 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.276 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.289 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.303 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.303 > git config core.sparsecheckout # timeout=10 00:00:02.314 > git read-tree -mu HEAD # timeout=10 00:00:02.330 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.353 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.353 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.634 [Pipeline] Start of Pipeline 00:00:02.648 [Pipeline] library 00:00:02.650 Loading library shm_lib@master 00:00:02.650 Library shm_lib@master is cached. Copying from home. 00:00:02.665 [Pipeline] node 00:00:02.676 Running on VM-host-SM9 in /var/jenkins/workspace/ubuntu24-vg-autotest 00:00:02.678 [Pipeline] { 00:00:02.688 [Pipeline] catchError 00:00:02.689 [Pipeline] { 00:00:02.701 [Pipeline] wrap 00:00:02.711 [Pipeline] { 00:00:02.720 [Pipeline] stage 00:00:02.722 [Pipeline] { (Prologue) 00:00:02.742 [Pipeline] echo 00:00:02.743 Node: VM-host-SM9 00:00:02.750 [Pipeline] cleanWs 00:00:02.763 [WS-CLEANUP] Deleting project workspace... 00:00:02.763 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.771 [WS-CLEANUP] done 00:00:02.965 [Pipeline] setCustomBuildProperty 00:00:03.044 [Pipeline] httpRequest 00:00:03.369 [Pipeline] echo 00:00:03.370 Sorcerer 10.211.164.20 is alive 00:00:03.378 [Pipeline] retry 00:00:03.379 [Pipeline] { 00:00:03.390 [Pipeline] httpRequest 00:00:03.393 HttpMethod: GET 00:00:03.394 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.394 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.395 Response Code: HTTP/1.1 200 OK 00:00:03.395 Success: Status code 200 is in the accepted range: 200,404 00:00:03.396 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.542 [Pipeline] } 00:00:03.556 [Pipeline] // retry 00:00:03.562 [Pipeline] sh 00:00:03.838 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.851 [Pipeline] httpRequest 00:00:04.155 [Pipeline] echo 00:00:04.157 Sorcerer 10.211.164.20 is alive 00:00:04.164 [Pipeline] retry 00:00:04.166 [Pipeline] { 00:00:04.179 [Pipeline] httpRequest 00:00:04.183 HttpMethod: GET 00:00:04.183 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:04.184 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:04.185 Response Code: HTTP/1.1 200 OK 00:00:04.185 Success: Status code 200 is in the accepted range: 200,404 00:00:04.185 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:20.631 [Pipeline] } 00:00:20.649 [Pipeline] // retry 00:00:20.657 [Pipeline] sh 00:00:20.938 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:23.480 [Pipeline] sh 00:00:23.757 + git -C spdk log --oneline -n5 00:00:23.757 c13c99a5e test: Various fixes for Fedora40 00:00:23.757 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:23.757 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:23.757 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:23.757 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:23.776 [Pipeline] withCredentials 00:00:23.787 > git --version # timeout=10 00:00:23.802 > git --version # 'git version 2.39.2' 00:00:23.818 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:23.820 [Pipeline] { 00:00:23.830 [Pipeline] retry 00:00:23.832 [Pipeline] { 00:00:23.847 [Pipeline] sh 00:00:24.127 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:24.398 [Pipeline] } 00:00:24.416 [Pipeline] // retry 00:00:24.421 [Pipeline] } 00:00:24.437 [Pipeline] // withCredentials 00:00:24.447 [Pipeline] httpRequest 00:00:24.940 [Pipeline] echo 00:00:24.941 Sorcerer 10.211.164.20 is alive 00:00:24.951 [Pipeline] retry 00:00:24.953 [Pipeline] { 00:00:24.967 [Pipeline] httpRequest 00:00:24.972 HttpMethod: GET 00:00:24.972 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:24.973 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:24.983 Response Code: HTTP/1.1 200 OK 00:00:24.983 Success: Status code 200 is in the accepted range: 200,404 00:00:24.984 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:02.450 [Pipeline] } 00:01:02.467 [Pipeline] // retry 
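Each source tree this job consumes (jbp, spdk, and dpdk above) arrives as a pre-packed tarball served by the Sorcerer package cache at a pinned commit, rather than as a fresh clone; the sh step that follows unpacks it with tar --no-same-owner. A minimal sketch of that fetch-and-extract pattern, using the dpdk URL and commit from the log; the curl retry loop is only an illustrative stand-in for the pipeline's retry/httpRequest steps:

    # Sketch only: Sorcerer host and package path layout are taken from the
    # log above; the retry loop approximates the Jenkins retry/httpRequest
    # steps and is not the pipeline's actual implementation.
    commit=d15625009dced269fcec27fc81dd74fd58d54cdb
    url="http://10.211.164.20/packages/dpdk_${commit}.tar.gz"
    for attempt in 1 2 3; do
      curl -fsS -O "$url" && break    # -O keeps the remote file name
      sleep 5
    done
    tar --no-same-owner -xf "dpdk_${commit}.tar.gz"

Pinning the tarball to a commit hash keeps the job reproducible even when the upstream branches move between the trigger and the fetch.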
00:01:02.475 [Pipeline] sh 00:01:02.756 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:04.147 [Pipeline] sh 00:01:04.431 + git -C dpdk log --oneline -n5 00:01:04.431 eeb0605f11 version: 23.11.0 00:01:04.431 238778122a doc: update release notes for 23.11 00:01:04.431 46aa6b3cfc doc: fix description of RSS features 00:01:04.431 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:04.432 7e421ae345 devtools: support skipping forbid rule check 00:01:04.453 [Pipeline] writeFile 00:01:04.471 [Pipeline] sh 00:01:04.756 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.769 [Pipeline] sh 00:01:05.049 + cat autorun-spdk.conf 00:01:05.049 SPDK_TEST_UNITTEST=1 00:01:05.049 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.049 SPDK_TEST_NVME=1 00:01:05.049 SPDK_TEST_BLOCKDEV=1 00:01:05.049 SPDK_RUN_ASAN=1 00:01:05.049 SPDK_RUN_UBSAN=1 00:01:05.049 SPDK_TEST_RAID5=1 00:01:05.049 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:05.049 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:05.049 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.056 RUN_NIGHTLY=1 00:01:05.058 [Pipeline] } 00:01:05.071 [Pipeline] // stage 00:01:05.088 [Pipeline] stage 00:01:05.090 [Pipeline] { (Run VM) 00:01:05.103 [Pipeline] sh 00:01:05.387 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:05.387 + echo 'Start stage prepare_nvme.sh' 00:01:05.387 Start stage prepare_nvme.sh 00:01:05.387 + [[ -n 4 ]] 00:01:05.387 + disk_prefix=ex4 00:01:05.387 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]] 00:01:05.387 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]] 00:01:05.387 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf 00:01:05.387 ++ SPDK_TEST_UNITTEST=1 00:01:05.387 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.387 ++ SPDK_TEST_NVME=1 00:01:05.387 ++ SPDK_TEST_BLOCKDEV=1 00:01:05.387 ++ SPDK_RUN_ASAN=1 00:01:05.387 ++ SPDK_RUN_UBSAN=1 00:01:05.387 ++ SPDK_TEST_RAID5=1 00:01:05.387 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:05.387 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:05.387 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.387 ++ RUN_NIGHTLY=1 00:01:05.387 + cd /var/jenkins/workspace/ubuntu24-vg-autotest 00:01:05.387 + nvme_files=() 00:01:05.387 + declare -A nvme_files 00:01:05.387 + backend_dir=/var/lib/libvirt/images/backends 00:01:05.387 + nvme_files['nvme.img']=5G 00:01:05.387 + nvme_files['nvme-cmb.img']=5G 00:01:05.387 + nvme_files['nvme-multi0.img']=4G 00:01:05.387 + nvme_files['nvme-multi1.img']=4G 00:01:05.387 + nvme_files['nvme-multi2.img']=4G 00:01:05.387 + nvme_files['nvme-openstack.img']=8G 00:01:05.387 + nvme_files['nvme-zns.img']=5G 00:01:05.387 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:05.387 + (( SPDK_TEST_FTL == 1 )) 00:01:05.387 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:05.387 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:05.387 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:05.387 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:05.387 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:05.387 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:05.387 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:05.387 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.387 + for nvme in "${!nvme_files[@]}" 00:01:05.387 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:05.646 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.646 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:05.646 + echo 'End stage prepare_nvme.sh' 00:01:05.646 End stage prepare_nvme.sh 00:01:05.658 [Pipeline] sh 00:01:05.938 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.938 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2404 00:01:05.938 00:01:05.938 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant 00:01:05.938 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk 00:01:05.938 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest 00:01:05.938 HELP=0 00:01:05.939 DRY_RUN=0 00:01:05.939 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img, 00:01:05.939 NVME_DISKS_TYPE=nvme, 00:01:05.939 NVME_AUTO_CREATE=0 00:01:05.939 NVME_DISKS_NAMESPACES=, 00:01:05.939 NVME_CMB=, 00:01:05.939 NVME_PMR=, 00:01:05.939 NVME_ZNS=, 00:01:05.939 NVME_MS=, 00:01:05.939 NVME_FDP=, 00:01:05.939 SPDK_VAGRANT_DISTRO=ubuntu2404 00:01:05.939 SPDK_VAGRANT_VMCPU=10 00:01:05.939 SPDK_VAGRANT_VMRAM=12288 00:01:05.939 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.939 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.939 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.939 SPDK_OPENSTACK_NETWORK=0 
00:01:05.939 VAGRANT_PACKAGE_BOX=0 00:01:05.939 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:05.939 FORCE_DISTRO=true 00:01:05.939 VAGRANT_BOX_VERSION= 00:01:05.939 EXTRA_VAGRANTFILES= 00:01:05.939 NIC_MODEL=e1000 00:01:05.939 00:01:05.939 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt' 00:01:05.939 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest 00:01:08.511 Bringing machine 'default' up with 'libvirt' provider... 00:01:09.093 ==> default: Creating image (snapshot of base box volume). 00:01:09.093 ==> default: Creating domain with the following settings... 00:01:09.093 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1732619487_8f673dedc924af07ac8c 00:01:09.093 ==> default: -- Domain type: kvm 00:01:09.093 ==> default: -- Cpus: 10 00:01:09.093 ==> default: -- Feature: acpi 00:01:09.093 ==> default: -- Feature: apic 00:01:09.093 ==> default: -- Feature: pae 00:01:09.093 ==> default: -- Memory: 12288M 00:01:09.093 ==> default: -- Memory Backing: hugepages: 00:01:09.093 ==> default: -- Management MAC: 00:01:09.093 ==> default: -- Loader: 00:01:09.093 ==> default: -- Nvram: 00:01:09.093 ==> default: -- Base box: spdk/ubuntu2404 00:01:09.093 ==> default: -- Storage pool: default 00:01:09.093 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1732619487_8f673dedc924af07ac8c.img (20G) 00:01:09.093 ==> default: -- Volume Cache: default 00:01:09.093 ==> default: -- Kernel: 00:01:09.093 ==> default: -- Initrd: 00:01:09.093 ==> default: -- Graphics Type: vnc 00:01:09.093 ==> default: -- Graphics Port: -1 00:01:09.093 ==> default: -- Graphics IP: 127.0.0.1 00:01:09.093 ==> default: -- Graphics Password: Not defined 00:01:09.093 ==> default: -- Video Type: cirrus 00:01:09.093 ==> default: -- Video VRAM: 9216 00:01:09.093 ==> default: -- Sound Type: 00:01:09.093 ==> default: -- Keymap: en-us 00:01:09.093 ==> default: -- TPM Path: 00:01:09.093 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:09.093 ==> default: -- Command line args: 00:01:09.093 ==> default: -> value=-device, 00:01:09.093 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:09.093 ==> default: -> value=-drive, 00:01:09.093 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:09.093 ==> default: -> value=-device, 00:01:09.093 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:09.353 ==> default: Creating shared folders metadata... 00:01:09.353 ==> default: Starting domain. 00:01:10.734 ==> default: Waiting for domain to get an IP address... 00:01:20.710 ==> default: Waiting for SSH to become available... 00:01:21.646 ==> default: Configuring and enabling network interfaces... 00:01:26.919 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:31.179 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:35.370 ==> default: Mounting SSHFS shared folder... 00:01:36.308 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output 00:01:36.308 ==> default: Checking Mount.. 00:01:36.875 ==> default: Folder Successfully Mounted! 
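The "Command line args" block in the domain settings above boils down to a plain QEMU passthrough: the raw backing file created earlier by create_nvme_img.sh is attached as an emulated NVMe controller with a single 4096-byte-block namespace. A sketch of just those flags (the -device/-drive values are copied verbatim from the log; machine type, memory, and boot options are omitted):

    # Illustration only: these are the NVMe-related arguments vagrant-libvirt
    # passed through above; every other flag a real invocation needs is left out.
    qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Splitting the controller (nvme) from the namespace (nvme-ns) is what lets later test configurations hang several backing images off one controller, as the multi-namespace ex4-nvme-multi*.img files prepared above suggest.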
00:01:36.875 ==> default: Running provisioner: file... 00:01:37.441 default: ~/.gitconfig => .gitconfig 00:01:37.698 00:01:37.698 SUCCESS! 00:01:37.698 00:01:37.698 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use. 00:01:37.698 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:37.698 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm. 00:01:37.698 00:01:37.706 [Pipeline] } 00:01:37.721 [Pipeline] // stage 00:01:37.731 [Pipeline] dir 00:01:37.731 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt 00:01:37.732 [Pipeline] { 00:01:37.746 [Pipeline] catchError 00:01:37.748 [Pipeline] { 00:01:37.761 [Pipeline] sh 00:01:38.039 + vagrant ssh-config --host vagrant 00:01:38.039 + sed -ne /^Host/,$p 00:01:38.039 + tee ssh_conf 00:01:41.324 Host vagrant 00:01:41.324 HostName 192.168.121.186 00:01:41.324 User vagrant 00:01:41.324 Port 22 00:01:41.324 UserKnownHostsFile /dev/null 00:01:41.324 StrictHostKeyChecking no 00:01:41.324 PasswordAuthentication no 00:01:41.324 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404 00:01:41.324 IdentitiesOnly yes 00:01:41.324 LogLevel FATAL 00:01:41.324 ForwardAgent yes 00:01:41.324 ForwardX11 yes 00:01:41.324 00:01:41.338 [Pipeline] withEnv 00:01:41.340 [Pipeline] { 00:01:41.353 [Pipeline] sh 00:01:41.633 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:41.633 source /etc/os-release 00:01:41.633 [[ -e /image.version ]] && img=$(< /image.version) 00:01:41.633 # Minimal, systemd-like check. 00:01:41.633 if [[ -e /.dockerenv ]]; then 00:01:41.633 # Clear garbage from the node's name: 00:01:41.633 # agt-er_autotest_547-896 -> autotest_547-896 00:01:41.633 # $HOSTNAME is the actual container id 00:01:41.633 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:41.633 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:41.633 # We can assume this is a mount from a host where container is running, 00:01:41.633 # so fetch its hostname to easily identify the target swarm worker. 
00:01:41.633 container="$(< /etc/hostname) ($agent)" 00:01:41.633 else 00:01:41.633 # Fallback 00:01:41.633 container=$agent 00:01:41.633 fi 00:01:41.633 fi 00:01:41.633 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:41.633 00:01:41.904 [Pipeline] } 00:01:41.922 [Pipeline] // withEnv 00:01:41.932 [Pipeline] setCustomBuildProperty 00:01:41.948 [Pipeline] stage 00:01:41.950 [Pipeline] { (Tests) 00:01:41.970 [Pipeline] sh 00:01:42.254 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:42.529 [Pipeline] sh 00:01:42.809 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:43.086 [Pipeline] timeout 00:01:43.086 Timeout set to expire in 1 hr 30 min 00:01:43.089 [Pipeline] { 00:01:43.106 [Pipeline] sh 00:01:43.385 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:43.953 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:43.966 [Pipeline] sh 00:01:44.248 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:44.521 [Pipeline] sh 00:01:44.830 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:45.115 [Pipeline] sh 00:01:45.397 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo 00:01:45.656 ++ readlink -f spdk_repo 00:01:45.656 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:45.656 + [[ -n /home/vagrant/spdk_repo ]] 00:01:45.656 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:45.656 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:45.656 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:45.656 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:45.656 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:45.656 + [[ ubuntu24-vg-autotest == pkgdep-* ]] 00:01:45.656 + cd /home/vagrant/spdk_repo 00:01:45.656 + source /etc/os-release 00:01:45.656 ++ PRETTY_NAME='Ubuntu 24.04 LTS' 00:01:45.656 ++ NAME=Ubuntu 00:01:45.656 ++ VERSION_ID=24.04 00:01:45.656 ++ VERSION='24.04 LTS (Noble Numbat)' 00:01:45.656 ++ VERSION_CODENAME=noble 00:01:45.656 ++ ID=ubuntu 00:01:45.656 ++ ID_LIKE=debian 00:01:45.656 ++ HOME_URL=https://www.ubuntu.com/ 00:01:45.656 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:45.656 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:45.656 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:45.656 ++ UBUNTU_CODENAME=noble 00:01:45.656 ++ LOGO=ubuntu-logo 00:01:45.656 + uname -a 00:01:45.656 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:45.656 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:45.656 Hugepages 00:01:45.656 node hugesize free / total 00:01:45.656 node0 1048576kB 0 / 0 00:01:45.656 node0 2048kB 0 / 0 00:01:45.656 00:01:45.656 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:45.656 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:45.915 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:45.915 + rm -f /tmp/spdk-ld-path 00:01:45.915 + source autorun-spdk.conf 00:01:45.915 ++ SPDK_TEST_UNITTEST=1 00:01:45.915 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.915 ++ SPDK_TEST_NVME=1 00:01:45.915 ++ SPDK_TEST_BLOCKDEV=1 00:01:45.915 ++ SPDK_RUN_ASAN=1 00:01:45.915 ++ SPDK_RUN_UBSAN=1 00:01:45.915 ++ SPDK_TEST_RAID5=1 00:01:45.915 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.915 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:45.915 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.915 ++ RUN_NIGHTLY=1 00:01:45.915 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:45.915 + [[ -n '' ]] 00:01:45.915 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:45.915 + for M in /var/spdk/build-*-manifest.txt 00:01:45.915 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:45.915 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:45.915 + for M in /var/spdk/build-*-manifest.txt 00:01:45.915 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:45.915 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:45.915 ++ uname 00:01:45.915 + [[ Linux == \L\i\n\u\x ]] 00:01:45.915 + sudo dmesg -T 00:01:45.915 + sudo dmesg --clear 00:01:45.915 + dmesg_pid=2526 00:01:45.915 + [[ Ubuntu == FreeBSD ]] 00:01:45.915 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:45.915 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:45.915 + sudo dmesg -Tw 00:01:45.915 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:45.915 + [[ -x /usr/src/fio-static/fio ]] 00:01:45.915 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:45.915 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:45.915 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:45.915 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:45.915 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:45.915 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:45.915 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:45.915 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:45.915 Test configuration: 00:01:45.915 SPDK_TEST_UNITTEST=1 00:01:45.915 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.915 SPDK_TEST_NVME=1 00:01:45.915 SPDK_TEST_BLOCKDEV=1 00:01:45.915 SPDK_RUN_ASAN=1 00:01:45.915 SPDK_RUN_UBSAN=1 00:01:45.916 SPDK_TEST_RAID5=1 00:01:45.916 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:45.916 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:45.916 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.916 RUN_NIGHTLY=1 11:12:03 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:45.916 11:12:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:45.916 11:12:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:45.916 11:12:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:45.916 11:12:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:45.916 11:12:03 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:45.916 11:12:03 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:45.916 11:12:03 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:45.916 11:12:03 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:45.916 11:12:03 -- paths/export.sh@6 -- $ export PATH 00:01:45.916 11:12:03 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:45.916 11:12:03 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:45.916 11:12:03 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:45.916 11:12:03 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732619523.XXXXXX 00:01:45.916 11:12:03 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732619523.9pqfQd 00:01:45.916 11:12:03 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:45.916 11:12:03 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:01:45.916 11:12:03 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:45.916 11:12:03 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:45.916 11:12:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:45.916 11:12:03 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:45.916 11:12:03 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:45.916 11:12:03 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:45.916 11:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.916 11:12:03 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:45.916 11:12:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.916 11:12:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.916 11:12:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:45.916 11:12:03 -- spdk/autobuild.sh@16 -- $ date -u 00:01:46.175 Tue Nov 26 11:12:03 UTC 2024 00:01:46.175 11:12:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:46.175 LTS-67-gc13c99a5e 00:01:46.175 11:12:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:46.175 11:12:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:46.175 11:12:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:46.175 11:12:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:46.175 11:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.175 ************************************ 00:01:46.175 START TEST asan 00:01:46.175 ************************************ 00:01:46.175 using asan 00:01:46.175 11:12:03 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:46.175 00:01:46.175 real 0m0.000s 00:01:46.175 user 0m0.000s 00:01:46.175 sys 0m0.000s 00:01:46.175 ************************************ 00:01:46.175 END TEST asan 00:01:46.175 ************************************ 00:01:46.175 11:12:03 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:46.175 11:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.175 11:12:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:46.175 11:12:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:46.175 11:12:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:46.175 11:12:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:46.175 11:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.175 ************************************ 00:01:46.175 START TEST ubsan 00:01:46.175 ************************************ 00:01:46.175 using ubsan 00:01:46.175 11:12:03 -- 
common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:46.175 00:01:46.175 real 0m0.000s 00:01:46.175 user 0m0.000s 00:01:46.175 sys 0m0.000s 00:01:46.175 11:12:03 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:46.175 11:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.175 ************************************ 00:01:46.175 END TEST ubsan 00:01:46.175 ************************************ 00:01:46.175 11:12:03 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:46.175 11:12:03 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:46.175 11:12:03 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:46.175 11:12:03 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:46.175 11:12:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:46.175 11:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.175 ************************************ 00:01:46.175 START TEST build_native_dpdk 00:01:46.175 ************************************ 00:01:46.175 11:12:03 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:46.175 11:12:03 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:46.175 11:12:03 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:46.175 11:12:03 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:46.175 11:12:03 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:46.176 11:12:03 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:46.176 11:12:03 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:46.176 11:12:03 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:46.176 11:12:03 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:46.176 11:12:03 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:46.176 11:12:03 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:46.176 11:12:03 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:46.176 11:12:03 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:46.176 11:12:03 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:46.176 11:12:03 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:46.176 11:12:03 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:46.176 11:12:03 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:46.176 11:12:03 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:46.176 eeb0605f11 version: 23.11.0 00:01:46.176 238778122a doc: update release notes for 23.11 00:01:46.176 46aa6b3cfc doc: fix description of RSS features 00:01:46.176 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:46.176 7e421ae345 devtools: support skipping forbid rule check 00:01:46.176 11:12:03 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:46.176 11:12:03 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:46.176 11:12:03 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:46.176 11:12:03 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:46.176 11:12:03 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:46.176 11:12:03 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:46.176 11:12:03 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:46.176 11:12:03 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:46.176 11:12:03 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:46.176 11:12:03 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:46.176 11:12:03 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:46.176 11:12:03 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:46.176 11:12:03 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:46.176 11:12:03 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:46.176 11:12:03 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:46.176 11:12:03 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:46.176 11:12:03 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:46.176 11:12:03 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:46.176 11:12:03 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:46.176 11:12:03 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:46.176 11:12:03 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:46.176 11:12:03 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:46.176 11:12:03 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:46.176 11:12:03 -- scripts/common.sh@343 -- $ case "$op" in 00:01:46.176 11:12:03 -- scripts/common.sh@344 -- $ : 1 00:01:46.176 11:12:03 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:46.176 11:12:03 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:46.176 11:12:03 -- scripts/common.sh@364 -- $ decimal 23 00:01:46.176 11:12:03 -- scripts/common.sh@352 -- $ local d=23 00:01:46.176 11:12:03 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:46.176 11:12:03 -- scripts/common.sh@354 -- $ echo 23 00:01:46.176 11:12:03 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:46.176 11:12:03 -- scripts/common.sh@365 -- $ decimal 21 00:01:46.176 11:12:03 -- scripts/common.sh@352 -- $ local d=21 00:01:46.176 11:12:03 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:46.176 11:12:03 -- scripts/common.sh@354 -- $ echo 21 00:01:46.176 11:12:03 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:46.176 11:12:03 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:46.176 11:12:03 -- scripts/common.sh@366 -- $ return 1 00:01:46.176 11:12:03 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:46.176 patching file config/rte_config.h 00:01:46.176 Hunk #1 succeeded at 60 (offset 1 line). 00:01:46.176 11:12:03 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:46.176 11:12:03 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:46.176 11:12:03 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:46.176 11:12:03 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:46.176 11:12:03 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:46.176 11:12:03 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:46.176 11:12:03 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:46.176 11:12:03 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:46.176 11:12:03 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:46.176 11:12:03 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:46.176 11:12:03 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:46.176 11:12:03 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:46.176 11:12:03 -- scripts/common.sh@343 -- $ case "$op" in 00:01:46.176 11:12:03 -- scripts/common.sh@344 -- $ : 1 00:01:46.176 11:12:03 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:46.176 11:12:03 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:46.176 11:12:03 -- scripts/common.sh@364 -- $ decimal 23 00:01:46.176 11:12:03 -- scripts/common.sh@352 -- $ local d=23 00:01:46.176 11:12:03 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:46.176 11:12:03 -- scripts/common.sh@354 -- $ echo 23 00:01:46.176 11:12:03 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:46.176 11:12:03 -- scripts/common.sh@365 -- $ decimal 24 00:01:46.176 11:12:03 -- scripts/common.sh@352 -- $ local d=24 00:01:46.176 11:12:03 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:46.176 11:12:03 -- scripts/common.sh@354 -- $ echo 24 00:01:46.176 11:12:03 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:46.176 11:12:03 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:46.176 11:12:03 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:46.176 11:12:03 -- scripts/common.sh@367 -- $ return 0 00:01:46.176 11:12:03 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:46.176 patching file lib/pcapng/rte_pcapng.c 00:01:46.176 11:12:03 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:46.176 11:12:03 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:46.176 11:12:03 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:46.176 11:12:03 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:46.176 11:12:03 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:51.447 The Meson build system 00:01:51.447 Version: 1.4.1 00:01:51.447 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:51.447 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:51.447 Build type: native build 00:01:51.447 Program cat found: YES (/usr/bin/cat) 00:01:51.447 Project name: DPDK 00:01:51.447 Project version: 23.11.0 00:01:51.448 C compiler for the host machine: gcc (gcc 13.2.0 "gcc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:01:51.448 C linker for the host machine: gcc ld.bfd 2.42 00:01:51.448 Host machine cpu family: x86_64 00:01:51.448 Host machine cpu: x86_64 00:01:51.448 Message: ## Building in Developer Mode ## 00:01:51.448 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.448 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:51.448 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.448 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:01:51.448 Program cat found: YES (/usr/bin/cat) 00:01:51.448 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:51.448 Compiler for C supports arguments -march=native: YES 00:01:51.448 Checking for size of "void *" : 8 00:01:51.448 Checking for size of "void *" : 8 (cached) 00:01:51.448 Library m found: YES 00:01:51.448 Library numa found: YES 00:01:51.448 Has header "numaif.h" : YES 00:01:51.448 Library fdt found: NO 00:01:51.448 Library execinfo found: NO 00:01:51.448 Has header "execinfo.h" : YES 00:01:51.448 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:01:51.448 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.448 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.448 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.448 Run-time dependency openssl found: YES 3.0.13 00:01:51.448 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:51.448 Library pcap found: NO 00:01:51.448 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.448 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.448 Compiler for C supports arguments -Wformat: YES 00:01:51.448 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:51.448 Compiler for C supports arguments -Wformat-security: YES 00:01:51.448 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.448 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.448 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.448 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.448 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.448 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.448 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.448 Compiler for C supports arguments -Wundef: YES 00:01:51.448 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.448 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.448 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.448 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.448 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.448 Program objdump found: YES (/usr/bin/objdump) 00:01:51.448 Compiler for C supports arguments -mavx512f: YES 00:01:51.448 Checking if "AVX512 checking" compiles: YES 00:01:51.448 Fetching value of define "__SSE4_2__" : 1 00:01:51.448 Fetching value of define "__AES__" : 1 00:01:51.448 Fetching value of define "__AVX__" : 1 00:01:51.448 Fetching value of define "__AVX2__" : 1 00:01:51.448 Fetching value of define "__AVX512BW__" : (undefined) 00:01:51.448 Fetching value of define "__AVX512CD__" : (undefined) 00:01:51.448 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:51.448 Fetching value of define "__AVX512F__" : (undefined) 00:01:51.448 Fetching value of define "__AVX512VL__" : (undefined) 00:01:51.448 Fetching value of define "__PCLMUL__" : 1 00:01:51.448 Fetching value of define "__RDRND__" : 1 00:01:51.448 Fetching value of define "__RDSEED__" : 1 00:01:51.448 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:51.448 Fetching value of define "__znver1__" : (undefined) 00:01:51.448 Fetching value of define "__znver2__" : (undefined) 00:01:51.448 Fetching value of define "__znver3__" : (undefined) 00:01:51.448 Fetching value of define "__znver4__" : (undefined) 00:01:51.448 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.448 Message: lib/log: Defining dependency "log" 00:01:51.448 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.448 Message: 
lib/telemetry: Defining dependency "telemetry" 00:01:51.448 Checking for function "getentropy" : NO 00:01:51.448 Message: lib/eal: Defining dependency "eal" 00:01:51.448 Message: lib/ring: Defining dependency "ring" 00:01:51.448 Message: lib/rcu: Defining dependency "rcu" 00:01:51.448 Message: lib/mempool: Defining dependency "mempool" 00:01:51.448 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.448 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.448 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:51.448 Compiler for C supports arguments -mpclmul: YES 00:01:51.448 Compiler for C supports arguments -maes: YES 00:01:51.448 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.448 Compiler for C supports arguments -mavx512bw: YES 00:01:51.448 Compiler for C supports arguments -mavx512dq: YES 00:01:51.448 Compiler for C supports arguments -mavx512vl: YES 00:01:51.448 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.448 Compiler for C supports arguments -mavx2: YES 00:01:51.448 Compiler for C supports arguments -mavx: YES 00:01:51.448 Message: lib/net: Defining dependency "net" 00:01:51.448 Message: lib/meter: Defining dependency "meter" 00:01:51.448 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.448 Message: lib/pci: Defining dependency "pci" 00:01:51.448 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.448 Message: lib/metrics: Defining dependency "metrics" 00:01:51.448 Message: lib/hash: Defining dependency "hash" 00:01:51.448 Message: lib/timer: Defining dependency "timer" 00:01:51.448 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:51.448 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:51.448 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:51.448 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:51.448 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:51.448 Message: lib/acl: Defining dependency "acl" 00:01:51.448 Message: lib/bbdev: Defining dependency "bbdev" 00:01:51.448 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:51.448 Run-time dependency libelf found: YES 0.190 00:01:51.448 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:01:51.448 Message: lib/bpf: Defining dependency "bpf" 00:01:51.448 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:51.448 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.448 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.448 Message: lib/distributor: Defining dependency "distributor" 00:01:51.448 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.448 Message: lib/efd: Defining dependency "efd" 00:01:51.448 Message: lib/eventdev: Defining dependency "eventdev" 00:01:51.448 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:51.448 Message: lib/gpudev: Defining dependency "gpudev" 00:01:51.448 Message: lib/gro: Defining dependency "gro" 00:01:51.448 Message: lib/gso: Defining dependency "gso" 00:01:51.448 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:51.448 Message: lib/jobstats: Defining dependency "jobstats" 00:01:51.448 Message: lib/latencystats: Defining dependency "latencystats" 00:01:51.448 Message: lib/lpm: Defining dependency "lpm" 00:01:51.448 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:51.448 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:51.448 Fetching value of 
define "__AVX512IFMA__" : (undefined) 00:01:51.448 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:51.448 Message: lib/member: Defining dependency "member" 00:01:51.448 Message: lib/pcapng: Defining dependency "pcapng" 00:01:51.448 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.448 Message: lib/power: Defining dependency "power" 00:01:51.448 Message: lib/rawdev: Defining dependency "rawdev" 00:01:51.448 Message: lib/regexdev: Defining dependency "regexdev" 00:01:51.448 Message: lib/mldev: Defining dependency "mldev" 00:01:51.448 Message: lib/rib: Defining dependency "rib" 00:01:51.448 Message: lib/reorder: Defining dependency "reorder" 00:01:51.448 Message: lib/sched: Defining dependency "sched" 00:01:51.448 Message: lib/security: Defining dependency "security" 00:01:51.448 Message: lib/stack: Defining dependency "stack" 00:01:51.448 Has header "linux/userfaultfd.h" : YES 00:01:51.448 Has header "linux/vduse.h" : YES 00:01:51.448 Message: lib/vhost: Defining dependency "vhost" 00:01:51.448 Message: lib/ipsec: Defining dependency "ipsec" 00:01:51.448 Message: lib/pdcp: Defining dependency "pdcp" 00:01:51.448 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:51.448 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:51.448 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:51.448 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:51.448 Message: lib/fib: Defining dependency "fib" 00:01:51.448 Message: lib/port: Defining dependency "port" 00:01:51.448 Message: lib/pdump: Defining dependency "pdump" 00:01:51.448 Message: lib/table: Defining dependency "table" 00:01:51.448 Message: lib/pipeline: Defining dependency "pipeline" 00:01:51.448 Message: lib/graph: Defining dependency "graph" 00:01:51.448 Message: lib/node: Defining dependency "node" 00:01:53.355 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.355 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.355 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.355 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.355 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:53.355 Compiler for C supports arguments -Wno-unused-value: YES 00:01:53.355 Compiler for C supports arguments -Wno-format: YES 00:01:53.355 Compiler for C supports arguments -Wno-format-security: YES 00:01:53.355 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:53.355 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:53.355 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:53.355 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:53.355 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.355 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.355 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:53.355 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:53.355 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:53.355 Has header "sys/epoll.h" : YES 00:01:53.355 Program doxygen found: YES (/usr/bin/doxygen) 00:01:53.355 Configuring doxy-api-html.conf using configuration 00:01:53.355 Configuring doxy-api-man.conf using configuration 00:01:53.355 Program mandb found: YES (/usr/bin/mandb) 00:01:53.355 Program sphinx-build found: NO 00:01:53.355 Configuring rte_build_config.h using configuration 00:01:53.355 Message: 00:01:53.355 
================= 00:01:53.355 Applications Enabled 00:01:53.355 ================= 00:01:53.355 00:01:53.355 apps: 00:01:53.355 graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:53.355 test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, test-pmd, 00:01:53.355 test-regex, test-sad, test-security-perf, 00:01:53.355 00:01:53.355 Message: 00:01:53.355 ================= 00:01:53.355 Libraries Enabled 00:01:53.355 ================= 00:01:53.355 00:01:53.355 libs: 00:01:53.355 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.355 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:53.355 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:53.355 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:53.355 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:53.355 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:53.355 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:53.355 00:01:53.355 00:01:53.355 Message: 00:01:53.355 =============== 00:01:53.355 Drivers Enabled 00:01:53.355 =============== 00:01:53.355 00:01:53.355 common: 00:01:53.355 00:01:53.355 bus: 00:01:53.355 pci, vdev, 00:01:53.355 mempool: 00:01:53.355 ring, 00:01:53.355 dma: 00:01:53.355 00:01:53.355 net: 00:01:53.355 i40e, 00:01:53.355 raw: 00:01:53.355 00:01:53.355 crypto: 00:01:53.355 00:01:53.355 compress: 00:01:53.355 00:01:53.355 regex: 00:01:53.355 00:01:53.355 ml: 00:01:53.355 00:01:53.355 vdpa: 00:01:53.355 00:01:53.355 event: 00:01:53.355 00:01:53.355 baseband: 00:01:53.355 00:01:53.355 gpu: 00:01:53.355 00:01:53.355 00:01:53.355 Message: 00:01:53.355 ================= 00:01:53.355 Content Skipped 00:01:53.355 ================= 00:01:53.355 00:01:53.355 apps: 00:01:53.355 dumpcap: missing dependency, "libpcap" 00:01:53.355 00:01:53.355 libs: 00:01:53.355 00:01:53.355 drivers: 00:01:53.355 common/cpt: not in enabled drivers build config 00:01:53.355 common/dpaax: not in enabled drivers build config 00:01:53.355 common/iavf: not in enabled drivers build config 00:01:53.355 common/idpf: not in enabled drivers build config 00:01:53.355 common/mvep: not in enabled drivers build config 00:01:53.355 common/octeontx: not in enabled drivers build config 00:01:53.355 bus/auxiliary: not in enabled drivers build config 00:01:53.355 bus/cdx: not in enabled drivers build config 00:01:53.355 bus/dpaa: not in enabled drivers build config 00:01:53.355 bus/fslmc: not in enabled drivers build config 00:01:53.355 bus/ifpga: not in enabled drivers build config 00:01:53.355 bus/platform: not in enabled drivers build config 00:01:53.355 bus/vmbus: not in enabled drivers build config 00:01:53.355 common/cnxk: not in enabled drivers build config 00:01:53.355 common/mlx5: not in enabled drivers build config 00:01:53.355 common/nfp: not in enabled drivers build config 00:01:53.355 common/qat: not in enabled drivers build config 00:01:53.355 common/sfc_efx: not in enabled drivers build config 00:01:53.355 mempool/bucket: not in enabled drivers build config 00:01:53.355 mempool/cnxk: not in enabled drivers build config 00:01:53.355 mempool/dpaa: not in enabled drivers build config 00:01:53.355 mempool/dpaa2: not in enabled drivers build config 00:01:53.355 mempool/octeontx: not in enabled drivers build config 00:01:53.355 mempool/stack: not in enabled drivers build config 00:01:53.355 dma/cnxk: not in enabled drivers build config 
00:01:53.355 dma/dpaa: not in enabled drivers build config 00:01:53.355 dma/dpaa2: not in enabled drivers build config 00:01:53.355 dma/hisilicon: not in enabled drivers build config 00:01:53.355 dma/idxd: not in enabled drivers build config 00:01:53.355 dma/ioat: not in enabled drivers build config 00:01:53.355 dma/skeleton: not in enabled drivers build config 00:01:53.355 net/af_packet: not in enabled drivers build config 00:01:53.355 net/af_xdp: not in enabled drivers build config 00:01:53.355 net/ark: not in enabled drivers build config 00:01:53.355 net/atlantic: not in enabled drivers build config 00:01:53.355 net/avp: not in enabled drivers build config 00:01:53.355 net/axgbe: not in enabled drivers build config 00:01:53.355 net/bnx2x: not in enabled drivers build config 00:01:53.355 net/bnxt: not in enabled drivers build config 00:01:53.355 net/bonding: not in enabled drivers build config 00:01:53.355 net/cnxk: not in enabled drivers build config 00:01:53.355 net/cpfl: not in enabled drivers build config 00:01:53.355 net/cxgbe: not in enabled drivers build config 00:01:53.355 net/dpaa: not in enabled drivers build config 00:01:53.355 net/dpaa2: not in enabled drivers build config 00:01:53.355 net/e1000: not in enabled drivers build config 00:01:53.355 net/ena: not in enabled drivers build config 00:01:53.355 net/enetc: not in enabled drivers build config 00:01:53.355 net/enetfec: not in enabled drivers build config 00:01:53.355 net/enic: not in enabled drivers build config 00:01:53.356 net/failsafe: not in enabled drivers build config 00:01:53.356 net/fm10k: not in enabled drivers build config 00:01:53.356 net/gve: not in enabled drivers build config 00:01:53.356 net/hinic: not in enabled drivers build config 00:01:53.356 net/hns3: not in enabled drivers build config 00:01:53.356 net/iavf: not in enabled drivers build config 00:01:53.356 net/ice: not in enabled drivers build config 00:01:53.356 net/idpf: not in enabled drivers build config 00:01:53.356 net/igc: not in enabled drivers build config 00:01:53.356 net/ionic: not in enabled drivers build config 00:01:53.356 net/ipn3ke: not in enabled drivers build config 00:01:53.356 net/ixgbe: not in enabled drivers build config 00:01:53.356 net/mana: not in enabled drivers build config 00:01:53.356 net/memif: not in enabled drivers build config 00:01:53.356 net/mlx4: not in enabled drivers build config 00:01:53.356 net/mlx5: not in enabled drivers build config 00:01:53.356 net/mvneta: not in enabled drivers build config 00:01:53.356 net/mvpp2: not in enabled drivers build config 00:01:53.356 net/netvsc: not in enabled drivers build config 00:01:53.356 net/nfb: not in enabled drivers build config 00:01:53.356 net/nfp: not in enabled drivers build config 00:01:53.356 net/ngbe: not in enabled drivers build config 00:01:53.356 net/null: not in enabled drivers build config 00:01:53.356 net/octeontx: not in enabled drivers build config 00:01:53.356 net/octeon_ep: not in enabled drivers build config 00:01:53.356 net/pcap: not in enabled drivers build config 00:01:53.356 net/pfe: not in enabled drivers build config 00:01:53.356 net/qede: not in enabled drivers build config 00:01:53.356 net/ring: not in enabled drivers build config 00:01:53.356 net/sfc: not in enabled drivers build config 00:01:53.356 net/softnic: not in enabled drivers build config 00:01:53.356 net/tap: not in enabled drivers build config 00:01:53.356 net/thunderx: not in enabled drivers build config 00:01:53.356 net/txgbe: not in enabled drivers build config 00:01:53.356 
00:01:53.356 net/vdev_netvsc: not in enabled drivers build config
00:01:53.356 net/vhost: not in enabled drivers build config
00:01:53.356 net/virtio: not in enabled drivers build config
00:01:53.356 net/vmxnet3: not in enabled drivers build config
00:01:53.356 raw/cnxk_bphy: not in enabled drivers build config
00:01:53.356 raw/cnxk_gpio: not in enabled drivers build config
00:01:53.356 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:53.356 raw/ifpga: not in enabled drivers build config
00:01:53.356 raw/ntb: not in enabled drivers build config
00:01:53.356 raw/skeleton: not in enabled drivers build config
00:01:53.356 crypto/armv8: not in enabled drivers build config
00:01:53.356 crypto/bcmfs: not in enabled drivers build config
00:01:53.356 crypto/caam_jr: not in enabled drivers build config
00:01:53.356 crypto/ccp: not in enabled drivers build config
00:01:53.356 crypto/cnxk: not in enabled drivers build config
00:01:53.356 crypto/dpaa_sec: not in enabled drivers build config
00:01:53.356 crypto/dpaa2_sec: not in enabled drivers build config
00:01:53.356 crypto/ipsec_mb: not in enabled drivers build config
00:01:53.356 crypto/mlx5: not in enabled drivers build config
00:01:53.356 crypto/mvsam: not in enabled drivers build config
00:01:53.356 crypto/nitrox: not in enabled drivers build config
00:01:53.356 crypto/null: not in enabled drivers build config
00:01:53.356 crypto/octeontx: not in enabled drivers build config
00:01:53.356 crypto/openssl: not in enabled drivers build config
00:01:53.356 crypto/scheduler: not in enabled drivers build config
00:01:53.356 crypto/uadk: not in enabled drivers build config
00:01:53.356 crypto/virtio: not in enabled drivers build config
00:01:53.356 compress/isal: not in enabled drivers build config
00:01:53.356 compress/mlx5: not in enabled drivers build config
00:01:53.356 compress/octeontx: not in enabled drivers build config
00:01:53.356 compress/zlib: not in enabled drivers build config
00:01:53.356 regex/mlx5: not in enabled drivers build config
00:01:53.356 regex/cn9k: not in enabled drivers build config
00:01:53.356 ml/cnxk: not in enabled drivers build config
00:01:53.356 vdpa/ifc: not in enabled drivers build config
00:01:53.356 vdpa/mlx5: not in enabled drivers build config
00:01:53.356 vdpa/nfp: not in enabled drivers build config
00:01:53.356 vdpa/sfc: not in enabled drivers build config
00:01:53.356 event/cnxk: not in enabled drivers build config
00:01:53.356 event/dlb2: not in enabled drivers build config
00:01:53.356 event/dpaa: not in enabled drivers build config
00:01:53.356 event/dpaa2: not in enabled drivers build config
00:01:53.356 event/dsw: not in enabled drivers build config
00:01:53.356 event/opdl: not in enabled drivers build config
00:01:53.356 event/skeleton: not in enabled drivers build config
00:01:53.356 event/sw: not in enabled drivers build config
00:01:53.356 event/octeontx: not in enabled drivers build config
00:01:53.356 baseband/acc: not in enabled drivers build config
00:01:53.356 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:53.356 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:53.356 baseband/la12xx: not in enabled drivers build config
00:01:53.356 baseband/null: not in enabled drivers build config
00:01:53.356 baseband/turbo_sw: not in enabled drivers build config
00:01:53.356 gpu/cuda: not in enabled drivers build config
00:01:53.356
00:01:53.356
00:01:53.356 Build targets in project: 219
00:01:53.356
00:01:53.356 DPDK 23.11.0
00:01:53.356
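The only application skipped in the summary above is dumpcap, dropped because Meson could not find libpcap. A minimal sketch of how that dependency could be satisfied before reconfiguring, assuming a Debian/Ubuntu host like the ubuntu24 runner this job uses (the package name libpcap-dev is an assumption; any libpcap development package visible to pkg-config should do):

    # Assumed package on Debian/Ubuntu; provides the headers/pc file Meson probes for
    sudo apt-get install -y libpcap-dev
    # Re-run configuration in the existing build directory so dumpcap gets enabled
    meson setup --reconfigure /home/vagrant/spdk_repo/dpdk/build-tmp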
00:01:53.356 User defined options
00:01:53.356 libdir : lib
00:01:53.356 prefix : /home/vagrant/spdk_repo/dpdk/build
00:01:53.356 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:53.356 c_link_args :
00:01:53.356 enable_docs : false
00:01:53.356 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:53.356 enable_kmods : false
00:01:53.356 machine : native
00:01:53.356 tests : false
00:01:53.356
00:01:53.356 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja
00:01:53.356 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
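The warning above concerns invoking Meson without an explicit subcommand. A minimal sketch of the equivalent, non-deprecated `meson setup` invocation, reconstructed from the user-defined options dumped above (option values are copied from that dump; running it from the DPDK source tree and the build-tmp directory name are assumptions based on the ninja command that follows):

    # Sketch only: mirrors the 'User defined options' block above
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false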
00:01:53.356 11:12:10 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:53.356 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:53.356 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:53.356 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:53.356 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:53.356 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:53.356 [5/707] Linking static target lib/librte_kvargs.a
00:01:53.356 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:53.356 [7/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:53.356 [8/707] Linking static target lib/librte_log.a
00:01:53.356 [9/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:53.616 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:53.875 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.875 [12/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.875 [13/707] Linking target lib/librte_log.so.24.0
00:01:53.875 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:53.875 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:54.133 [16/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:54.133 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:54.133 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:54.392 [19/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:54.392 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:54.392 [21/707] Linking target lib/librte_kvargs.so.24.0
00:01:54.392 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:54.392 [23/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:54.392 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:54.651 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:54.651 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:54.651 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:54.911 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:54.911 [29/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:54.911 [30/707] Linking static target lib/librte_telemetry.a
00:01:54.911 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:55.169 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:55.169 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:55.169 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:55.169 [35/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.169 [36/707] Linking target lib/librte_telemetry.so.24.0
00:01:55.169 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:55.427 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:55.427 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:55.427 [40/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:55.427 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:55.427 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:55.427 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:55.427 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:55.686 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:55.686 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:55.944 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:55.944 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:56.202 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:56.202 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:56.202 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:56.202 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:56.202 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:56.461 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:56.461 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:56.461 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:56.461 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:56.461 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:56.719 [59/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:56.719 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:56.719 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:56.719 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:56.977 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:56.977 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:56.977 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:56.977 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:56.977 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:56.977 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:57.236 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:57.495 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:57.495 [71/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:57.495 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:57.495 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:57.495 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:57.495 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:57.495 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:57.753 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:58.011 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:58.011 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:58.011 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:58.270 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:58.270 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:58.270 [83/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:58.270 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:58.270 [85/707] Linking static target lib/librte_ring.a
00:01:58.529 [86/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:58.529 [87/707] Linking static target lib/librte_eal.a
00:01:58.529 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:58.529 [89/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.788 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:58.788 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:58.788 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:58.788 [93/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:58.788 [94/707] Linking static target lib/librte_mempool.a
00:01:59.048 [95/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:59.048 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:59.048 [97/707] Linking static target lib/librte_rcu.a
00:01:59.307 [98/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:59.307 [99/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:59.307 [100/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:59.307 [101/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.566 [102/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.566 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:59.566 [104/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:59.566 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:59.824 [106/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:59.824 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:59.824 [108/707] Linking static target lib/librte_mbuf.a
00:01:59.824 [109/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:59.824 [110/707] Linking static target lib/librte_net.a
00:02:00.083 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:00.083 [112/707] Linking static target lib/librte_meter.a
00:02:00.083 [113/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.342 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:00.342 [115/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.342 [116/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.342 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:00.342 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:00.342 [119/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:01.280 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:01.280 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:01.538 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:01.538 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:01.538 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:01.538 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:01.797 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:01.797 [127/707] Linking static target lib/librte_pci.a
00:02:01.797 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:01.797 [129/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:01.797 [130/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.055 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:02.055 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:02.055 [133/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:02.055 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:02.055 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:02.055 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:02.055 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:02.055 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:02.314 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:02.314 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:02.314 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:02.314 [142/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:02.572 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:02.572 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:02.572 [145/707] Linking static target lib/librte_cmdline.a
00:02:02.831 [146/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:02.831 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:02.831 [148/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:02.831 [149/707] Linking static target lib/librte_metrics.a
00:02:03.090 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:03.349 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.349 [152/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.609 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:03.609 [154/707] Linking static target lib/librte_timer.a
00:02:03.609 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:03.868 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.436 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:04.436 [158/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:04.436 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:04.436 [160/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.695 [161/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:04.695 [162/707] Linking target lib/librte_eal.so.24.0
00:02:04.695 [163/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:04.695 [164/707] Linking target lib/librte_ring.so.24.0
00:02:04.953 [165/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:04.953 [166/707] Linking target lib/librte_rcu.so.24.0
00:02:04.953 [167/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:04.953 [168/707] Linking target lib/librte_mempool.so.24.0
00:02:05.212 [169/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:05.212 [170/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:05.212 [171/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:05.212 [172/707] Linking target lib/librte_mbuf.so.24.0
00:02:05.212 [173/707] Linking target lib/librte_meter.so.24.0
00:02:05.212 [174/707] Linking target lib/librte_pci.so.24.0
00:02:05.212 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:05.212 [176/707] Linking target lib/librte_timer.so.24.0
00:02:05.471 [177/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:05.471 [178/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:05.471 [179/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:05.471 [180/707] Linking static target lib/librte_bitratestats.a
00:02:05.471 [181/707] Linking target lib/librte_net.so.24.0
00:02:05.471 [182/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:05.471 [183/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:05.471 [184/707] Linking static target lib/librte_ethdev.a
00:02:05.471 [185/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:05.471 [186/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:05.471 [187/707] Linking target lib/librte_cmdline.so.24.0
00:02:05.471 [188/707] Linking static target lib/librte_bbdev.a
00:02:05.471 [189/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.729 [190/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:05.729 [191/707] Linking static target lib/librte_hash.a
00:02:05.729 [192/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:05.729 [193/707] Linking static target lib/acl/libavx2_tmp.a
00:02:06.297 [194/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:06.297 [195/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.297 [196/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.297 [197/707] Linking target lib/librte_bbdev.so.24.0
00:02:06.297 [198/707] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:06.297 [199/707] Linking static target lib/acl/libavx512_tmp.a
00:02:06.297 [200/707] Linking target lib/librte_hash.so.24.0
00:02:06.297 [201/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:06.297 [202/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:06.297 [203/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:06.556 [204/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:06.556 [205/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:06.556 [206/707] Linking static target lib/librte_acl.a
00:02:06.556 [207/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:06.556 [208/707] Linking static target lib/librte_cfgfile.a
00:02:06.816 [209/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:06.816 [210/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.816 [211/707] Linking target lib/librte_acl.so.24.0
00:02:06.816 [212/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.075 [213/707] Linking target lib/librte_cfgfile.so.24.0
00:02:07.075 [214/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:07.075 [215/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:07.075 [216/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:07.334 [217/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:07.334 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:07.334 [219/707] Linking static target lib/librte_compressdev.a
00:02:07.593 [220/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:07.593 [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:07.852 [222/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:07.852 [223/707] Linking static target lib/librte_bpf.a
00:02:07.852 [224/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.852 [225/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:07.852 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:07.852 [227/707] Linking target lib/librte_compressdev.so.24.0
00:02:08.111 [228/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:08.111 [229/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.111 [230/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:08.111 [231/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:08.111 [232/707] Linking static target lib/librte_distributor.a
00:02:08.369 [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.369 [234/707] Linking target lib/librte_distributor.so.24.0
00:02:08.627 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:08.627 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:08.627 [237/707] Linking static target lib/librte_dmadev.a
00:02:09.194 [238/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.194 [239/707] Linking target lib/librte_dmadev.so.24.0
00:02:09.194 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:09.194 [241/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:09.452 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:09.452 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:09.710 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:09.710 [245/707] Linking static target lib/librte_efd.a
00:02:09.710 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:09.969 [247/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:09.969 [248/707] Linking static target lib/librte_cryptodev.a
00:02:09.969 [249/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.969 [250/707] Linking target lib/librte_efd.so.24.0
00:02:10.246 [251/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.246 [252/707] Linking target lib/librte_ethdev.so.24.0
00:02:10.246 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:10.540 [254/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:10.540 [255/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:10.540 [256/707] Linking target lib/librte_metrics.so.24.0
00:02:10.540 [257/707] Linking target lib/librte_bpf.so.24.0
00:02:10.540 [258/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:10.540 [259/707] Linking static target lib/librte_dispatcher.a
00:02:10.540 [260/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:10.540 [261/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:10.540 [262/707] Linking static target lib/librte_gpudev.a
00:02:10.540 [263/707] Linking target lib/librte_bitratestats.so.24.0
00:02:10.799 [264/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:10.799 [265/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:10.799 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:11.057 [267/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.057 [268/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.315 [269/707] Linking target lib/librte_cryptodev.so.24.0
00:02:11.315 [270/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:11.315 [271/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:11.315 [272/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:11.315 [273/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.315 [274/707] Linking target lib/librte_gpudev.so.24.0
00:02:11.574 [275/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:11.574 [276/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:11.574 [277/707] Linking static target lib/librte_eventdev.a
00:02:11.832 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:11.833 [279/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:11.833 [280/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:11.833 [281/707] Linking static target lib/librte_gro.a
00:02:11.833 [282/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:12.091 [283/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:12.091 [284/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.091 [285/707] Linking target lib/librte_gro.so.24.0
00:02:12.091 [286/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:12.091 [287/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:12.091 [288/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:12.091 [289/707] Linking static target lib/librte_gso.a
00:02:12.350 [290/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.350 [291/707] Linking target lib/librte_gso.so.24.0
00:02:12.608 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:12.608 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:12.608 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:12.608 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:12.866 [296/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:12.866 [297/707] Linking static target lib/librte_jobstats.a
00:02:12.866 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:13.125 [299/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:13.125 [300/707] Linking static target lib/librte_latencystats.a
00:02:13.125 [301/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:13.125 [302/707] Linking static target lib/librte_ip_frag.a
00:02:13.125 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.125 [304/707] Linking target lib/librte_jobstats.so.24.0
00:02:13.125 [305/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.382 [306/707] Linking target lib/librte_latencystats.so.24.0
00:02:13.382 [307/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.382 [308/707] Linking target lib/librte_ip_frag.so.24.0
00:02:13.382 [309/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:13.382 [310/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:13.382 [311/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:13.382 [312/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:13.382 [313/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:13.641 [314/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:13.641 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:13.641 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:13.641 [317/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.899 [318/707] Linking target lib/librte_eventdev.so.24.0
00:02:13.899 [319/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:13.899 [320/707] Linking target lib/librte_dispatcher.so.24.0
00:02:14.158 [321/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:14.158 [322/707] Linking static target lib/librte_lpm.a
00:02:14.158 [323/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:14.158 [324/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:14.416 [325/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:14.416 [326/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.416 [327/707] Linking target lib/librte_lpm.so.24.0
00:02:14.416 [328/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:14.416 [329/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:14.416 [330/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:14.416 [331/707] Linking static target lib/librte_pcapng.a
00:02:14.416 [332/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:14.675 [333/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:14.675 [334/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.675 [335/707] Linking target lib/librte_pcapng.so.24.0
00:02:14.933 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:14.933 [337/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:14.933 [338/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:14.933 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:15.191 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:15.449 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:15.449 [342/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:15.450 [343/707] Linking static target lib/librte_power.a
00:02:15.450 [344/707] Linking static target lib/librte_regexdev.a
00:02:15.450 [345/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:15.450 [346/707] Linking static target lib/librte_rawdev.a
00:02:15.450 [347/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:15.450 [348/707] Linking static target lib/librte_member.a
00:02:15.450 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:15.450 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:15.708 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:15.708 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:15.708 [353/707] Linking static target lib/librte_mldev.a
00:02:15.967 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.967 [355/707] Linking target lib/librte_member.so.24.0
00:02:15.967 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.967 [357/707] Linking target lib/librte_rawdev.so.24.0
00:02:15.967 [358/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.967 [359/707] Linking target lib/librte_power.so.24.0
00:02:15.967 [360/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:15.967 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:15.967 [362/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.225 [363/707] Linking target lib/librte_regexdev.so.24.0
00:02:16.482 [364/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:16.482 [365/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:16.482 [366/707] Linking static target lib/librte_rib.a
00:02:16.482 [367/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:16.482 [368/707] Linking static target lib/librte_reorder.a
00:02:16.482 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:16.740 [370/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:16.740 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:16.740 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:16.740 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:16.740 [374/707] Linking static target lib/librte_stack.a
00:02:16.998 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.998 [376/707] Linking target lib/librte_reorder.so.24.0
00:02:16.998 [377/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.998 [378/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.998 [379/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.998 [380/707] Linking target lib/librte_stack.so.24.0
00:02:16.998 [381/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:16.998 [382/707] Linking static target lib/librte_security.a
00:02:16.998 [383/707] Linking target lib/librte_mldev.so.24.0
00:02:16.998 [384/707] Linking target lib/librte_rib.so.24.0
00:02:17.256 [385/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:17.256 [386/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:17.514 [387/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.514 [388/707] Linking target lib/librte_security.so.24.0
00:02:17.514 [389/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:17.514 [390/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:17.514 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:17.772 [392/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:17.772 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:17.772 [394/707] Linking static target lib/librte_sched.a
00:02:18.030 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:18.287 [396/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.287 [397/707] Linking target lib/librte_sched.so.24.0
00:02:18.287 [398/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:18.544 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:18.544 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:18.802 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:18.802 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:18.802 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:19.368 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:19.368 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:19.626 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:19.626 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:19.626 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:19.626 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:19.885 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:19.885 [411/707] Linking static target lib/librte_ipsec.a
00:02:19.885 [412/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:20.143 [413/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.143 [414/707] Linking target lib/librte_ipsec.so.24.0
00:02:20.401 [415/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:20.401 [416/707] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:20.401 [417/707] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:20.401 [418/707] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:20.401 [419/707] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:20.401 [420/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:20.401 [421/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:20.401 [422/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:21.334 [423/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:02:21.334 [424/707] Linking static target lib/librte_pdcp.a
00:02:21.334 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:21.593 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:21.593 [427/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:21.593 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:21.593 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:21.593 [430/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.593 [431/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:21.593 [432/707] Linking static target lib/librte_fib.a
00:02:21.593 [433/707] Linking target lib/librte_pdcp.so.24.0
00:02:21.852 [434/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.110 [435/707] Linking target lib/librte_fib.so.24.0
00:02:22.110 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:22.676 [437/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:22.676 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:22.676 [439/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:22.676 [440/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:22.676 [441/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:22.935 [442/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:22.935 [443/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:23.217 [444/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:23.496 [445/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:23.496 [446/707] Linking static target lib/librte_port.a
00:02:23.496 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:23.496 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:23.754 [449/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:23.754 [450/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:23.754 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:23.754 [452/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.013 [453/707] Linking target lib/librte_port.so.24.0
00:02:24.013 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:24.013 [455/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:24.013 [456/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:24.013 [457/707] Linking static target lib/librte_pdump.a
00:02:24.272 [458/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:24.272 [459/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.272 [460/707] Linking target lib/librte_pdump.so.24.0
00:02:24.530 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:24.788 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:25.046 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:25.046 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:25.046 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:25.046 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:25.046 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:25.305 [468/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:25.305 [469/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:25.305 [470/707] Linking static target lib/librte_table.a
00:02:25.564 [471/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:25.822 [472/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:26.080 [473/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.080 [474/707] Linking target lib/librte_table.so.24.0
00:02:26.080 [475/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:26.339 [476/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:26.597 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:26.597 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:26.855 [479/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:26.855 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:27.114 [481/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:27.114 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:27.376 [483/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:27.376 [484/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:27.376 [485/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:27.947 [486/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:27.947 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:28.206 [488/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:28.206 [489/707] Linking static target lib/librte_graph.a
00:02:28.206 [490/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:28.206 [491/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:28.464 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:28.722 [493/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.722 [494/707] Linking target lib/librte_graph.so.24.0
00:02:28.722 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:28.722 [496/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:28.980 [497/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:28.980 [498/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:29.238 [499/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:29.497 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:29.497 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:29.497 [502/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:29.756 [503/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:29.756 [504/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:29.756 [505/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:02:30.014 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:30.014 [507/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:30.580 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:30.580 [509/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:30.580 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:30.580 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:30.580 [512/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:02:30.580 [513/707] Linking static target lib/librte_node.a
00:02:30.580 [514/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:30.839 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.839 [516/707] Linking target lib/librte_node.so.24.0
00:02:31.097 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:31.097 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:31.097 [519/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:31.097 [520/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:31.354 [521/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:31.354 [522/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.354 [523/707] Linking static target drivers/librte_bus_vdev.a
00:02:31.354 [524/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:31.354 [525/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.354 [526/707] Linking static target drivers/librte_bus_pci.a
00:02:31.613 [527/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.613 [528/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.613 [529/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.613 [530/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:31.613 [531/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:31.613 [532/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:31.613 [533/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:31.871 [534/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:31.871 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.871 [536/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:31.871 [537/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:32.130 [538/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:32.130 [539/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:32.130 [540/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:32.130 [541/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:32.130 [542/707] Linking static target drivers/librte_mempool_ring.a
00:02:32.388 [543/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:32.388 [544/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:32.388 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:32.646 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:33.213 [547/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:33.471 [548/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:33.471 [549/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:33.471 [550/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:34.038 [551/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:34.038 [552/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:34.296 [553/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:02:34.296 [554/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:02:34.296 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:34.560 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:35.135 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:35.135 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:35.135 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:35.135 [560/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:35.393 [561/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:02:35.957 [562/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:35.957 [563/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:35.957 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:36.215 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:02:36.473 [566/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:36.473 [567/707] Linking static target lib/librte_vhost.a
00:02:36.731 [568/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:36.731 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:36.731 [570/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:02:36.731 [571/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:36.989 [572/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:02:37.246 [573/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:02:37.505 [574/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:37.505 [575/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:02:37.505 [576/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:02:37.505 [577/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:02:37.763 [578/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:02:37.763 [579/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.763 [580/707] Linking target lib/librte_vhost.so.24.0
00:02:37.763 [581/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:38.021 [582/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:38.021 [583/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:38.021 [584/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:38.279 [585/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:38.279 [586/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:38.279 [587/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:38.279 [588/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:38.279 [589/707] Linking static target drivers/librte_net_i40e.a
00:02:38.538 [590/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:38.538 [591/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:38.796 [592/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:38.796 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:39.054 [594/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.054 [595/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:39.054 [596/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:39.054 [597/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:39.620 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:39.878 [599/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:39.878 [600/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:40.137 [601/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:40.137 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:40.137 [603/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:40.137 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:40.137 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:40.703 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:40.961 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:40.961 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:40.961 [609/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:40.961 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:41.219 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:41.219 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:41.219 [613/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:41.219 [614/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:02:41.219 [615/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:41.786 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:41.786 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:42.044 [618/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:02:42.044 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:42.302 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:42.302 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:43.238 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:43.238 [623/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:43.238 [624/707] Linking static target lib/librte_pipeline.a
00:02:43.238 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:43.238 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:43.496 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:43.496 [628/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:43.496 [629/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:43.766 [630/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:43.766 [631/707] Linking target app/dpdk-pdump
00:02:43.766 [632/707] Linking target app/dpdk-graph
00:02:43.766 [633/707] Linking target app/dpdk-proc-info
00:02:44.024 [634/707] Linking target app/dpdk-test-acl
00:02:44.024 [635/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:44.024 [636/707] Linking target app/dpdk-test-cmdline
00:02:44.024 [637/707] Linking target app/dpdk-test-compress-perf
00:02:44.281 [638/707] Linking target app/dpdk-test-crypto-perf
target app/dpdk-test-dma-perf 00:02:44.281 [640/707] Linking target app/dpdk-test-fib 00:02:44.282 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:44.539 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:44.797 [643/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:44.797 [644/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:44.797 [645/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:45.054 [646/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:45.054 [647/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:45.054 [648/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:45.311 [649/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:45.311 [650/707] Linking target app/dpdk-test-gpudev 00:02:45.568 [651/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:45.568 [652/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:45.568 [653/707] Linking target app/dpdk-test-eventdev 00:02:45.568 [654/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:45.824 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:45.824 [656/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:46.080 [657/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:46.080 [658/707] Linking target app/dpdk-test-flow-perf 00:02:46.080 [659/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:46.080 [660/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.337 [661/707] Linking target lib/librte_pipeline.so.24.0 00:02:46.337 [662/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:46.337 [663/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:46.337 [664/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:46.595 [665/707] Linking target app/dpdk-test-bbdev 00:02:46.595 [666/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:46.853 [667/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:46.853 [668/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:46.853 [669/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:47.111 [670/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:47.369 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:47.369 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:47.369 [673/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:47.369 [674/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:47.627 [675/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:47.885 [676/707] Linking target app/dpdk-test-mldev 00:02:47.885 [677/707] Linking target app/dpdk-test-pipeline 00:02:47.885 [678/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:48.144 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:48.403 [680/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:48.661 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:48.661 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:48.661 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:48.919 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:48.919 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:49.487 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:49.487 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:49.487 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:49.487 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:49.745 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:50.003 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:50.261 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:50.520 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:50.778 [694/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:51.037 [695/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:51.037 [696/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:51.037 [697/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:51.037 [698/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:51.037 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:51.037 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:51.296 [701/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:51.296 [702/707] Linking target app/dpdk-test-regex 00:02:51.296 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:51.555 [704/707] Linking target app/dpdk-test-sad 00:02:51.815 [705/707] Linking target app/dpdk-testpmd 00:02:52.073 [706/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:52.639 [707/707] Linking target app/dpdk-test-security-perf 00:02:52.639 11:13:10 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:52.639 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:52.639 [0/1] Installing files. 
00:02:52.898 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:52.898 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:52.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.160 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:53.161 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.162 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.163 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.163 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.163 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing 
lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
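At this point the install has copied each core DPDK library (EAL, ring, mempool, mbuf, net, ethdev, and the rest) into build/lib in both static (.a) and shared (.so.24.0) form. A minimal sketch, not part of this log, of a program consuming the installed EAL; the file name eal_init.c and the compile line are assumptions, with the pkg-config file (libdpdk.pc) taken from the pkgconfig install that appears later in this log:

/*
 * Minimal EAL bring-up sketch (assumed example, not from this log).
 * Assumed build, using the prefix shown above:
 *   export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
 *   cc eal_init.c -o eal_init $(pkg-config --cflags --libs libdpdk)
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes the EAL arguments (cores, hugepages, ...)
     * and returns how many argv entries it parsed, negative on error. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    printf("EAL up on %u lcore(s)\n", rte_lcore_count());
    rte_eal_cleanup();  /* release hugepages and other EAL resources */
    return 0;
}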
00:02:53.164 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
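Among the libraries installed in this stretch is librte_ring, the lock-free ring buffer the rest of the stack builds on. A hedged sketch, not from this log, of single-producer/single-consumer ring usage against the installed tree (ring name, payload, and build invocation are assumptions; the EAL bring-up matches the earlier sketch):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    /* SP/SC ring of 1024 pointer slots on the caller's NUMA socket. */
    struct rte_ring *r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r != NULL) {
        int payload = 42;
        void *obj = NULL;
        rte_ring_enqueue(r, &payload);          /* returns 0 on success */
        if (rte_ring_dequeue(r, &obj) == 0)
            printf("dequeued %d\n", *(int *)obj);
        rte_ring_free(r);
    }
    rte_eal_cleanup();
    return 0;
}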
00:02:53.164 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.164 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.422 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.422 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.422 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.422 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.422 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.422 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.422 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.422 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:53.422 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.422 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing 
/home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 
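The headers landing in build/include here include rte_jhash.h, which is implemented as inline functions, so it can be exercised with nothing but the installed include directory and no DPDK library linked in. A standalone sketch, not from this log (the file name and compiler flags are assumptions; the -I path is the include prefix shown above):

/* Standalone sketch: rte_jhash() is inline, so this builds against the
 * installed headers alone, e.g. (flags assumed):
 *   cc -march=native -I/home/vagrant/spdk_repo/dpdk/build/include jhash_demo.c
 */
#include <stdio.h>
#include <string.h>
#include <rte_jhash.h>

int main(void)
{
    const char key[] = "dpdk";
    /* Bob Jenkins' hash over the key bytes, initial seed 0. */
    uint32_t h = rte_jhash(key, strlen(key), 0);
    printf("jhash(\"%s\") = 0x%08x\n", key, h);
    return 0;
}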
00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.681 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 
Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:53.682 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:53.682 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:53.682 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:53.682 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:53.682 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:53.682 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:53.682 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:53.682 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:53.682 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:53.682 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:53.682 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:53.682 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:53.682 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:53.682 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:53.682 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:53.682 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:53.682 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:53.682 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:53.682 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:53.682 Installing symlink pointing to librte_meter.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:53.682 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:53.682 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:53.682 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:53.682 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:53.682 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:53.682 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:53.682 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:53.682 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:53.682 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:53.682 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:53.682 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:53.682 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:53.683 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:53.683 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:53.683 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:53.683 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:53.683 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:53.683 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:53.683 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:53.683 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:53.683 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:53.683 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:53.683 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:53.683 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:53.683 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:53.683 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:53.683 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:53.683 Installing symlink pointing to librte_distributor.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:53.683 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:53.683 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:53.683 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:53.683 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:53.683 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:53.683 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:53.683 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:53.683 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:53.683 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:53.683 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:53.683 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:53.683 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:53.683 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:53.683 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:53.683 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:53.683 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:53.683 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:53.683 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:53.683 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:53.683 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:53.683 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:53.683 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:53.683 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:53.683 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:53.683 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:53.683 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:53.683 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:53.683 Installing symlink pointing to librte_power.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:53.683 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:53.683 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:53.683 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:53.683 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:53.683 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:53.683 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:53.683 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:53.683 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:53.683 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:53.683 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:53.683 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:53.683 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:53.683 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:53.683 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:53.683 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:53.683 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:53.683 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:53.683 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:53.683 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:53.683 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:53.683 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:53.683 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:53.683 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:53.683 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:53.683 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:53.683 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:53.683 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:53.683 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:53.683 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:53.683 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:53.683 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:53.683 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:53.683 
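The symlink records above reflect the usual DPDK install layout: each library lands as librte_X.so.24.0 with librte_X.so.24 and librte_X.so symlinks chained onto it, and the PMD driver libraries are additionally relocated into a dpdk/pmds-24.0 plugin directory, as the './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' records show. A minimal shell sketch of that pattern, using names from the log; this illustrates the layout, it is not the actual install script:

  lib=/home/vagrant/spdk_repo/dpdk/build/lib
  # versioned-symlink chain: linker name -> major version -> full version
  ln -sf librte_log.so.24.0 "$lib/librte_log.so.24"
  ln -sf librte_log.so.24 "$lib/librte_log.so"
  # driver PMDs move into a plugin subdirectory and get the same chain there
  mkdir -p "$lib/dpdk/pmds-24.0"
  mv "$lib"/librte_bus_pci.so* "$lib/dpdk/pmds-24.0/"
  ln -sf librte_bus_pci.so.24.0 "$lib/dpdk/pmds-24.0/librte_bus_pci.so.24"
  ln -sf librte_bus_pci.so.24 "$lib/dpdk/pmds-24.0/librte_bus_pci.so"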
Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:53.683 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:53.683 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:53.683 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:53.683 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:53.683 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:53.683 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:53.683 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:53.683 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:53.683 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:53.683 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:53.683 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:53.683 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:53.683 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:53.683 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:53.683 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:53.683 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:53.683 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:53.683 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:53.683 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:53.683 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:53.683 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:53.683 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:53.683 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:53.683 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:53.683 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:53.683 11:13:11 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:53.683 11:13:11 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 
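The `uname -s` / `[[ Linux == \F\r\e\e\B\S\D ]]` pair traced above is the OS gate common/autobuild_common.sh applies before continuing; on this run the test is false and the Linux path is taken. The idiom, sketched (the FreeBSD branch body is not visible in this log):

  if [[ "$(uname -s)" == "FreeBSD" ]]; then
    : # FreeBSD-specific handling would run here
  fi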
00:02:53.683 11:13:11 -- common/autobuild_common.sh@203 -- $ cat 00:02:53.683 11:13:11 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:53.683 00:02:53.683 real 1m7.890s 00:02:53.683 user 8m36.256s 00:02:53.683 sys 1m8.685s 00:02:53.683 11:13:11 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:53.683 ************************************ 00:02:53.683 END TEST build_native_dpdk 00:02:53.683 ************************************ 00:02:53.683 11:13:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.683 11:13:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:53.683 11:13:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:53.683 11:13:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:53.683 11:13:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:53.683 11:13:11 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:53.683 11:13:11 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:53.683 11:13:11 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build 00:02:53.683 11:13:11 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:53.683 11:13:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:53.683 11:13:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.683 ************************************ 00:02:53.683 START TEST unittest_build 00:02:53.683 ************************************ 00:02:53.683 11:13:11 -- common/autotest_common.sh@1114 -- $ _unittest_build 00:02:53.683 11:13:11 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:02:53.941 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:53.941 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.941 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:53.941 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:54.200 Using 'verbs' RDMA provider 00:03:09.769 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:21.974 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:21.974 Creating mk/config.mk...done. 00:03:21.974 Creating mk/cc.flags.mk...done. 00:03:21.974 Type 'make' to build. 00:03:21.974 11:13:39 -- common/autobuild_common.sh@408 -- $ make -j10 00:03:21.974 make[1]: Nothing to be done for 'all'. 
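Condensed, the prologue above configures SPDK against the DPDK tree that was just installed and kicks off the build. The flags below are copied verbatim from the traced configure invocation, and the -j value from the traced make command:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan \
    --enable-coverage --with-ublk --with-raid5f \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared
  make -j10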
00:03:40.121 CC lib/log/log.o 00:03:40.121 CC lib/log/log_flags.o 00:03:40.121 CC lib/log/log_deprecated.o 00:03:40.121 CC lib/ut/ut.o 00:03:40.121 CC lib/ut_mock/mock.o 00:03:40.121 LIB libspdk_ut_mock.a 00:03:40.121 LIB libspdk_ut.a 00:03:40.121 LIB libspdk_log.a 00:03:40.121 CXX lib/trace_parser/trace.o 00:03:40.121 CC lib/ioat/ioat.o 00:03:40.121 CC lib/util/base64.o 00:03:40.121 CC lib/util/bit_array.o 00:03:40.121 CC lib/dma/dma.o 00:03:40.121 CC lib/util/cpuset.o 00:03:40.121 CC lib/util/crc16.o 00:03:40.121 CC lib/util/crc32.o 00:03:40.121 CC lib/util/crc32c.o 00:03:40.121 CC lib/vfio_user/host/vfio_user_pci.o 00:03:40.121 CC lib/util/crc32_ieee.o 00:03:40.121 CC lib/util/crc64.o 00:03:40.121 CC lib/vfio_user/host/vfio_user.o 00:03:40.121 CC lib/util/dif.o 00:03:40.121 LIB libspdk_dma.a 00:03:40.121 CC lib/util/fd.o 00:03:40.121 CC lib/util/file.o 00:03:40.121 CC lib/util/hexlify.o 00:03:40.121 CC lib/util/iov.o 00:03:40.121 CC lib/util/math.o 00:03:40.121 LIB libspdk_ioat.a 00:03:40.121 CC lib/util/pipe.o 00:03:40.121 CC lib/util/strerror_tls.o 00:03:40.121 CC lib/util/string.o 00:03:40.121 CC lib/util/uuid.o 00:03:40.121 CC lib/util/fd_group.o 00:03:40.121 CC lib/util/xor.o 00:03:40.121 LIB libspdk_vfio_user.a 00:03:40.121 CC lib/util/zipf.o 00:03:40.121 LIB libspdk_util.a 00:03:40.121 CC lib/conf/conf.o 00:03:40.121 CC lib/rdma/common.o 00:03:40.121 CC lib/rdma/rdma_verbs.o 00:03:40.121 CC lib/vmd/vmd.o 00:03:40.121 CC lib/env_dpdk/env.o 00:03:40.121 CC lib/vmd/led.o 00:03:40.121 CC lib/env_dpdk/memory.o 00:03:40.121 CC lib/idxd/idxd.o 00:03:40.121 CC lib/json/json_parse.o 00:03:40.121 LIB libspdk_trace_parser.a 00:03:40.121 CC lib/json/json_util.o 00:03:40.121 CC lib/json/json_write.o 00:03:40.121 CC lib/idxd/idxd_user.o 00:03:40.121 LIB libspdk_conf.a 00:03:40.121 CC lib/idxd/idxd_kernel.o 00:03:40.121 CC lib/env_dpdk/pci.o 00:03:40.380 LIB libspdk_rdma.a 00:03:40.380 CC lib/env_dpdk/init.o 00:03:40.380 CC lib/env_dpdk/threads.o 00:03:40.380 CC lib/env_dpdk/pci_ioat.o 00:03:40.380 LIB libspdk_json.a 00:03:40.380 CC lib/env_dpdk/pci_virtio.o 00:03:40.380 CC lib/env_dpdk/pci_vmd.o 00:03:40.638 CC lib/env_dpdk/pci_idxd.o 00:03:40.638 CC lib/env_dpdk/pci_event.o 00:03:40.638 CC lib/env_dpdk/sigbus_handler.o 00:03:40.638 CC lib/env_dpdk/pci_dpdk.o 00:03:40.638 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:40.638 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:40.896 LIB libspdk_idxd.a 00:03:40.896 CC lib/jsonrpc/jsonrpc_server.o 00:03:40.896 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:40.896 CC lib/jsonrpc/jsonrpc_client.o 00:03:40.896 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:40.896 LIB libspdk_vmd.a 00:03:41.169 LIB libspdk_jsonrpc.a 00:03:41.430 CC lib/rpc/rpc.o 00:03:41.430 LIB libspdk_rpc.a 00:03:41.689 CC lib/sock/sock.o 00:03:41.689 CC lib/sock/sock_rpc.o 00:03:41.689 CC lib/notify/notify.o 00:03:41.689 CC lib/notify/notify_rpc.o 00:03:41.689 CC lib/trace/trace.o 00:03:41.689 CC lib/trace/trace_flags.o 00:03:41.689 CC lib/trace/trace_rpc.o 00:03:41.947 LIB libspdk_notify.a 00:03:41.947 LIB libspdk_trace.a 00:03:41.947 LIB libspdk_env_dpdk.a 00:03:42.205 CC lib/thread/iobuf.o 00:03:42.205 CC lib/thread/thread.o 00:03:42.205 LIB libspdk_sock.a 00:03:42.464 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:42.464 CC lib/nvme/nvme_ctrlr.o 00:03:42.464 CC lib/nvme/nvme_fabric.o 00:03:42.464 CC lib/nvme/nvme_ns_cmd.o 00:03:42.464 CC lib/nvme/nvme_ns.o 00:03:42.464 CC lib/nvme/nvme_pcie_common.o 00:03:42.464 CC lib/nvme/nvme_pcie.o 00:03:42.464 CC lib/nvme/nvme_qpair.o 00:03:42.464 CC lib/nvme/nvme.o 00:03:43.398 CC 
lib/nvme/nvme_quirks.o 00:03:43.398 CC lib/nvme/nvme_transport.o 00:03:43.398 CC lib/nvme/nvme_discovery.o 00:03:43.398 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:43.398 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:43.656 CC lib/nvme/nvme_tcp.o 00:03:43.656 CC lib/nvme/nvme_opal.o 00:03:43.914 CC lib/nvme/nvme_io_msg.o 00:03:43.914 CC lib/nvme/nvme_poll_group.o 00:03:43.914 CC lib/nvme/nvme_zns.o 00:03:43.914 CC lib/nvme/nvme_cuse.o 00:03:44.172 LIB libspdk_thread.a 00:03:44.172 CC lib/nvme/nvme_vfio_user.o 00:03:44.172 CC lib/nvme/nvme_rdma.o 00:03:44.430 CC lib/accel/accel.o 00:03:44.430 CC lib/blob/blobstore.o 00:03:44.430 CC lib/blob/request.o 00:03:44.688 CC lib/blob/zeroes.o 00:03:44.688 CC lib/blob/blob_bs_dev.o 00:03:44.946 CC lib/accel/accel_rpc.o 00:03:44.946 CC lib/init/json_config.o 00:03:44.946 CC lib/init/subsystem.o 00:03:44.946 CC lib/virtio/virtio.o 00:03:44.946 CC lib/accel/accel_sw.o 00:03:45.204 CC lib/virtio/virtio_vhost_user.o 00:03:45.204 CC lib/init/subsystem_rpc.o 00:03:45.204 CC lib/virtio/virtio_vfio_user.o 00:03:45.204 CC lib/virtio/virtio_pci.o 00:03:45.462 CC lib/init/rpc.o 00:03:45.462 LIB libspdk_init.a 00:03:45.720 LIB libspdk_virtio.a 00:03:45.720 LIB libspdk_accel.a 00:03:45.720 CC lib/event/reactor.o 00:03:45.720 CC lib/event/app.o 00:03:45.720 CC lib/event/log_rpc.o 00:03:45.720 CC lib/event/app_rpc.o 00:03:45.720 CC lib/event/scheduler_static.o 00:03:45.720 CC lib/bdev/bdev_rpc.o 00:03:45.720 CC lib/bdev/bdev_zone.o 00:03:45.720 CC lib/bdev/bdev.o 00:03:45.978 CC lib/bdev/part.o 00:03:45.978 CC lib/bdev/scsi_nvme.o 00:03:45.978 LIB libspdk_nvme.a 00:03:46.237 LIB libspdk_event.a 00:03:48.769 LIB libspdk_blob.a 00:03:48.769 CC lib/blobfs/tree.o 00:03:48.769 CC lib/blobfs/blobfs.o 00:03:48.769 CC lib/lvol/lvol.o 00:03:49.704 LIB libspdk_bdev.a 00:03:49.704 CC lib/scsi/dev.o 00:03:49.704 CC lib/scsi/port.o 00:03:49.704 CC lib/scsi/lun.o 00:03:49.704 CC lib/ftl/ftl_core.o 00:03:49.704 CC lib/scsi/scsi.o 00:03:49.704 CC lib/ublk/ublk.o 00:03:49.704 CC lib/nvmf/ctrlr.o 00:03:49.704 CC lib/nbd/nbd.o 00:03:49.962 CC lib/scsi/scsi_bdev.o 00:03:49.962 LIB libspdk_blobfs.a 00:03:49.962 CC lib/ftl/ftl_init.o 00:03:49.962 CC lib/ftl/ftl_layout.o 00:03:49.962 CC lib/ftl/ftl_debug.o 00:03:49.962 LIB libspdk_lvol.a 00:03:49.962 CC lib/nbd/nbd_rpc.o 00:03:50.220 CC lib/scsi/scsi_pr.o 00:03:50.220 CC lib/scsi/scsi_rpc.o 00:03:50.220 CC lib/ftl/ftl_io.o 00:03:50.220 CC lib/ublk/ublk_rpc.o 00:03:50.220 CC lib/ftl/ftl_sb.o 00:03:50.220 LIB libspdk_nbd.a 00:03:50.478 CC lib/scsi/task.o 00:03:50.478 CC lib/ftl/ftl_l2p.o 00:03:50.478 CC lib/ftl/ftl_l2p_flat.o 00:03:50.478 CC lib/ftl/ftl_nv_cache.o 00:03:50.478 CC lib/ftl/ftl_band.o 00:03:50.478 CC lib/nvmf/ctrlr_discovery.o 00:03:50.478 CC lib/nvmf/ctrlr_bdev.o 00:03:50.478 LIB libspdk_ublk.a 00:03:50.478 CC lib/ftl/ftl_band_ops.o 00:03:50.478 CC lib/ftl/ftl_writer.o 00:03:50.735 LIB libspdk_scsi.a 00:03:50.735 CC lib/ftl/ftl_rq.o 00:03:50.735 CC lib/ftl/ftl_reloc.o 00:03:50.735 CC lib/nvmf/subsystem.o 00:03:50.735 CC lib/ftl/ftl_l2p_cache.o 00:03:50.993 CC lib/ftl/ftl_p2l.o 00:03:50.993 CC lib/ftl/mngt/ftl_mngt.o 00:03:50.993 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:50.993 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.250 CC lib/nvmf/nvmf.o 00:03:51.250 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.250 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.250 CC lib/iscsi/conn.o 00:03:51.508 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.508 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.508 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.508 CC lib/iscsi/init_grp.o 
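The alternating `CC .../*.o` and `LIB libspdk_*.a` records give the shape of this unittest build: each component's sources are compiled to objects, then archived into a static library (configure was run with --without-shared). A generic illustration of one such CC/LIB pair; the real flags come from SPDK's Makefiles and are only partly implied here (ASAN/UBSAN and coverage were enabled at configure time, the include path is assumed):

  # e.g. the "CC lib/log/*.o" records followed by "LIB libspdk_log.a" correspond roughly to:
  cc -c -g -fsanitize=address,undefined --coverage -Iinclude -o log.o lib/log/log.c
  ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o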
00:03:51.766 CC lib/iscsi/iscsi.o 00:03:51.766 CC lib/iscsi/md5.o 00:03:51.766 CC lib/iscsi/param.o 00:03:51.766 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.766 CC lib/vhost/vhost.o 00:03:51.766 CC lib/vhost/vhost_rpc.o 00:03:52.025 CC lib/vhost/vhost_scsi.o 00:03:52.025 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:52.025 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.284 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.284 CC lib/vhost/vhost_blk.o 00:03:52.284 CC lib/iscsi/portal_grp.o 00:03:52.284 CC lib/nvmf/nvmf_rpc.o 00:03:52.284 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.284 CC lib/vhost/rte_vhost_user.o 00:03:52.543 CC lib/iscsi/tgt_node.o 00:03:52.543 CC lib/iscsi/iscsi_subsystem.o 00:03:52.543 CC lib/iscsi/iscsi_rpc.o 00:03:52.543 CC lib/iscsi/task.o 00:03:52.543 CC lib/ftl/utils/ftl_conf.o 00:03:52.802 CC lib/nvmf/transport.o 00:03:52.802 CC lib/ftl/utils/ftl_md.o 00:03:53.061 CC lib/nvmf/tcp.o 00:03:53.061 CC lib/ftl/utils/ftl_mempool.o 00:03:53.061 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.061 CC lib/ftl/utils/ftl_property.o 00:03:53.319 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.319 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.319 CC lib/nvmf/rdma.o 00:03:53.319 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.578 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.578 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.578 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.578 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.578 LIB libspdk_iscsi.a 00:03:53.578 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.578 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.578 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.578 CC lib/ftl/base/ftl_base_dev.o 00:03:53.837 LIB libspdk_vhost.a 00:03:53.837 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.837 CC lib/ftl/ftl_trace.o 00:03:54.096 LIB libspdk_ftl.a 00:03:56.002 LIB libspdk_nvmf.a 00:03:56.261 CC module/env_dpdk/env_dpdk_rpc.o 00:03:56.261 CC module/sock/posix/posix.o 00:03:56.261 CC module/scheduler/gscheduler/gscheduler.o 00:03:56.261 CC module/accel/dsa/accel_dsa.o 00:03:56.261 CC module/accel/iaa/accel_iaa.o 00:03:56.261 CC module/blob/bdev/blob_bdev.o 00:03:56.261 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:56.261 CC module/accel/error/accel_error.o 00:03:56.261 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:56.261 CC module/accel/ioat/accel_ioat.o 00:03:56.520 LIB libspdk_env_dpdk_rpc.a 00:03:56.520 LIB libspdk_scheduler_gscheduler.a 00:03:56.520 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.520 CC module/accel/ioat/accel_ioat_rpc.o 00:03:56.520 CC module/accel/iaa/accel_iaa_rpc.o 00:03:56.520 CC module/accel/error/accel_error_rpc.o 00:03:56.520 LIB libspdk_scheduler_dynamic.a 00:03:56.520 CC module/accel/dsa/accel_dsa_rpc.o 00:03:56.520 LIB libspdk_accel_ioat.a 00:03:56.779 LIB libspdk_blob_bdev.a 00:03:56.779 LIB libspdk_accel_iaa.a 00:03:56.779 LIB libspdk_accel_error.a 00:03:56.779 LIB libspdk_accel_dsa.a 00:03:56.779 CC module/bdev/error/vbdev_error.o 00:03:56.779 CC module/blobfs/bdev/blobfs_bdev.o 00:03:56.779 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.779 CC module/bdev/delay/vbdev_delay.o 00:03:56.779 CC module/bdev/malloc/bdev_malloc.o 00:03:56.779 CC module/bdev/nvme/bdev_nvme.o 00:03:56.779 CC module/bdev/lvol/vbdev_lvol.o 00:03:56.779 CC module/bdev/gpt/gpt.o 00:03:56.779 CC module/bdev/null/bdev_null.o 00:03:57.038 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:57.038 CC module/bdev/gpt/vbdev_gpt.o 00:03:57.296 CC module/bdev/error/vbdev_error_rpc.o 00:03:57.296 CC module/bdev/null/bdev_null_rpc.o 00:03:57.296 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:03:57.296 LIB libspdk_blobfs_bdev.a 00:03:57.296 LIB libspdk_sock_posix.a 00:03:57.297 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:57.297 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:57.297 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:57.297 CC module/bdev/nvme/nvme_rpc.o 00:03:57.297 LIB libspdk_bdev_error.a 00:03:57.297 LIB libspdk_bdev_null.a 00:03:57.297 LIB libspdk_bdev_passthru.a 00:03:57.297 LIB libspdk_bdev_gpt.a 00:03:57.555 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:57.555 CC module/bdev/nvme/bdev_mdns_client.o 00:03:57.555 CC module/bdev/nvme/vbdev_opal.o 00:03:57.555 LIB libspdk_bdev_malloc.a 00:03:57.555 CC module/bdev/raid/bdev_raid.o 00:03:57.555 LIB libspdk_bdev_delay.a 00:03:57.555 CC module/bdev/split/vbdev_split.o 00:03:57.555 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:57.555 CC module/bdev/aio/bdev_aio.o 00:03:57.555 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:57.555 CC module/bdev/aio/bdev_aio_rpc.o 00:03:57.814 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.814 CC module/bdev/split/vbdev_split_rpc.o 00:03:57.814 LIB libspdk_bdev_lvol.a 00:03:57.814 CC module/bdev/ftl/bdev_ftl.o 00:03:57.814 CC module/bdev/iscsi/bdev_iscsi.o 00:03:58.072 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:58.072 LIB libspdk_bdev_split.a 00:03:58.072 LIB libspdk_bdev_zone_block.a 00:03:58.072 LIB libspdk_bdev_aio.a 00:03:58.072 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:58.072 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:58.072 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:58.072 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:58.072 CC module/bdev/raid/bdev_raid_rpc.o 00:03:58.331 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:58.331 CC module/bdev/raid/bdev_raid_sb.o 00:03:58.331 CC module/bdev/raid/raid0.o 00:03:58.331 LIB libspdk_bdev_ftl.a 00:03:58.331 CC module/bdev/raid/raid1.o 00:03:58.331 CC module/bdev/raid/concat.o 00:03:58.331 CC module/bdev/raid/raid5f.o 00:03:58.331 LIB libspdk_bdev_iscsi.a 00:03:58.591 LIB libspdk_bdev_virtio.a 00:03:59.158 LIB libspdk_bdev_raid.a 00:03:59.729 LIB libspdk_bdev_nvme.a 00:04:00.015 CC module/event/subsystems/sock/sock.o 00:04:00.015 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.015 CC module/event/subsystems/iobuf/iobuf.o 00:04:00.015 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.015 CC module/event/subsystems/vmd/vmd.o 00:04:00.015 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.015 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.318 LIB libspdk_event_sock.a 00:04:00.318 LIB libspdk_event_vhost_blk.a 00:04:00.318 LIB libspdk_event_scheduler.a 00:04:00.318 LIB libspdk_event_iobuf.a 00:04:00.318 LIB libspdk_event_vmd.a 00:04:00.576 CC module/event/subsystems/accel/accel.o 00:04:00.576 LIB libspdk_event_accel.a 00:04:00.833 CC module/event/subsystems/bdev/bdev.o 00:04:01.090 LIB libspdk_event_bdev.a 00:04:01.090 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.090 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.090 CC module/event/subsystems/scsi/scsi.o 00:04:01.090 CC module/event/subsystems/nbd/nbd.o 00:04:01.090 CC module/event/subsystems/ublk/ublk.o 00:04:01.397 LIB libspdk_event_nbd.a 00:04:01.397 LIB libspdk_event_ublk.a 00:04:01.397 LIB libspdk_event_scsi.a 00:04:01.397 LIB libspdk_event_nvmf.a 00:04:01.397 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.654 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.654 LIB libspdk_event_iscsi.a 00:04:01.654 LIB libspdk_event_vhost_scsi.a 00:04:01.912 CXX app/trace/trace.o 00:04:01.912 CC 
app/spdk_lspci/spdk_lspci.o 00:04:01.912 CC app/spdk_nvme_perf/perf.o 00:04:01.912 CC app/trace_record/trace_record.o 00:04:01.912 CC app/nvmf_tgt/nvmf_main.o 00:04:01.912 CC examples/accel/perf/accel_perf.o 00:04:01.912 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.912 CC app/spdk_tgt/spdk_tgt.o 00:04:01.912 CC examples/bdev/hello_world/hello_bdev.o 00:04:01.912 CC test/accel/dif/dif.o 00:04:02.170 LINK spdk_lspci 00:04:02.170 LINK nvmf_tgt 00:04:02.170 LINK iscsi_tgt 00:04:02.170 LINK spdk_tgt 00:04:02.170 LINK spdk_trace_record 00:04:02.170 LINK hello_bdev 00:04:02.427 LINK spdk_trace 00:04:02.427 LINK dif 00:04:02.427 LINK accel_perf 00:04:02.684 CC app/spdk_nvme_identify/identify.o 00:04:02.684 CC examples/blob/hello_world/hello_blob.o 00:04:02.943 CC examples/blob/cli/blobcli.o 00:04:02.943 LINK spdk_nvme_perf 00:04:03.203 LINK hello_blob 00:04:03.203 CC examples/bdev/bdevperf/bdevperf.o 00:04:03.461 LINK blobcli 00:04:04.029 CC app/spdk_nvme_discover/discovery_aer.o 00:04:04.029 LINK spdk_nvme_identify 00:04:04.029 LINK spdk_nvme_discover 00:04:04.029 CC app/spdk_top/spdk_top.o 00:04:04.029 CC examples/ioat/perf/perf.o 00:04:04.288 CC app/vhost/vhost.o 00:04:04.288 LINK bdevperf 00:04:04.288 CC test/app/bdev_svc/bdev_svc.o 00:04:04.545 CC app/spdk_dd/spdk_dd.o 00:04:04.546 LINK ioat_perf 00:04:04.546 LINK vhost 00:04:04.546 LINK bdev_svc 00:04:04.804 CC examples/nvme/hello_world/hello_world.o 00:04:04.804 LINK spdk_dd 00:04:05.063 CC examples/nvme/reconnect/reconnect.o 00:04:05.063 CC examples/ioat/verify/verify.o 00:04:05.063 CC examples/sock/hello_world/hello_sock.o 00:04:05.063 LINK hello_world 00:04:05.321 LINK verify 00:04:05.321 LINK spdk_top 00:04:05.321 LINK hello_sock 00:04:05.321 LINK reconnect 00:04:05.321 CC examples/vmd/lsvmd/lsvmd.o 00:04:05.580 LINK lsvmd 00:04:05.838 CC examples/vmd/led/led.o 00:04:06.097 CC examples/nvmf/nvmf/nvmf.o 00:04:06.097 LINK led 00:04:06.097 CC examples/util/zipf/zipf.o 00:04:06.097 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:06.097 CC app/fio/nvme/fio_plugin.o 00:04:06.355 LINK zipf 00:04:06.355 CC test/app/histogram_perf/histogram_perf.o 00:04:06.355 CC app/fio/bdev/fio_plugin.o 00:04:06.355 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:06.355 LINK nvmf 00:04:06.613 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:06.613 LINK histogram_perf 00:04:06.613 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:06.871 LINK nvme_manage 00:04:06.871 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:06.871 CC test/bdev/bdevio/bdevio.o 00:04:06.871 CC examples/thread/thread/thread_ex.o 00:04:06.871 LINK nvme_fuzz 00:04:07.129 LINK spdk_nvme 00:04:07.129 LINK spdk_bdev 00:04:07.129 CC examples/idxd/perf/perf.o 00:04:07.129 LINK thread 00:04:07.388 LINK vhost_fuzz 00:04:07.388 LINK bdevio 00:04:07.646 LINK idxd_perf 00:04:07.646 CC examples/nvme/arbitration/arbitration.o 00:04:07.904 CC examples/nvme/hotplug/hotplug.o 00:04:07.904 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:08.162 LINK cmb_copy 00:04:08.162 CC examples/nvme/abort/abort.o 00:04:08.162 LINK arbitration 00:04:08.162 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:08.162 LINK hotplug 00:04:08.419 CC test/blobfs/mkfs/mkfs.o 00:04:08.420 LINK pmr_persistence 00:04:08.678 LINK abort 00:04:08.678 TEST_HEADER include/spdk/accel.h 00:04:08.935 TEST_HEADER include/spdk/accel_module.h 00:04:08.935 TEST_HEADER include/spdk/assert.h 00:04:08.935 TEST_HEADER include/spdk/barrier.h 00:04:08.935 TEST_HEADER include/spdk/base64.h 00:04:08.935 TEST_HEADER include/spdk/bdev.h 00:04:08.935 TEST_HEADER 
include/spdk/bdev_module.h 00:04:08.935 TEST_HEADER include/spdk/bdev_zone.h 00:04:08.935 TEST_HEADER include/spdk/bit_array.h 00:04:08.935 TEST_HEADER include/spdk/bit_pool.h 00:04:08.935 TEST_HEADER include/spdk/blob.h 00:04:08.935 TEST_HEADER include/spdk/blob_bdev.h 00:04:08.935 TEST_HEADER include/spdk/blobfs.h 00:04:08.935 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:08.935 TEST_HEADER include/spdk/conf.h 00:04:08.935 TEST_HEADER include/spdk/config.h 00:04:08.935 TEST_HEADER include/spdk/cpuset.h 00:04:08.935 TEST_HEADER include/spdk/crc16.h 00:04:08.935 TEST_HEADER include/spdk/crc32.h 00:04:08.935 TEST_HEADER include/spdk/crc64.h 00:04:08.935 TEST_HEADER include/spdk/dif.h 00:04:08.935 TEST_HEADER include/spdk/dma.h 00:04:08.935 TEST_HEADER include/spdk/endian.h 00:04:08.935 TEST_HEADER include/spdk/env.h 00:04:08.935 TEST_HEADER include/spdk/env_dpdk.h 00:04:08.935 TEST_HEADER include/spdk/event.h 00:04:08.935 TEST_HEADER include/spdk/fd.h 00:04:08.935 TEST_HEADER include/spdk/fd_group.h 00:04:08.935 TEST_HEADER include/spdk/file.h 00:04:08.935 TEST_HEADER include/spdk/ftl.h 00:04:08.935 TEST_HEADER include/spdk/gpt_spec.h 00:04:08.935 TEST_HEADER include/spdk/hexlify.h 00:04:08.935 TEST_HEADER include/spdk/histogram_data.h 00:04:08.935 TEST_HEADER include/spdk/idxd.h 00:04:08.935 TEST_HEADER include/spdk/idxd_spec.h 00:04:08.935 TEST_HEADER include/spdk/init.h 00:04:08.935 TEST_HEADER include/spdk/ioat.h 00:04:08.935 TEST_HEADER include/spdk/ioat_spec.h 00:04:08.935 TEST_HEADER include/spdk/iscsi_spec.h 00:04:08.935 TEST_HEADER include/spdk/json.h 00:04:08.935 TEST_HEADER include/spdk/jsonrpc.h 00:04:08.935 TEST_HEADER include/spdk/likely.h 00:04:08.935 TEST_HEADER include/spdk/log.h 00:04:08.935 TEST_HEADER include/spdk/lvol.h 00:04:08.935 TEST_HEADER include/spdk/memory.h 00:04:08.935 TEST_HEADER include/spdk/mmio.h 00:04:08.935 TEST_HEADER include/spdk/nbd.h 00:04:08.935 TEST_HEADER include/spdk/notify.h 00:04:08.935 TEST_HEADER include/spdk/nvme.h 00:04:08.935 TEST_HEADER include/spdk/nvme_intel.h 00:04:08.935 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:08.935 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:08.935 TEST_HEADER include/spdk/nvme_spec.h 00:04:08.935 TEST_HEADER include/spdk/nvme_zns.h 00:04:08.935 TEST_HEADER include/spdk/nvmf.h 00:04:08.935 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:08.935 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:08.935 TEST_HEADER include/spdk/nvmf_spec.h 00:04:08.935 TEST_HEADER include/spdk/nvmf_transport.h 00:04:08.935 TEST_HEADER include/spdk/opal.h 00:04:08.935 TEST_HEADER include/spdk/opal_spec.h 00:04:08.935 TEST_HEADER include/spdk/pci_ids.h 00:04:08.935 TEST_HEADER include/spdk/pipe.h 00:04:08.935 TEST_HEADER include/spdk/queue.h 00:04:08.935 TEST_HEADER include/spdk/reduce.h 00:04:08.935 TEST_HEADER include/spdk/rpc.h 00:04:08.935 TEST_HEADER include/spdk/scheduler.h 00:04:08.935 TEST_HEADER include/spdk/scsi.h 00:04:08.935 TEST_HEADER include/spdk/scsi_spec.h 00:04:08.935 TEST_HEADER include/spdk/sock.h 00:04:08.935 TEST_HEADER include/spdk/stdinc.h 00:04:08.935 TEST_HEADER include/spdk/string.h 00:04:08.935 TEST_HEADER include/spdk/thread.h 00:04:08.935 TEST_HEADER include/spdk/trace.h 00:04:08.935 TEST_HEADER include/spdk/trace_parser.h 00:04:08.935 TEST_HEADER include/spdk/tree.h 00:04:08.935 TEST_HEADER include/spdk/ublk.h 00:04:08.935 TEST_HEADER include/spdk/util.h 00:04:08.935 TEST_HEADER include/spdk/uuid.h 00:04:08.935 TEST_HEADER include/spdk/version.h 00:04:08.935 TEST_HEADER include/spdk/vfio_user_pci.h 
00:04:08.935 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:08.935 TEST_HEADER include/spdk/vhost.h 00:04:08.935 TEST_HEADER include/spdk/vmd.h 00:04:08.935 TEST_HEADER include/spdk/xor.h 00:04:08.935 TEST_HEADER include/spdk/zipf.h 00:04:08.935 CXX test/cpp_headers/accel.o 00:04:08.935 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:08.935 LINK mkfs 00:04:09.193 CC test/dma/test_dma/test_dma.o 00:04:09.193 CC test/app/jsoncat/jsoncat.o 00:04:09.193 CC test/env/mem_callbacks/mem_callbacks.o 00:04:09.193 CC test/env/vtophys/vtophys.o 00:04:09.193 CXX test/cpp_headers/accel_module.o 00:04:09.193 LINK interrupt_tgt 00:04:09.193 LINK iscsi_fuzz 00:04:09.193 LINK jsoncat 00:04:09.451 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:09.451 LINK vtophys 00:04:09.451 CXX test/cpp_headers/assert.o 00:04:09.451 CC test/app/stub/stub.o 00:04:09.451 LINK env_dpdk_post_init 00:04:09.709 LINK test_dma 00:04:09.709 CXX test/cpp_headers/barrier.o 00:04:09.709 LINK stub 00:04:09.709 CXX test/cpp_headers/base64.o 00:04:09.709 CXX test/cpp_headers/bdev.o 00:04:09.968 LINK mem_callbacks 00:04:09.968 CC test/env/memory/memory_ut.o 00:04:09.968 CC test/env/pci/pci_ut.o 00:04:09.968 CXX test/cpp_headers/bdev_module.o 00:04:09.968 CXX test/cpp_headers/bdev_zone.o 00:04:10.226 CXX test/cpp_headers/bit_array.o 00:04:10.226 CXX test/cpp_headers/bit_pool.o 00:04:10.484 CC test/event/event_perf/event_perf.o 00:04:10.484 CC test/rpc_client/rpc_client_test.o 00:04:10.484 CC test/lvol/esnap/esnap.o 00:04:10.484 CC test/nvme/aer/aer.o 00:04:10.484 CXX test/cpp_headers/blob.o 00:04:10.484 CXX test/cpp_headers/blob_bdev.o 00:04:10.484 LINK event_perf 00:04:10.484 LINK pci_ut 00:04:10.742 LINK rpc_client_test 00:04:10.742 CC test/thread/poller_perf/poller_perf.o 00:04:10.742 CXX test/cpp_headers/blobfs.o 00:04:10.742 CXX test/cpp_headers/blobfs_bdev.o 00:04:10.742 LINK aer 00:04:11.000 LINK poller_perf 00:04:11.000 CXX test/cpp_headers/conf.o 00:04:11.000 CXX test/cpp_headers/config.o 00:04:11.000 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:11.258 LINK memory_ut 00:04:11.258 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:11.258 CXX test/cpp_headers/cpuset.o 00:04:11.258 CC test/event/reactor/reactor.o 00:04:11.258 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:11.258 LINK reactor 00:04:11.258 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:11.258 LINK histogram_ut 00:04:11.258 CXX test/cpp_headers/crc16.o 00:04:11.258 CXX test/cpp_headers/crc32.o 00:04:11.515 CXX test/cpp_headers/crc64.o 00:04:11.515 CC test/thread/lock/spdk_lock.o 00:04:11.515 CC test/event/reactor_perf/reactor_perf.o 00:04:11.515 CC test/event/app_repeat/app_repeat.o 00:04:11.772 CC test/nvme/reset/reset.o 00:04:11.772 CXX test/cpp_headers/dif.o 00:04:11.772 LINK reactor_perf 00:04:11.772 LINK app_repeat 00:04:12.029 CXX test/cpp_headers/dma.o 00:04:12.029 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:12.029 LINK reset 00:04:12.029 CXX test/cpp_headers/endian.o 00:04:12.287 CXX test/cpp_headers/env.o 00:04:12.287 CXX test/cpp_headers/env_dpdk.o 00:04:12.545 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:12.545 CXX test/cpp_headers/event.o 00:04:12.545 CC test/event/scheduler/scheduler.o 00:04:12.803 LINK blob_bdev_ut 00:04:12.803 CXX test/cpp_headers/fd.o 00:04:12.803 CC test/nvme/sgl/sgl.o 00:04:13.060 LINK scheduler 00:04:13.060 CXX test/cpp_headers/fd_group.o 00:04:13.060 CXX test/cpp_headers/file.o 00:04:13.060 CXX test/cpp_headers/ftl.o 00:04:13.060 CXX test/cpp_headers/gpt_spec.o 00:04:13.331 LINK sgl 00:04:13.331 CXX 
test/cpp_headers/hexlify.o 00:04:13.331 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:13.684 CXX test/cpp_headers/histogram_data.o 00:04:13.943 LINK spdk_lock 00:04:13.943 CXX test/cpp_headers/idxd.o 00:04:13.943 LINK tree_ut 00:04:13.943 CC test/nvme/e2edp/nvme_dp.o 00:04:14.202 CXX test/cpp_headers/idxd_spec.o 00:04:14.202 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:14.202 LINK accel_ut 00:04:14.461 CXX test/cpp_headers/init.o 00:04:14.719 CXX test/cpp_headers/ioat.o 00:04:14.719 CXX test/cpp_headers/iscsi_spec.o 00:04:14.719 CXX test/cpp_headers/ioat_spec.o 00:04:14.719 LINK nvme_dp 00:04:14.719 CC test/nvme/overhead/overhead.o 00:04:14.979 CXX test/cpp_headers/json.o 00:04:14.979 CC test/nvme/err_injection/err_injection.o 00:04:14.979 CC test/nvme/startup/startup.o 00:04:14.980 LINK overhead 00:04:14.980 CXX test/cpp_headers/jsonrpc.o 00:04:15.239 LINK startup 00:04:15.239 LINK err_injection 00:04:15.240 CXX test/cpp_headers/likely.o 00:04:15.498 CXX test/cpp_headers/log.o 00:04:15.498 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:15.756 CXX test/cpp_headers/lvol.o 00:04:15.756 LINK part_ut 00:04:15.756 LINK scsi_nvme_ut 00:04:15.756 LINK blobfs_async_ut 00:04:15.756 CXX test/cpp_headers/memory.o 00:04:15.756 CC test/nvme/reserve/reserve.o 00:04:16.014 CC test/nvme/simple_copy/simple_copy.o 00:04:16.014 CXX test/cpp_headers/mmio.o 00:04:16.014 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:16.014 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:16.014 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:16.014 LINK reserve 00:04:16.273 CXX test/cpp_headers/nbd.o 00:04:16.273 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:16.273 CXX test/cpp_headers/notify.o 00:04:16.273 LINK simple_copy 00:04:16.532 CXX test/cpp_headers/nvme.o 00:04:16.532 LINK gpt_ut 00:04:16.790 CXX test/cpp_headers/nvme_intel.o 00:04:16.790 LINK dma_ut 00:04:16.790 CXX test/cpp_headers/nvme_ocssd.o 00:04:17.048 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:17.048 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:17.048 CC test/nvme/connect_stress/connect_stress.o 00:04:17.048 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:17.048 LINK esnap 00:04:17.048 CC test/unit/lib/event/app.c/app_ut.o 00:04:17.048 LINK blobfs_bdev_ut 00:04:17.307 LINK connect_stress 00:04:17.307 CXX test/cpp_headers/nvme_spec.o 00:04:17.307 CXX test/cpp_headers/nvme_zns.o 00:04:17.307 LINK vbdev_lvol_ut 00:04:17.565 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:17.565 CXX test/cpp_headers/nvmf.o 00:04:17.565 LINK blobfs_sync_ut 00:04:17.565 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:17.565 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:17.824 CXX test/cpp_headers/nvmf_cmd.o 00:04:17.824 LINK app_ut 00:04:18.083 CC test/nvme/boot_partition/boot_partition.o 00:04:18.083 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:18.083 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.083 LINK bdev_zone_ut 00:04:18.083 LINK bdev_ut 00:04:18.083 LINK boot_partition 00:04:18.341 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:18.341 CXX test/cpp_headers/nvmf_spec.o 00:04:18.341 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:18.341 CXX test/cpp_headers/nvmf_transport.o 00:04:18.600 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:18.600 LINK ioat_ut 00:04:18.600 LINK reactor_ut 00:04:18.600 CXX test/cpp_headers/opal.o 00:04:18.600 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:18.859 LINK bdev_raid_sb_ut 00:04:18.859 CC 
test/nvme/compliance/nvme_compliance.o 00:04:18.859 CXX test/cpp_headers/opal_spec.o 00:04:18.859 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:18.859 CXX test/cpp_headers/pci_ids.o 00:04:19.118 LINK init_grp_ut 00:04:19.118 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.118 CXX test/cpp_headers/pipe.o 00:04:19.118 LINK vbdev_zone_block_ut 00:04:19.118 LINK nvme_compliance 00:04:19.376 CXX test/cpp_headers/queue.o 00:04:19.376 CXX test/cpp_headers/reduce.o 00:04:19.376 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:19.376 LINK fused_ordering 00:04:19.376 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:19.635 CXX test/cpp_headers/rpc.o 00:04:19.635 LINK concat_ut 00:04:19.635 CXX test/cpp_headers/scheduler.o 00:04:19.893 LINK conn_ut 00:04:19.893 CXX test/cpp_headers/scsi.o 00:04:19.893 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.893 LINK raid1_ut 00:04:19.893 CC test/nvme/fdp/fdp.o 00:04:19.893 CXX test/cpp_headers/scsi_spec.o 00:04:20.152 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:20.152 LINK bdev_raid_ut 00:04:20.152 CC test/nvme/cuse/cuse.o 00:04:20.152 CXX test/cpp_headers/sock.o 00:04:20.152 LINK doorbell_aers 00:04:20.152 CXX test/cpp_headers/stdinc.o 00:04:20.410 CXX test/cpp_headers/string.o 00:04:20.410 LINK fdp 00:04:20.410 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:20.410 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:20.410 CXX test/cpp_headers/thread.o 00:04:20.668 CXX test/cpp_headers/trace.o 00:04:20.926 LINK raid5f_ut 00:04:20.926 CXX test/cpp_headers/trace_parser.o 00:04:20.926 CXX test/cpp_headers/tree.o 00:04:20.926 CXX test/cpp_headers/ublk.o 00:04:20.926 LINK param_ut 00:04:21.184 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:21.184 LINK portal_grp_ut 00:04:21.184 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:21.184 CXX test/cpp_headers/util.o 00:04:21.184 CXX test/cpp_headers/uuid.o 00:04:21.184 CXX test/cpp_headers/version.o 00:04:21.184 CXX test/cpp_headers/vfio_user_pci.o 00:04:21.441 CXX test/cpp_headers/vfio_user_spec.o 00:04:21.441 CXX test/cpp_headers/vhost.o 00:04:21.441 CXX test/cpp_headers/vmd.o 00:04:21.441 CXX test/cpp_headers/xor.o 00:04:21.441 LINK cuse 00:04:21.441 LINK bdev_ut 00:04:21.699 CXX test/cpp_headers/zipf.o 00:04:21.699 LINK blob_ut 00:04:21.699 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:21.699 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:21.699 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:21.699 CC test/unit/lib/log/log.c/log_ut.o 00:04:21.957 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:21.957 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:21.957 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:22.215 LINK tgt_node_ut 00:04:22.215 LINK jsonrpc_server_ut 00:04:22.215 LINK log_ut 00:04:22.215 LINK notify_ut 00:04:22.473 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:22.473 LINK json_util_ut 00:04:22.473 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:22.473 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:22.731 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:22.731 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:22.989 LINK iscsi_ut 00:04:23.247 LINK dev_ut 00:04:23.247 LINK json_write_ut 00:04:23.505 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:23.505 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:23.505 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:23.762 LINK nvme_ut 00:04:24.020 LINK base64_ut 00:04:24.020 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:24.020 LINK lvol_ut 00:04:24.278 CC 
test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:24.278 LINK lun_ut 00:04:24.536 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:24.536 LINK sock_ut 00:04:24.536 LINK bit_array_ut 00:04:24.536 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:24.794 LINK json_parse_ut 00:04:24.794 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:24.794 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:24.794 LINK scsi_ut 00:04:25.052 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:25.052 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:25.052 LINK cpuset_ut 00:04:25.052 LINK pci_event_ut 00:04:25.310 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:25.310 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:25.569 LINK crc16_ut 00:04:25.569 LINK crc32_ieee_ut 00:04:25.569 LINK subsystem_ut 00:04:25.569 LINK posix_ut 00:04:25.883 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:25.883 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:25.883 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:25.883 LINK crc32c_ut 00:04:25.883 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:26.158 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:26.158 LINK thread_ut 00:04:26.158 LINK scsi_bdev_ut 00:04:26.416 LINK crc64_ut 00:04:26.416 LINK nvme_ctrlr_ut 00:04:26.416 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:26.675 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:26.675 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:26.675 LINK bdev_nvme_ut 00:04:26.675 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:27.242 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:27.242 LINK scsi_pr_ut 00:04:27.242 LINK ctrlr_bdev_ut 00:04:27.242 LINK tcp_ut 00:04:27.500 LINK iobuf_ut 00:04:27.500 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:27.500 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:27.500 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:27.759 CC test/unit/lib/util/math.c/math_ut.o 00:04:27.759 LINK ctrlr_ut 00:04:27.759 LINK math_ut 00:04:28.017 LINK rpc_ut 00:04:28.017 LINK iov_ut 00:04:28.017 LINK dif_ut 00:04:28.017 LINK ctrlr_discovery_ut 00:04:28.017 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:28.017 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:28.276 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:28.276 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:28.276 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:28.276 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:28.276 LINK subsystem_ut 00:04:28.276 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:28.534 LINK nvme_ctrlr_cmd_ut 00:04:28.534 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:28.534 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:28.534 LINK pipe_ut 00:04:28.793 LINK idxd_user_ut 00:04:28.793 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:28.793 LINK nvmf_ut 00:04:28.793 CC test/unit/lib/util/string.c/string_ut.o 00:04:28.793 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:29.051 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:29.051 LINK idxd_ut 00:04:29.310 LINK nvme_ns_ut 00:04:29.310 LINK string_ut 00:04:29.310 LINK xor_ut 00:04:29.310 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:29.569 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:29.569 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:29.569 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:29.827 LINK nvme_poll_group_ut 00:04:29.827 LINK ftl_l2p_ut 00:04:30.086 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:30.086 LINK common_ut 00:04:30.086 CC 
test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:30.086 LINK nvme_quirks_ut 00:04:30.086 LINK nvme_qpair_ut 00:04:30.086 LINK nvme_ns_ocssd_cmd_ut 00:04:30.344 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:30.344 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:30.344 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:30.344 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:30.603 LINK nvme_ns_cmd_ut 00:04:30.603 LINK nvme_pcie_ut 00:04:30.603 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:30.861 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:30.861 LINK ftl_io_ut 00:04:31.118 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:31.376 LINK ftl_bitmap_ut 00:04:31.376 LINK nvme_io_msg_ut 00:04:31.376 LINK ftl_band_ut 00:04:31.635 LINK nvme_transport_ut 00:04:31.635 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:31.635 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:31.635 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:31.894 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:31.894 LINK nvme_fabric_ut 00:04:31.894 LINK nvme_opal_ut 00:04:31.894 LINK vhost_ut 00:04:31.894 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:31.894 LINK ftl_mempool_ut 00:04:32.152 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:32.152 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:32.152 LINK nvme_pcie_common_ut 00:04:32.410 LINK ftl_mngt_ut 00:04:32.977 LINK nvme_tcp_ut 00:04:33.235 LINK rdma_ut 00:04:33.235 LINK ftl_layout_upgrade_ut 00:04:33.235 LINK ftl_sb_ut 00:04:33.802 LINK nvme_cuse_ut 00:04:34.370 LINK nvme_rdma_ut 00:04:35.304 LINK transport_ut 00:04:35.564 00:04:35.564 real 1m41.843s 00:04:35.564 user 8m14.674s 00:04:35.564 sys 1m40.671s 00:04:35.564 11:14:53 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:35.564 11:14:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:35.564 ************************************ 00:04:35.564 END TEST unittest_build 00:04:35.564 ************************************ 00:04:35.823 11:14:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.823 11:14:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.823 11:14:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.823 11:14:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.823 11:14:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.823 11:14:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.823 11:14:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.823 11:14:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.823 11:14:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.823 11:14:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.823 11:14:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.823 11:14:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.823 11:14:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.823 11:14:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.823 11:14:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.823 11:14:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.823 11:14:53 -- scripts/common.sh@344 -- # : 1 00:04:35.823 11:14:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.823 11:14:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.823 11:14:53 -- scripts/common.sh@364 -- # decimal 1 00:04:35.823 11:14:53 -- scripts/common.sh@352 -- # local d=1 00:04:35.823 11:14:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.823 11:14:53 -- scripts/common.sh@354 -- # echo 1 00:04:35.823 11:14:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.823 11:14:53 -- scripts/common.sh@365 -- # decimal 2 00:04:35.823 11:14:53 -- scripts/common.sh@352 -- # local d=2 00:04:35.823 11:14:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.823 11:14:53 -- scripts/common.sh@354 -- # echo 2 00:04:35.823 11:14:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.824 11:14:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.824 11:14:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.824 11:14:53 -- scripts/common.sh@367 -- # return 0 00:04:35.824 11:14:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.824 11:14:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.824 --rc genhtml_branch_coverage=1 00:04:35.824 --rc genhtml_function_coverage=1 00:04:35.824 --rc genhtml_legend=1 00:04:35.824 --rc geninfo_all_blocks=1 00:04:35.824 --rc geninfo_unexecuted_blocks=1 00:04:35.824 00:04:35.824 ' 00:04:35.824 11:14:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.824 --rc genhtml_branch_coverage=1 00:04:35.824 --rc genhtml_function_coverage=1 00:04:35.824 --rc genhtml_legend=1 00:04:35.824 --rc geninfo_all_blocks=1 00:04:35.824 --rc geninfo_unexecuted_blocks=1 00:04:35.824 00:04:35.824 ' 00:04:35.824 11:14:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.824 --rc genhtml_branch_coverage=1 00:04:35.824 --rc genhtml_function_coverage=1 00:04:35.824 --rc genhtml_legend=1 00:04:35.824 --rc geninfo_all_blocks=1 00:04:35.824 --rc geninfo_unexecuted_blocks=1 00:04:35.824 00:04:35.824 ' 00:04:35.824 11:14:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.824 --rc genhtml_branch_coverage=1 00:04:35.824 --rc genhtml_function_coverage=1 00:04:35.824 --rc genhtml_legend=1 00:04:35.824 --rc geninfo_all_blocks=1 00:04:35.824 --rc geninfo_unexecuted_blocks=1 00:04:35.824 00:04:35.824 ' 00:04:35.824 11:14:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.824 11:14:53 -- nvmf/common.sh@7 -- # uname -s 00:04:35.824 11:14:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.824 11:14:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.824 11:14:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.824 11:14:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.824 11:14:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.824 11:14:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.824 11:14:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.824 11:14:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.824 11:14:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.824 11:14:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.824 11:14:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3e2997a5-5d7e-4ec9-92ac-75a699fb75c5 00:04:35.824 
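Each suite in this log re-sources scripts/common.sh, and the lt/cmp_versions trace above is how it decides that lcov 1.15 predates 2 and therefore needs the legacy coverage flags. A minimal stand-alone sketch of that component-wise comparison, written from the semantics visible in the trace (an illustrative rewrite, not the scripts/common.sh source):

    cmp_versions() {
        local IFS='.-:'          # split version strings on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # a missing component counts as 0, so 1.15 compares like 1.15.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "old lcov: use the --rc lcov_*_coverage=1 spelling"

Because `lt 1.15 2` succeeds on this runner, the trace goes on to export LCOV_OPTS and LCOV with the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options, exactly as the assignments above show.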
11:14:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=3e2997a5-5d7e-4ec9-92ac-75a699fb75c5 00:04:35.824 11:14:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.824 11:14:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.824 11:14:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.824 11:14:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.824 11:14:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.824 11:14:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.824 11:14:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.824 11:14:53 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:35.824 11:14:53 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:35.824 11:14:53 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:35.824 11:14:53 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:35.824 11:14:53 -- paths/export.sh@6 -- # export PATH 00:04:35.824 11:14:53 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:35.824 11:14:53 -- nvmf/common.sh@46 -- # : 0 00:04:35.824 11:14:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:35.824 11:14:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:35.824 11:14:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:35.824 11:14:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.824 11:14:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.824 11:14:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:35.824 11:14:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:35.824 11:14:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:35.824 11:14:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:35.824 11:14:53 -- spdk/autotest.sh@32 -- # uname -s 00:04:35.824 11:14:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:35.824 11:14:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:35.824 11:14:53 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.824 11:14:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:35.824 11:14:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.824 11:14:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:35.824 11:14:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:35.824 11:14:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:35.824 11:14:54 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:35.824 11:14:54 -- spdk/autotest.sh@48 -- # udevadm_pid=63214 00:04:35.824 11:14:54 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:35.824 11:14:54 -- spdk/autotest.sh@54 -- # echo 63227 00:04:35.824 11:14:54 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:35.824 11:14:54 -- spdk/autotest.sh@56 -- # echo 63228 00:04:35.824 11:14:54 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:35.824 11:14:54 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:35.824 11:14:54 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:35.824 11:14:54 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:35.824 11:14:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.824 11:14:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.824 11:14:54 -- spdk/autotest.sh@70 -- # create_test_list 00:04:35.824 11:14:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:35.824 11:14:54 -- common/autotest_common.sh@10 -- # set +x 00:04:36.083 11:14:54 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:36.083 11:14:54 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:36.083 11:14:54 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:36.083 11:14:54 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:36.083 11:14:54 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:36.083 11:14:54 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:36.083 11:14:54 -- common/autotest_common.sh@1450 -- # uname 00:04:36.083 11:14:54 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:36.083 11:14:54 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:36.083 11:14:54 -- common/autotest_common.sh@1470 -- # uname 00:04:36.083 11:14:54 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:36.083 11:14:54 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:36.083 11:14:54 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:36.083 lcov: LCOV version 1.15 00:04:36.083 11:14:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:50.963 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:50.963 geninfo: 
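The last lcov command above captures a baseline with -i (initial), recording zero counts for every instrumented file before any test executes; the geninfo warnings that follow simply mean those three ftl upgrade objects contain no functions to instrument. The overall coverage flow this sets up, sketched with standard lcov usage (the post-test capture and merge steps are assumptions, since only the baseline capture appears in this excerpt):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output

    # 1. zero-count baseline so never-executed files still show up in the report
    lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # 2. run the test suites; they write .gcda counter files next to the .gcno files
    # 3. capture the post-test counters and merge them with the baseline (assumed)
    lcov $LCOV_OPTS -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"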
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:50.963 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:50.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:50.963 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:50.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:37.691 11:15:48 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:37.691 11:15:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.691 11:15:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.691 11:15:48 -- spdk/autotest.sh@89 -- # rm -f 00:05:37.691 11:15:48 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:37.691 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:37.691 11:15:48 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:37.691 11:15:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:37.691 11:15:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:37.691 11:15:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:37.691 11:15:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:37.691 11:15:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:37.691 11:15:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:37.691 11:15:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:37.691 11:15:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:37.691 11:15:48 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:37.691 11:15:48 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:05:37.691 11:15:48 -- spdk/autotest.sh@108 -- # grep -v p 00:05:37.691 11:15:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:37.691 11:15:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:37.691 11:15:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:37.691 11:15:48 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:37.691 11:15:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:37.691 No valid GPT data, bailing 00:05:37.691 11:15:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:37.691 11:15:48 -- scripts/common.sh@393 -- # pt= 00:05:37.691 11:15:48 -- scripts/common.sh@394 -- # return 1 00:05:37.691 11:15:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:37.691 1+0 records in 00:05:37.691 1+0 records out 00:05:37.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485043 s, 216 MB/s 00:05:37.691 11:15:48 -- spdk/autotest.sh@116 -- # sync 00:05:37.691 11:15:49 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:37.691 11:15:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:37.691 11:15:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:37.691 11:15:50 -- spdk/autotest.sh@122 -- # uname -s 00:05:37.691 11:15:50 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:37.691 11:15:50 -- spdk/autotest.sh@123 -- # run_test setup.sh 
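Between pre_cleanup and the setup suite, the trace above screens /dev/nvme0n1 before touching it: the zoned-device check must report none, the partition-table probe must come back empty (hence "No valid GPT data, bailing"), and a 1 MiB dd write confirms the disk accepts I/O; 1,048,576 bytes in 0.00485 s is the 216 MB/s figure printed. A condensed sketch of that screening, with blkid standing in for the spdk-gpt.py probe the real run tries first:

    for dev in /dev/nvme*n*; do
        name=${dev##*/}
        # skip host-managed/host-aware zoned namespaces
        if [[ -e /sys/block/$name/queue/zoned ]] &&
           [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
            continue
        fi
        # refuse disks that already carry a partition table
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        dd if=/dev/zero of="$dev" bs=1M count=1    # 1 MiB smoke-test write
    done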
/home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:37.691 11:15:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.691 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.691 ************************************ 00:05:37.691 START TEST setup.sh 00:05:37.691 ************************************ 00:05:37.691 11:15:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:37.691 * Looking for test storage... 00:05:37.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.691 11:15:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.691 11:15:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.691 11:15:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.691 11:15:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.691 11:15:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.691 11:15:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.691 11:15:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.691 11:15:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.691 11:15:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.691 11:15:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.691 11:15:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.691 11:15:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.691 11:15:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.691 11:15:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.691 11:15:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.691 11:15:50 -- scripts/common.sh@344 -- # : 1 00:05:37.691 11:15:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.691 11:15:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.691 11:15:50 -- scripts/common.sh@364 -- # decimal 1 00:05:37.691 11:15:50 -- scripts/common.sh@352 -- # local d=1 00:05:37.691 11:15:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.691 11:15:50 -- scripts/common.sh@354 -- # echo 1 00:05:37.691 11:15:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.691 11:15:50 -- scripts/common.sh@365 -- # decimal 2 00:05:37.691 11:15:50 -- scripts/common.sh@352 -- # local d=2 00:05:37.691 11:15:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.691 11:15:50 -- scripts/common.sh@354 -- # echo 2 00:05:37.691 11:15:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.691 11:15:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.691 11:15:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.691 11:15:50 -- scripts/common.sh@367 -- # return 0 00:05:37.691 11:15:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.691 --rc genhtml_branch_coverage=1 00:05:37.691 --rc genhtml_function_coverage=1 00:05:37.691 --rc genhtml_legend=1 00:05:37.691 --rc geninfo_all_blocks=1 00:05:37.691 --rc geninfo_unexecuted_blocks=1 00:05:37.691 00:05:37.691 ' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.691 --rc genhtml_branch_coverage=1 00:05:37.691 --rc genhtml_function_coverage=1 00:05:37.691 --rc genhtml_legend=1 00:05:37.691 --rc geninfo_all_blocks=1 00:05:37.691 --rc geninfo_unexecuted_blocks=1 00:05:37.691 00:05:37.691 ' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.691 --rc genhtml_branch_coverage=1 00:05:37.691 --rc genhtml_function_coverage=1 00:05:37.691 --rc genhtml_legend=1 00:05:37.691 --rc geninfo_all_blocks=1 00:05:37.691 --rc geninfo_unexecuted_blocks=1 00:05:37.691 00:05:37.691 ' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.691 --rc genhtml_branch_coverage=1 00:05:37.691 --rc genhtml_function_coverage=1 00:05:37.691 --rc genhtml_legend=1 00:05:37.691 --rc geninfo_all_blocks=1 00:05:37.691 --rc geninfo_unexecuted_blocks=1 00:05:37.691 00:05:37.691 ' 00:05:37.691 11:15:50 -- setup/test-setup.sh@10 -- # uname -s 00:05:37.691 11:15:50 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:37.691 11:15:50 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:37.691 11:15:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.691 11:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.691 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.692 ************************************ 00:05:37.692 START TEST acl 00:05:37.692 ************************************ 00:05:37.692 11:15:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:37.692 * Looking for test storage... 
00:05:37.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.692 11:15:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.692 11:15:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.692 11:15:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.692 11:15:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.692 11:15:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.692 11:15:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.692 11:15:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.692 11:15:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.692 11:15:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.692 11:15:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.692 11:15:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.692 11:15:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.692 11:15:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.692 11:15:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.692 11:15:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.692 11:15:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.692 11:15:50 -- scripts/common.sh@344 -- # : 1 00:05:37.692 11:15:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.692 11:15:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.692 11:15:50 -- scripts/common.sh@364 -- # decimal 1 00:05:37.692 11:15:50 -- scripts/common.sh@352 -- # local d=1 00:05:37.692 11:15:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.692 11:15:50 -- scripts/common.sh@354 -- # echo 1 00:05:37.692 11:15:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.692 11:15:50 -- scripts/common.sh@365 -- # decimal 2 00:05:37.692 11:15:50 -- scripts/common.sh@352 -- # local d=2 00:05:37.692 11:15:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.692 11:15:50 -- scripts/common.sh@354 -- # echo 2 00:05:37.692 11:15:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.692 11:15:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.692 11:15:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.692 11:15:50 -- scripts/common.sh@367 -- # return 0 00:05:37.692 11:15:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.692 11:15:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.692 --rc genhtml_branch_coverage=1 00:05:37.692 --rc genhtml_function_coverage=1 00:05:37.692 --rc genhtml_legend=1 00:05:37.692 --rc geninfo_all_blocks=1 00:05:37.692 --rc geninfo_unexecuted_blocks=1 00:05:37.692 00:05:37.692 ' 00:05:37.692 11:15:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.692 --rc genhtml_branch_coverage=1 00:05:37.692 --rc genhtml_function_coverage=1 00:05:37.692 --rc genhtml_legend=1 00:05:37.692 --rc geninfo_all_blocks=1 00:05:37.692 --rc geninfo_unexecuted_blocks=1 00:05:37.692 00:05:37.692 ' 00:05:37.692 11:15:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.692 --rc genhtml_branch_coverage=1 00:05:37.692 --rc genhtml_function_coverage=1 00:05:37.692 --rc genhtml_legend=1 00:05:37.692 --rc geninfo_all_blocks=1 00:05:37.692 --rc geninfo_unexecuted_blocks=1 00:05:37.692 00:05:37.692 ' 00:05:37.692 11:15:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.692 --rc genhtml_branch_coverage=1 00:05:37.692 --rc genhtml_function_coverage=1 00:05:37.692 --rc genhtml_legend=1 00:05:37.692 --rc geninfo_all_blocks=1 00:05:37.692 --rc geninfo_unexecuted_blocks=1 00:05:37.692 00:05:37.692 ' 00:05:37.692 11:15:50 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:37.692 11:15:50 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:37.692 11:15:50 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:37.692 11:15:50 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:37.692 11:15:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:37.692 11:15:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:37.692 11:15:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:37.692 11:15:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:37.692 11:15:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:37.692 11:15:50 -- setup/acl.sh@12 -- # devs=() 00:05:37.692 11:15:50 -- setup/acl.sh@12 -- # declare -a devs 00:05:37.692 11:15:50 -- setup/acl.sh@13 -- # drivers=() 00:05:37.692 11:15:50 -- setup/acl.sh@13 -- # declare -A drivers 00:05:37.692 11:15:50 -- setup/acl.sh@51 -- # setup reset 00:05:37.692 11:15:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.692 11:15:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.692 11:15:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:37.692 11:15:51 -- setup/acl.sh@16 -- # local dev driver 00:05:37.692 11:15:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.692 11:15:51 -- setup/acl.sh@15 -- # setup output status 00:05:37.692 11:15:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.692 11:15:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:37.692 Hugepages 00:05:37.692 node hugesize free / total 00:05:37.692 11:15:51 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:37.692 11:15:51 -- setup/acl.sh@19 -- # continue 00:05:37.692 11:15:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.692 00:05:37.692 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.692 11:15:51 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:37.692 11:15:51 -- setup/acl.sh@19 -- # continue 00:05:37.692 11:15:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.692 11:15:51 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:37.692 11:15:51 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:37.692 11:15:51 -- setup/acl.sh@20 -- # continue 00:05:37.692 11:15:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.692 11:15:51 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:37.692 11:15:51 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:37.692 11:15:51 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:37.692 11:15:51 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:37.692 11:15:51 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:37.692 11:15:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.692 11:15:51 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:37.692 11:15:51 -- setup/acl.sh@54 -- # run_test denied denied 00:05:37.692 11:15:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.692 11:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.692 11:15:51 -- 
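The `read -r _ dev _ _ _ driver _` loop above is acl.sh collecting candidate controllers from `setup.sh status`: rows whose second field looks like a PCI BDF are kept, hugepage and header rows fall through, and NVMe-bound devices not listed in PCI_BLOCKED are recorded. A sketch of that loop assembled from the trace (helper-free and slightly simplified):

    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue            # skip non-BDF rows
        if [[ $driver == nvme && $PCI_BLOCKED != *"$dev"* ]]; then
            devs+=("$dev")
            drivers["$dev"]=$driver
        fi
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)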
common/autotest_common.sh@10 -- # set +x 00:05:37.692 ************************************ 00:05:37.692 START TEST denied 00:05:37.692 ************************************ 00:05:37.692 11:15:51 -- common/autotest_common.sh@1114 -- # denied 00:05:37.692 11:15:51 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:37.692 11:15:51 -- setup/acl.sh@38 -- # setup output config 00:05:37.692 11:15:51 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:37.692 11:15:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.692 11:15:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.692 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:37.692 11:15:52 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:37.692 11:15:52 -- setup/acl.sh@28 -- # local dev driver 00:05:37.692 11:15:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:37.692 11:15:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:37.692 11:15:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:37.692 11:15:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:37.692 11:15:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:37.692 11:15:52 -- setup/acl.sh@41 -- # setup reset 00:05:37.692 11:15:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.692 11:15:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.692 00:05:37.692 real 0m1.349s 00:05:37.692 user 0m0.366s 00:05:37.692 sys 0m1.040s 00:05:37.692 11:15:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.692 ************************************ 00:05:37.692 END TEST denied 00:05:37.692 ************************************ 00:05:37.692 11:15:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.692 11:15:53 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:37.692 11:15:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.692 11:15:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.692 11:15:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.692 ************************************ 00:05:37.692 START TEST allowed 00:05:37.692 ************************************ 00:05:37.692 11:15:53 -- common/autotest_common.sh@1114 -- # allowed 00:05:37.692 11:15:53 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:37.692 11:15:53 -- setup/acl.sh@45 -- # setup output config 00:05:37.692 11:15:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.692 11:15:53 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:37.692 11:15:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.692 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.692 11:15:54 -- setup/acl.sh@47 -- # verify 00:05:37.692 11:15:54 -- setup/acl.sh@28 -- # local dev driver 00:05:37.692 11:15:54 -- setup/acl.sh@48 -- # setup reset 00:05:37.692 11:15:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.692 11:15:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.692 00:05:37.692 real 0m1.492s 00:05:37.692 user 0m0.302s 00:05:37.692 sys 0m1.243s 00:05:37.692 11:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.692 ************************************ 00:05:37.692 END TEST allowed 00:05:37.692 ************************************ 00:05:37.692 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:37.692 00:05:37.692 real 0m3.907s 
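Taken together, the denied and allowed tests above exercise the two halves of setup.sh's allow/deny contract: PCI_BLOCKED keeps a controller on its kernel driver, while PCI_ALLOWED restricts rebinding to the listed BDFs. Usage distilled from the output in this log (the setup.sh internals are not shown here):

    # deny: the controller is reported as skipped and left alone
    PCI_BLOCKED=' 0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
    #   -> 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0

    # allow: only the listed controller is rebound for userspace I/O
    PCI_ALLOWED='0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
    #   -> 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic

    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # hand devices back to the kernel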
00:05:37.692 user 0m1.147s 00:05:37.692 sys 0m2.929s 00:05:37.693 11:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.693 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:37.693 ************************************ 00:05:37.693 END TEST acl 00:05:37.693 ************************************ 00:05:37.693 11:15:54 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:37.693 11:15:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.693 11:15:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.693 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:37.693 ************************************ 00:05:37.693 START TEST hugepages 00:05:37.693 ************************************ 00:05:37.693 11:15:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:37.693 * Looking for test storage... 00:05:37.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.693 11:15:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.693 11:15:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.693 11:15:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.693 11:15:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.693 11:15:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.693 11:15:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.693 11:15:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.693 11:15:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.693 11:15:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.693 11:15:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.693 11:15:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.693 11:15:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.693 11:15:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.693 11:15:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.693 11:15:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.693 11:15:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.693 11:15:54 -- scripts/common.sh@344 -- # : 1 00:05:37.693 11:15:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.693 11:15:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.693 11:15:54 -- scripts/common.sh@364 -- # decimal 1 00:05:37.693 11:15:54 -- scripts/common.sh@352 -- # local d=1 00:05:37.693 11:15:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.693 11:15:54 -- scripts/common.sh@354 -- # echo 1 00:05:37.693 11:15:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.693 11:15:54 -- scripts/common.sh@365 -- # decimal 2 00:05:37.693 11:15:54 -- scripts/common.sh@352 -- # local d=2 00:05:37.693 11:15:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.693 11:15:54 -- scripts/common.sh@354 -- # echo 2 00:05:37.693 11:15:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.693 11:15:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.693 11:15:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.693 11:15:54 -- scripts/common.sh@367 -- # return 0 00:05:37.693 11:15:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.693 11:15:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.693 --rc genhtml_branch_coverage=1 00:05:37.693 --rc genhtml_function_coverage=1 00:05:37.693 --rc genhtml_legend=1 00:05:37.693 --rc geninfo_all_blocks=1 00:05:37.693 --rc geninfo_unexecuted_blocks=1 00:05:37.693 00:05:37.693 ' 00:05:37.693 11:15:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.693 --rc genhtml_branch_coverage=1 00:05:37.693 --rc genhtml_function_coverage=1 00:05:37.693 --rc genhtml_legend=1 00:05:37.693 --rc geninfo_all_blocks=1 00:05:37.693 --rc geninfo_unexecuted_blocks=1 00:05:37.693 00:05:37.693 ' 00:05:37.693 11:15:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.693 --rc genhtml_branch_coverage=1 00:05:37.693 --rc genhtml_function_coverage=1 00:05:37.693 --rc genhtml_legend=1 00:05:37.693 --rc geninfo_all_blocks=1 00:05:37.693 --rc geninfo_unexecuted_blocks=1 00:05:37.693 00:05:37.693 ' 00:05:37.693 11:15:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.693 --rc genhtml_branch_coverage=1 00:05:37.693 --rc genhtml_function_coverage=1 00:05:37.693 --rc genhtml_legend=1 00:05:37.693 --rc geninfo_all_blocks=1 00:05:37.693 --rc geninfo_unexecuted_blocks=1 00:05:37.693 00:05:37.693 ' 00:05:37.693 11:15:54 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:37.693 11:15:54 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:37.693 11:15:54 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:37.693 11:15:54 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:37.693 11:15:54 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:37.693 11:15:54 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:37.693 11:15:54 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:37.693 11:15:54 -- setup/common.sh@18 -- # local node= 00:05:37.693 11:15:54 -- setup/common.sh@19 -- # local var val 00:05:37.693 11:15:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.693 11:15:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.693 11:15:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.693 11:15:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.693 11:15:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.693 
11:15:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 1725804 kB' 'MemAvailable: 7360776 kB' 'Buffers: 39952 kB' 'Cached: 5694164 kB' 'SwapCached: 0 kB' 'Active: 415080 kB' 'Inactive: 5431416 kB' 'Active(anon): 123728 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431416 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141312 kB' 'Mapped: 58160 kB' 'Shmem: 2600 kB' 'KReclaimable: 233756 kB' 'Slab: 317824 kB' 'SReclaimable: 233756 kB' 'SUnreclaim: 84068 kB' 'KernelStack: 5040 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026008 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 
00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.693 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.693 11:15:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 
11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # continue 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.694 11:15:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.694 11:15:54 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.694 11:15:54 -- setup/common.sh@33 -- # echo 2048 00:05:37.694 11:15:54 -- setup/common.sh@33 -- # return 0 00:05:37.694 11:15:54 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:37.694 11:15:54 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:37.694 11:15:54 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:37.694 11:15:54 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:37.694 11:15:54 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:37.694 11:15:54 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:37.694 11:15:54 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:37.694 11:15:54 -- setup/hugepages.sh@207 -- # get_nodes 00:05:37.694 11:15:54 -- setup/hugepages.sh@27 -- # local node 00:05:37.694 11:15:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.694 11:15:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:37.694 11:15:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:37.694 11:15:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.694 11:15:54 -- setup/hugepages.sh@208 -- # clear_hp 00:05:37.694 11:15:54 -- setup/hugepages.sh@37 -- # local node hp 00:05:37.694 11:15:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:37.694 11:15:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.694 11:15:54 -- setup/hugepages.sh@41 -- # echo 0 00:05:37.694 11:15:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.695 11:15:54 -- setup/hugepages.sh@41 -- # echo 0 00:05:37.695 11:15:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:37.695 11:15:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:37.695 11:15:54 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:37.695 11:15:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.695 11:15:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.695 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:05:37.695 ************************************ 00:05:37.695 START TEST default_setup 00:05:37.695 ************************************ 00:05:37.695 11:15:54 -- 
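The long field-by-field walk above is setup/common.sh's get_meminfo scanning /proc/meminfo until it reaches Hugepagesize and echoing 2048. A compact equivalent for reference, with sed/awk standing in for the per-line bash loop purely for brevity (the real helper stays in pure bash, as the trace shows):

    get_meminfo() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node files prefix each line with "Node <n> "; strip that, then
        # split on the colon and print the value for the requested key
        sed 's/^Node [0-9]* //' "$mem_f" |
            awk -F': *' -v key="$get" '$1 == key { sub(/ kB$/, "", $2); print $2; exit }'
    }

    get_meminfo Hugepagesize    # -> 2048 on this runner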
common/autotest_common.sh@1114 -- # default_setup 00:05:37.695 11:15:54 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:37.695 11:15:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:37.695 11:15:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:37.695 11:15:54 -- setup/hugepages.sh@51 -- # shift 00:05:37.695 11:15:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:37.695 11:15:54 -- setup/hugepages.sh@52 -- # local node_ids 00:05:37.695 11:15:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:37.695 11:15:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:37.695 11:15:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:37.695 11:15:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:37.695 11:15:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:37.695 11:15:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:37.695 11:15:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:37.695 11:15:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:37.695 11:15:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:37.695 11:15:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:37.695 11:15:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.695 11:15:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:37.695 11:15:54 -- setup/hugepages.sh@73 -- # return 0 00:05:37.695 11:15:54 -- setup/hugepages.sh@137 -- # setup output 00:05:37.695 11:15:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.695 11:15:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:37.695 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.695 11:15:55 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:37.695 11:15:55 -- setup/hugepages.sh@89 -- # local node 00:05:37.695 11:15:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:37.695 11:15:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:37.695 11:15:55 -- setup/hugepages.sh@92 -- # local surp 00:05:37.695 11:15:55 -- setup/hugepages.sh@93 -- # local resv 00:05:37.695 11:15:55 -- setup/hugepages.sh@94 -- # local anon 00:05:37.695 11:15:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:37.695 11:15:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:37.695 11:15:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:37.695 11:15:55 -- setup/common.sh@18 -- # local node= 00:05:37.695 11:15:55 -- setup/common.sh@19 -- # local var val 00:05:37.695 11:15:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.695 11:15:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.695 11:15:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.695 11:15:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.695 11:15:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.695 11:15:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.695 11:15:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3789004 kB' 'MemAvailable: 9423916 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416300 kB' 'Inactive: 5431420 kB' 'Active(anon): 124948 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431420 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 
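The nr_hugepages=1024 that default_setup just requested is straight division: the 2097152 kB (2 GiB) pool asked for by get_test_nr_hugepages, split into 2048 kB default-size pages, all placed on the single node this VM exposes:

    size_kb=2097152      # requested hugepage pool, in kB (2 GiB)
    page_kb=2048         # Hugepagesize reported by /proc/meminfo above
    echo $(( size_kb / page_kb ))    # -> 1024 pages on node 0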
'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142528 kB' 'Mapped: 58116 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317844 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84152 kB' 'KernelStack: 5008 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.695 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.695 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 
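The scan running through this stretch of the trace (and the identical passes that follow for HugePages_Surp, HugePages_Rsvd and HugePages_Total) is setup/common.sh's get_meminfo walking every key of the meminfo dump until it reaches the one it was asked for; the backslash-escaped patterns in the [[ ]] tests are just xtrace's rendering of a quoted, literal comparison. A minimal sketch of that loop, under the assumption that it behaves as the trace suggests — the helper name sketch_get_meminfo and the sed-based "Node N" prefix strip are illustrative, not the script's own code:

sketch_get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # With a node argument, read the per-node file instead; its lines carry
    # a "Node <n> " prefix, which the real script strips via extglob expansion.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Quoted "$get" forces a literal match -- hence the \A\n\o\n... escapes
        # in the xtrace. Every non-matching key is skipped (continue).
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# sketch_get_meminfo Hugepagesize      -> 2048   (kB, as in the dumps above)
# sketch_get_meminfo HugePages_Surp 0  -> node 0's surplus hugepage count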
00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.696 11:15:55 -- setup/common.sh@33 -- # echo 0 00:05:37.696 11:15:55 -- setup/common.sh@33 -- # return 0 00:05:37.696 11:15:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:37.696 11:15:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:37.696 11:15:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.696 11:15:55 -- setup/common.sh@18 -- # local node= 00:05:37.696 11:15:55 -- setup/common.sh@19 -- # local var val 00:05:37.696 11:15:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.696 11:15:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.696 11:15:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.696 11:15:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.696 11:15:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.696 11:15:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3789004 kB' 'MemAvailable: 9423916 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416264 kB' 'Inactive: 5431420 kB' 'Active(anon): 124912 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431420 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142456 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317844 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84152 kB' 'KernelStack: 4976 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.696 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.696 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # 
continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 
-- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.697 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.697 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
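The match just above is the scan reaching its target key, and the echo 0 / return 0 that follow hand HugePages_Surp's value back to verify_nr_hugepages, which stores it as surp=0; the same round trip then repeats for HugePages_Rsvd (resv=0) and HugePages_Total. The arithmetic being verified is simple: the test asked get_test_nr_hugepages for 2097152 kB, which at the 2048 kB default page size works out to the 1024 pages visible in every meminfo dump here. A hedged sketch of that bookkeeping, reusing the hypothetical sketch_get_meminfo helper from above (variable names are illustrative, not the script's own):

size_kb=2097152 hugepagesize_kb=2048
nr_hugepages=$((size_kb / hugepagesize_kb))      # 2097152 / 2048 = 1024
anon=$(sketch_get_meminfo AnonHugePages)         # 0 -> no THP interference
surp=$(sketch_get_meminfo HugePages_Surp)        # 0 surplus pages
resv=$(sketch_get_meminfo HugePages_Rsvd)        # 0 reserved pages
total=$(sketch_get_meminfo HugePages_Total)      # 1024
# Mirrors the (( 1024 == nr_hugepages + surp + resv )) test that appears
# further down in the trace:
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"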
00:05:37.697 11:15:55 -- setup/common.sh@33 -- # echo 0 00:05:37.697 11:15:55 -- setup/common.sh@33 -- # return 0 00:05:37.697 11:15:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:37.697 11:15:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:37.698 11:15:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:37.698 11:15:55 -- setup/common.sh@18 -- # local node= 00:05:37.698 11:15:55 -- setup/common.sh@19 -- # local var val 00:05:37.698 11:15:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.698 11:15:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.698 11:15:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.698 11:15:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.698 11:15:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.698 11:15:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3791308 kB' 'MemAvailable: 9426228 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 415896 kB' 'Inactive: 5431428 kB' 'Active(anon): 124544 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142140 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317832 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84140 kB' 'KernelStack: 4992 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # 
continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.698 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.698 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 
11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.699 11:15:55 -- setup/common.sh@33 -- # echo 0 00:05:37.699 11:15:55 -- setup/common.sh@33 -- # return 0 00:05:37.699 11:15:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:37.699 nr_hugepages=1024 00:05:37.699 11:15:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:37.699 resv_hugepages=0 00:05:37.699 11:15:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:37.699 surplus_hugepages=0 00:05:37.699 11:15:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:37.699 anon_hugepages=0 00:05:37.699 11:15:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:37.699 11:15:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.699 11:15:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:37.699 11:15:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:37.699 11:15:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:37.699 11:15:55 -- setup/common.sh@18 -- # local node= 00:05:37.699 11:15:55 -- setup/common.sh@19 -- # local var val 00:05:37.699 11:15:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.699 11:15:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.699 11:15:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.699 11:15:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.699 11:15:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.699 11:15:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3793152 kB' 'MemAvailable: 9428072 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416120 kB' 'Inactive: 5431428 kB' 
'Active(anon): 124768 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142364 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317816 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84124 kB' 'KernelStack: 4976 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.699 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.699 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # 
continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.700 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.700 11:15:55 -- setup/common.sh@33 -- # 
echo 1024 00:05:37.700 11:15:55 -- setup/common.sh@33 -- # return 0 00:05:37.700 11:15:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.700 11:15:55 -- setup/hugepages.sh@112 -- # get_nodes 00:05:37.700 11:15:55 -- setup/hugepages.sh@27 -- # local node 00:05:37.700 11:15:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.700 11:15:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:37.700 11:15:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:37.700 11:15:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.700 11:15:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:37.700 11:15:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:37.700 11:15:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:37.700 11:15:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.700 11:15:55 -- setup/common.sh@18 -- # local node=0 00:05:37.700 11:15:55 -- setup/common.sh@19 -- # local var val 00:05:37.700 11:15:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.700 11:15:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.700 11:15:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:37.700 11:15:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:37.700 11:15:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.700 11:15:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.700 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3794628 kB' 'MemUsed: 8451696 kB' 'SwapCached: 0 kB' 'Active: 416092 kB' 'Inactive: 5431428 kB' 'Active(anon): 124740 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 5734120 kB' 'Mapped: 58112 kB' 'AnonPages: 142316 kB' 'Shmem: 2592 kB' 'KernelStack: 4960 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233692 kB' 'Slab: 317808 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # continue 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.701 11:15:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.701 11:15:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.701 11:15:55 -- setup/common.sh@33 -- # echo 0 00:05:37.701 11:15:55 -- setup/common.sh@33 -- # return 0 00:05:37.701 11:15:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:37.701 11:15:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:37.701 11:15:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:37.701 11:15:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:37.701 node0=1024 expecting 1024 00:05:37.702 11:15:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:37.702 11:15:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:37.702 00:05:37.702 real 0m0.895s 00:05:37.702 user 0m0.279s 00:05:37.702 sys 0m0.610s 00:05:37.702 11:15:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.702 11:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.702 ************************************ 00:05:37.702 END TEST default_setup 00:05:37.702 ************************************ 00:05:37.702 11:15:55 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:37.702 11:15:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.702 11:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.702 11:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.702 ************************************ 00:05:37.702 START TEST 
per_node_1G_alloc 00:05:37.702 ************************************ 00:05:37.702 11:15:55 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:37.702 11:15:55 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:37.702 11:15:55 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:37.702 11:15:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:37.702 11:15:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:37.702 11:15:55 -- setup/hugepages.sh@51 -- # shift 00:05:37.702 11:15:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:37.702 11:15:55 -- setup/hugepages.sh@52 -- # local node_ids 00:05:37.702 11:15:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:37.702 11:15:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:37.702 11:15:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:37.702 11:15:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:37.702 11:15:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:37.702 11:15:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:37.702 11:15:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:37.702 11:15:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:37.702 11:15:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:37.702 11:15:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:37.702 11:15:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.702 11:15:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:37.702 11:15:55 -- setup/hugepages.sh@73 -- # return 0 00:05:37.702 11:15:55 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:37.702 11:15:55 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:37.702 11:15:55 -- setup/hugepages.sh@146 -- # setup output 00:05:37.702 11:15:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.702 11:15:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:37.962 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.225 11:15:56 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:38.225 11:15:56 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:38.225 11:15:56 -- setup/hugepages.sh@89 -- # local node 00:05:38.225 11:15:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:38.225 11:15:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:38.225 11:15:56 -- setup/hugepages.sh@92 -- # local surp 00:05:38.225 11:15:56 -- setup/hugepages.sh@93 -- # local resv 00:05:38.225 11:15:56 -- setup/hugepages.sh@94 -- # local anon 00:05:38.225 11:15:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:38.225 11:15:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:38.225 11:15:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:38.225 11:15:56 -- setup/common.sh@18 -- # local node= 00:05:38.225 11:15:56 -- setup/common.sh@19 -- # local var val 00:05:38.225 11:15:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.225 11:15:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.225 11:15:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.225 11:15:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.225 11:15:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.225 11:15:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.225 
11:15:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4835560 kB' 'MemAvailable: 10470480 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416372 kB' 'Inactive: 5431428 kB' 'Active(anon): 125020 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142564 kB' 'Mapped: 58132 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317856 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84164 kB' 'KernelStack: 5008 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.225 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.225 11:15:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 
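The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s... ]]" followed by "continue" above are not a failure mode: they are xtrace output of setup/common.sh's get_meminfo scanning every row of /proc/meminfo until the requested key matches, so each non-matching field costs exactly one [[ ]] record plus one continue record in the log. A minimal sketch of that parsing idiom, for reference only (a stand-alone re-implementation assuming bash >= 4 for mapfile, not the repository's setup/common.sh itself):

shopt -s extglob   # needed for the +([0-9]) pattern below

# Print the value of one /proc/meminfo key, optionally for a single NUMA
# node; names mirror the trace above but this is a sketch, not the real code.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    # Per-node counters live under sysfs and carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix, if any
    for line in "${mem[@]}"; do
        # 'MemTotal:   12246324 kB' -> var=MemTotal val=12246324 _=kB
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # kB for sizes, a bare page count for HugePages_*
            return 0
        fi
    done
    return 1
}

# e.g. get_meminfo HugePages_Total, or get_meminfo HugePages_Surp 0 for node 0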
00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 
00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # 
continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.226 11:15:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.226 11:15:56 -- setup/common.sh@33 -- # echo 0 00:05:38.226 11:15:56 -- setup/common.sh@33 -- # return 0 00:05:38.226 11:15:56 -- setup/hugepages.sh@97 -- # anon=0 00:05:38.226 11:15:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:38.226 11:15:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.226 11:15:56 -- setup/common.sh@18 -- # local node= 00:05:38.226 11:15:56 -- setup/common.sh@19 -- # local var val 00:05:38.226 11:15:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.226 11:15:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.226 11:15:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.226 11:15:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.226 11:15:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.226 11:15:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.226 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4835560 kB' 'MemAvailable: 10470480 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416140 kB' 'Inactive: 5431428 kB' 'Active(anon): 124788 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142296 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317856 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84164 kB' 'KernelStack: 4960 kB' 'PageTables: 4108 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 
11:15:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.227 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.227 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.228 11:15:56 -- setup/common.sh@33 -- # echo 0 00:05:38.228 11:15:56 -- setup/common.sh@33 -- # return 0 00:05:38.228 11:15:56 -- setup/hugepages.sh@99 -- # surp=0 00:05:38.228 11:15:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:38.228 11:15:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:38.228 11:15:56 -- setup/common.sh@18 -- # local node= 00:05:38.228 11:15:56 -- setup/common.sh@19 -- # local var val 00:05:38.228 11:15:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.228 11:15:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.228 11:15:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.228 11:15:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.228 11:15:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.228 11:15:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4835560 kB' 'MemAvailable: 10470480 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416120 kB' 'Inactive: 5431428 kB' 'Active(anon): 124768 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142276 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317856 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84164 kB' 'KernelStack: 4960 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.228 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.228 11:15:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 
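Past the AnonHugePages, HugePages_Surp, and HugePages_Rsvd scans, the verification traced here reduces to plain accounting: the kernel's HugePages_Total must equal the page count the test requested plus any surplus and reserved pages. In this run the request was 1 GiB on node 0 with 2048 kB pages, and all of the extra terms come back zero. A sketch of the arithmetic, using the get_meminfo sketch above (the values in comments are what this particular run reads back, not constants):

size_kb=1048576                         # from get_test_nr_hugepages 1048576 0
page_kb=$(get_meminfo Hugepagesize)     # 2048 kB on this machine
nr_hugepages=$(( size_kb / page_kb ))   # 1048576 / 2048 = 512 pages
anon=$(get_meminfo AnonHugePages)       # 0 (THP is at [madvise], not forced)
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 512 in this run
# The identity hugepages.sh@107/@110 asserts:
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool is consistent"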
00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.229 11:15:56 -- setup/common.sh@33 -- # echo 0 00:05:38.229 11:15:56 -- setup/common.sh@33 -- # return 0 00:05:38.229 11:15:56 -- setup/hugepages.sh@100 -- # resv=0 00:05:38.229 11:15:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:38.229 nr_hugepages=512 00:05:38.229 resv_hugepages=0 00:05:38.229 surplus_hugepages=0 00:05:38.229 anon_hugepages=0 00:05:38.229 11:15:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:38.229 11:15:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:38.229 11:15:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:38.229 11:15:56 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:38.229 11:15:56 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:38.229 11:15:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:38.229 11:15:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:38.229 11:15:56 -- setup/common.sh@18 -- # local node= 00:05:38.229 11:15:56 -- setup/common.sh@19 -- # local var val 00:05:38.229 11:15:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.229 11:15:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.229 11:15:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.229 11:15:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.229 11:15:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.229 11:15:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
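The long read/continue runs in this trace all come from one small helper. As a minimal sketch of that pattern (assumptions: this is an illustration written for this log, not the verbatim setup/common.sh source; only the function name and file paths are taken from the trace):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern the trace shows: walk a meminfo
    # file with IFS=': ' and echo the value of one requested key.
    get_meminfo() {                      # usage: get_meminfo <Key> [<node>]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Node files prefix every line with "Node <n> "; strip that so both
        # layouts parse identically, then scan for the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

In the run above, get_meminfo HugePages_Rsvd prints 0 from /proc/meminfo; the two-argument form (get_meminfo HugePages_Surp 0) is what reads /sys/devices/system/node/node0/meminfo further down.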
}") 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4835560 kB' 'MemAvailable: 10470480 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416192 kB' 'Inactive: 5431428 kB' 'Active(anon): 124840 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142424 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317852 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84160 kB' 'KernelStack: 4992 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.229 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.229 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 
11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.230 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.230 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.231 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.231 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.491 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.491 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.491 11:15:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.491 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.491 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.491 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.491 11:15:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.491 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.491 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.491 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.491 11:15:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.491 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.492 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.492 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.492 11:15:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.492 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.492 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.492 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.492 11:15:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.492 11:15:56 -- setup/common.sh@32 -- # continue 00:05:38.492 11:15:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.492 11:15:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.492 11:15:56 -- setup/common.sh@32 -- # [[ Unaccepted == 
00:05:38.492 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:38.492 11:15:56 -- setup/common.sh@33 -- # echo 512
00:05:38.492 11:15:56 -- setup/common.sh@33 -- # return 0
00:05:38.492 11:15:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:38.492 11:15:56 -- setup/hugepages.sh@112 -- # get_nodes
00:05:38.492 11:15:56 -- setup/hugepages.sh@27 -- # local node
00:05:38.492 11:15:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:38.492 11:15:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:38.492 11:15:56 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:38.492 11:15:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:38.492 11:15:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:38.492 11:15:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:38.492 11:15:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:38.492 11:15:56 -- setup/common.sh@18 -- # local node=0
00:05:38.492 11:15:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:38.492 11:15:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:38.492 11:15:56 -- setup/common.sh@28 -- # mapfile -t mem
00:05:38.492 11:15:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:38.492 11:15:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4835560 kB' 'MemUsed: 7410764 kB' 'SwapCached: 0 kB' 'Active: 416196 kB' 'Inactive: 5431428 kB' 'Active(anon): 124844 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 5734120 kB' 'Mapped: 58112 kB' 'AnonPages: 142420 kB' 'Shmem: 2592 kB' 'KernelStack: 4992 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233692 kB' 'Slab: 317844 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 read and skip each node0 field, MemTotal through HugePages_Free, while scanning for HugePages_Surp]
00:05:38.493 11:15:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.493 11:15:56 -- setup/common.sh@33 -- # echo 0
00:05:38.493 11:15:56 -- setup/common.sh@33 -- # return 0
00:05:38.493 11:15:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:38.493 11:15:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:38.493 11:15:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:38.493 11:15:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:38.493 11:15:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:05:38.493 11:15:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:38.493
real	0m0.647s
user	0m0.259s
sys	0m0.399s
00:05:38.493 ************************************
00:05:38.493 END TEST per_node_1G_alloc
00:05:38.493 ************************************
00:05:38.493 11:15:56 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:38.493 11:15:56 -- common/autotest_common.sh@10 -- # set +x
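The pass/fail line just printed ('node0=512 expecting 512') is plain arithmetic over the values fetched above: the pages requested must equal allocated + surplus + reserved, and the single node must hold all of them. A hypothetical standalone version of that check, reusing the get_meminfo sketch from earlier (the real logic lives in setup/hugepages.sh's verify path):

    # Hypothetical re-creation of the per_node_1G_alloc verification.
    want=512
    total=$(get_meminfo HugePages_Total)      # 512 in the trace above
    surp=$(get_meminfo HugePages_Surp)        # 0
    resv=$(get_meminfo HugePages_Rsvd)        # 0
    node0=$(get_meminfo HugePages_Total 0)    # per-node view of node 0
    # All requested pages must be accounted for...
    (( want == total + surp + resv )) || exit 1
    # ...and node 0 must hold exactly the expected count.
    echo "node0=$node0 expecting $want"       # same line the log prints
    [[ $node0 == "$want" ]]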
00:05:38.493 11:15:56 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:38.493 11:15:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:38.493 11:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:38.493 11:15:56 -- common/autotest_common.sh@10 -- # set +x
00:05:38.493 ************************************
00:05:38.493 START TEST even_2G_alloc
00:05:38.493 ************************************
00:05:38.493 11:15:56 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:05:38.493 11:15:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:38.493 11:15:56 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:38.493 11:15:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:38.493 11:15:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:38.493 11:15:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:38.493 11:15:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[trace condensed: setup/hugepages.sh@62-84 set user_nodes=(), _nr_hugepages=1024, _no_nodes=1 and nodes_test[0]=1024]
00:05:38.493 11:15:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:38.493 11:15:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:38.493 11:15:56 -- setup/hugepages.sh@153 -- # setup output
00:05:38.493 11:15:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:38.493 11:15:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:38.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:38.753 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
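Before the test body runs, the trace exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and re-runs scripts/setup.sh, which rebalances the 2048 kB hugepage pool. A hedged sketch of what an even split across NUMA nodes looks like at the sysfs level (illustrative only, not the setup.sh implementation; the sysfs paths are standard kernel ones, and writing them needs root):

    # Illustrative even split of NRHUGE 2048kB pages across all nodes.
    shopt -s nullglob
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    (( ${#nodes[@]} )) || exit 1              # no NUMA nodes visible
    per_node=$(( NRHUGE / ${#nodes[@]} ))     # 1024 pages / 1 node here
    for n in "${nodes[@]}"; do
        echo "$per_node" |
            sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done

On this single-node VM the split is trivial (all 1024 pages land on node0); the point of HUGE_EVEN_ALLOC is that multi-node hosts get the same count per node instead of one node absorbing the whole pool.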
00:05:39.014 11:15:57 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
[trace condensed: setup/hugepages.sh@89-94 declare locals node, sorted_t, sorted_s, surp, resv, anon]
00:05:39.014 11:15:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.014 11:15:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.014 11:15:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.014 11:15:57 -- setup/common.sh@18 -- # local node=
00:05:39.014 11:15:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.014 11:15:57 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.014 11:15:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.014 11:15:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3790080 kB' 'MemAvailable: 9425000 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416336 kB' 'Inactive: 5431428 kB' 'Active(anon): 124984 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142496 kB' 'Mapped: 58072 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317868 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84176 kB' 'KernelStack: 5020 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 read and skip each field, MemTotal through HardwareCorrupted, while scanning for AnonHugePages]
00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.016 11:15:57 -- setup/common.sh@33 -- # echo 0
00:05:39.016 11:15:57 -- setup/common.sh@33 -- # return 0
00:05:39.016 11:15:57 -- setup/hugepages.sh@97 -- # anon=0
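The anon=0 just assigned is gated: verify_nr_hugepages only samples AnonHugePages when transparent hugepages are not globally disabled, which is what the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 encodes (the sysfs file brackets the active mode, here [madvise]). A small sketch of the same gate, reusing the get_meminfo sketch from earlier:

    # Sample AnonHugePages only when THP is not set to [never].
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    fi
    echo "anon=$anon"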
'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317860 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84168 kB' 'KernelStack: 5004 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.016 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.016 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 
[trace trimmed: setup/common.sh@31-32 stepped through the remaining /proc/meminfo keys (Active(file) through HugePages_Rsvd), executing "continue" for every key that is not HugePages_Surp]
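A note on the pattern spelling that recurs throughout this trace (the match for HugePages_Surp appears just below): the backslashes in \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not in the script source. When the right-hand side of a [[ ... == ... ]] comparison is a quoted expansion, bash's xtrace prints each character escaped to show the value is matched literally rather than as a glob. A short demo of the effect (hypothetical snippet, not from the repo):

#!/usr/bin/env bash
set -x                       # xtrace on, as the test harness runs with
get=HugePages_Surp
# xtrace renders the quoted "$get" as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
[[ MemTotal == "$get" ]] || echo "no match -> continue scanning"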
00:05:39.018 11:15:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.018 11:15:57 -- setup/common.sh@33 -- # echo 0
00:05:39.018 11:15:57 -- setup/common.sh@33 -- # return 0
00:05:39.018 11:15:57 -- setup/hugepages.sh@99 -- # surp=0
00:05:39.018 11:15:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.018 11:15:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.018 11:15:57 -- setup/common.sh@18 -- # local node=
00:05:39.018 11:15:57 -- setup/common.sh@19 -- # local var val
00:05:39.018 11:15:57 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.018 11:15:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.018 11:15:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.018 11:15:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.018 11:15:57 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.018 11:15:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.018 11:15:57 -- setup/common.sh@31 -- # IFS=': '
00:05:39.018 11:15:57 -- setup/common.sh@31 -- # read -r var val _
00:05:39.018 11:15:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3791360 kB' 'MemAvailable: 9426280 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416472 kB' 'Inactive: 5431428 kB' 'Active(anon): 125120 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142632 kB' 'Mapped: 58072 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317856 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84164 kB' 'KernelStack: 5004 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace trimmed: the identical key-by-key scan repeated for HugePages_Rsvd; every key from MemTotal through HugePages_Free hit "continue"]
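For orientation, the scans in this trace are all the same helper: setup/common.sh's get_meminfo snapshots meminfo into an array with mapfile, then walks it with IFS=': ' and read -r var val _ until var equals the requested key. A self-contained sketch of that scan pattern, simplified to stream /proc/meminfo directly (hypothetical code, not the repo's exact helper, which also handles per-node sysfs files):

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo scan seen in this trace (simplified as noted above).
get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Rsvd:      0" into var=HugePages_Rsvd, val=0
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    echo 0   # assumed fallback when the key is absent
}

get_meminfo HugePages_Rsvd   # prints 0 on the box in this log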
00:05:39.019 11:15:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.019 11:15:57 -- setup/common.sh@33 -- # echo 0
00:05:39.019 11:15:57 -- setup/common.sh@33 -- # return 0
00:05:39.019 11:15:57 -- setup/hugepages.sh@100 -- # resv=0
00:05:39.019 11:15:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:39.019 11:15:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:39.019 11:15:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:39.019 11:15:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:39.019 11:15:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:39.019 11:15:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:39.019 11:15:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.019 11:15:57 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.019 11:15:57 -- setup/common.sh@18 -- # local node=
00:05:39.019 11:15:57 -- setup/common.sh@19 -- # local var val
00:05:39.019 11:15:57 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.020 11:15:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.020 11:15:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.020 11:15:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.020 11:15:57 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.020 11:15:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.020 11:15:57 -- setup/common.sh@31 -- # IFS=': '
00:05:39.020 11:15:57 -- setup/common.sh@31 -- # read -r var val _
00:05:39.020 11:15:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3792220 kB' 'MemAvailable: 9427140 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416244 kB' 'Inactive: 5431428 kB' 'Active(anon): 124892 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142444 kB' 'Mapped: 58372 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317856 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84164 kB' 'KernelStack: 4988 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace trimmed: the scan repeated for HugePages_Total; every key from MemTotal through Unaccepted hit "continue"]
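The two arithmetic guards above, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), encode the invariant the test verifies: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages. Spelled out as a standalone check (sketch with illustrative names, values taken from the snapshots above):

#!/usr/bin/env bash
# Hugepage accounting invariant, as asserted by hugepages.sh@107 above (sketch).
nr_hugepages=1024   # pages the test configured
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1024          # HugePages_Total
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi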
00:05:39.282 11:15:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:39.282 11:15:57 -- setup/common.sh@33 -- # echo 1024
00:05:39.282 11:15:57 -- setup/common.sh@33 -- # return 0
00:05:39.282 11:15:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:39.282 11:15:57 -- setup/hugepages.sh@112 -- # get_nodes
00:05:39.282 11:15:57 -- setup/hugepages.sh@27 -- # local node
00:05:39.282 11:15:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:39.282 11:15:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:39.282 11:15:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:39.283 11:15:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:39.283 11:15:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:39.283 11:15:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:39.283 11:15:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:39.283 11:15:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.283 11:15:57 -- setup/common.sh@18 -- # local node=0
00:05:39.283 11:15:57 -- setup/common.sh@19 -- # local var val
00:05:39.283 11:15:57 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.283 11:15:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.283 11:15:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:39.283 11:15:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:39.283 11:15:57 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.283 11:15:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.283 11:15:57 -- setup/common.sh@31 -- # IFS=': '
00:05:39.283 11:15:57 -- setup/common.sh@31 -- # read -r var val _
00:05:39.283 11:15:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3792220 kB' 'MemUsed: 8454104 kB' 'SwapCached: 0 kB' 'Active: 416140 kB' 'Inactive: 5431428 kB' 'Active(anon): 124788 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 5734120 kB' 'Mapped: 58332 kB' 'AnonPages: 142292 kB' 'Shmem: 2592 kB' 'KernelStack: 4988 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233692 kB' 'Slab: 317856 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace trimmed: the scan repeated for HugePages_Surp over the node0 meminfo keys; every key from MemTotal through HugePages_Free hit "continue"]
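Note the path switch above: because the helper was invoked as get_meminfo HugePages_Surp 0, the [[ -e ]] test succeeds and mem_f becomes the node-local copy /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. On a NUMA machine the same counters can also be inspected directly (standard Linux sysfs paths, shown for reference):

# Per-node meminfo, the file read for node-scoped queries in this trace:
cat /sys/devices/system/node/node0/meminfo
# Per-node hugepage counters are also exposed individually:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages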
00:05:39.284 11:15:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.284 11:15:57 -- setup/common.sh@33 -- # echo 0
00:05:39.284 11:15:57 -- setup/common.sh@33 -- # return 0
00:05:39.284 11:15:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:39.284 11:15:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:39.284 11:15:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:39.284 11:15:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:39.284 11:15:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:39.284
00:05:39.284 real    0m0.723s
00:05:39.284 user    0m0.222s
00:05:39.284 sys     0m0.528s
00:05:39.284 11:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:39.284 ************************************
00:05:39.284 END TEST even_2G_alloc
00:05:39.284 ************************************
00:05:39.284 11:15:57 -- common/autotest_common.sh@10 -- # set +x
00:05:39.284 11:15:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:39.284 11:15:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:39.284 11:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:39.284 11:15:57 -- common/autotest_common.sh@10 -- # set +x
00:05:39.284 ************************************
00:05:39.284 START TEST odd_alloc
00:05:39.284 ************************************
00:05:39.284 11:15:57 -- common/autotest_common.sh@1114 -- # odd_alloc
00:05:39.284 11:15:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:39.284 11:15:57 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:39.284 11:15:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:39.284 11:15:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:39.284 11:15:57 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:39.284 11:15:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.284 11:15:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:39.284 11:15:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.284 11:15:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.284 11:15:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.284 11:15:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:39.284 11:15:57 -- setup/hugepages.sh@83 -- # : 0
00:05:39.284 11:15:57 -- setup/hugepages.sh@84 -- # : 0
00:05:39.284 11:15:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.284 11:15:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:39.284 11:15:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:39.284 11:15:57 -- setup/hugepages.sh@160 -- # setup output
00:05:39.284 11:15:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.284 11:15:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:39.544 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
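The odd_alloc test starting above requests 2098176 kB (HUGEMEM=2049 MB), deliberately half a page past an even boundary: at 2048 kB per hugepage that is 1024.5 pages, and the helper settles on the odd count nr_hugepages=1025. A sketch of that conversion, assuming ceiling rounding (the repo's helper may compute it differently):

#!/usr/bin/env bash
# Sketch: kB request -> odd hugepage count (assumed rounding, illustrative only).
size=2098176             # requested kB, i.e. 2049 MB
default_hugepages=2048   # kB per 2 MB hugepage
(( size >= default_hugepages )) || { echo "request smaller than one page" >&2; exit 1; }
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"   # -> 1025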
00:05:39.805 11:15:57 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:39.805 11:15:57 -- setup/hugepages.sh@89 -- # local node
00:05:39.805 11:15:57 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.805 11:15:57 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.805 11:15:57 -- setup/hugepages.sh@92 -- # local surp
00:05:39.805 11:15:57 -- setup/hugepages.sh@93 -- # local resv
00:05:39.805 11:15:57 -- setup/hugepages.sh@94 -- # local anon
00:05:39.805 11:15:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.805 11:15:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.805 11:15:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.805 11:15:57 -- setup/common.sh@18 -- # local node=
00:05:39.805 11:15:57 -- setup/common.sh@19 -- # local var val
00:05:39.805 11:15:57 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.805 11:15:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.805 11:15:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.805 11:15:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.805 11:15:57 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.805 11:15:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.805 11:15:57 -- setup/common.sh@31 -- # IFS=': '
00:05:39.805 11:15:57 -- setup/common.sh@31 -- # read -r var val _
00:05:39.805 11:15:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3811752 kB' 'MemAvailable: 9446672 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416408 kB' 'Inactive: 5431428 kB' 'Active(anon): 125056 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142572 kB' 'Mapped: 58120 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317868 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84176 kB' 'KernelStack: 5008 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace trimmed: the scan repeated for AnonHugePages; every key from MemTotal through HardwareCorrupted hit "continue"]
00:05:39.806 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.806 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.806 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.806 11:15:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.806 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.806 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.806 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.806 11:15:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.806 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.806 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.806 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.806 11:15:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.806 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.807 11:15:57 -- setup/common.sh@33 -- # echo 0 00:05:39.807 11:15:57 -- setup/common.sh@33 -- # return 0 00:05:39.807 11:15:57 -- setup/hugepages.sh@97 -- # anon=0 00:05:39.807 11:15:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.807 11:15:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.807 11:15:57 -- setup/common.sh@18 -- # local node= 00:05:39.807 11:15:57 -- setup/common.sh@19 -- # local var val 00:05:39.807 11:15:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.807 11:15:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.807 11:15:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.807 11:15:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.807 11:15:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.807 11:15:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3812004 kB' 'MemAvailable: 9446924 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416092 kB' 'Inactive: 5431428 kB' 'Active(anon): 124740 kB' 'Inactive(anon): 0 kB' 
'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142508 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317860 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84168 kB' 'KernelStack: 4992 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:57 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.807 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.807 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- 
setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.808 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.808 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.808 11:15:58 -- setup/common.sh@33 -- # echo 0 00:05:39.808 11:15:58 -- setup/common.sh@33 -- # return 0 00:05:39.808 11:15:58 -- setup/hugepages.sh@99 -- # surp=0 00:05:39.808 11:15:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:39.808 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:39.809 11:15:58 -- setup/common.sh@18 -- # local node= 00:05:39.809 11:15:58 -- setup/common.sh@19 -- # local var val 00:05:39.809 11:15:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.809 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.809 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.809 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.809 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.809 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.809 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.809 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.809 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3812004 kB' 'MemAvailable: 9446924 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 415932 kB' 'Inactive: 5431428 kB' 'Active(anon): 124580 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142316 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317852 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84160 kB' 'KernelStack: 4960 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:39.809 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.809 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.809 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.809 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.809 11:15:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.809 11:15:58 -- setup/common.sh@32 -- # continue 00:05:39.809 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.809 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.809 11:15:58 -- 
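The trace above is setup/common.sh's get_meminfo helper at work: it snapshots meminfo into an array, then walks the fields with IFS=': ' / read -r var val _ until the requested key matches, and echoes that key's value. A minimal standalone sketch of that scan, under the assumption that reading /proc/meminfo directly (rather than through the script's mapfile'd array) is acceptable; the helper name my_get_meminfo is hypothetical:

    #!/usr/bin/env bash
    # Sketch of the per-key scan traced above (setup/common.sh@31-@33):
    # split each "Key: value [kB]" record on ': ' and echo the first match.
    my_get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Non-matching keys fall through, like the 'continue' at @32
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    my_get_meminfo HugePages_Surp   # prints 0 on this box, per the trace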
00:05:39.808 11:15:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.809 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.809 11:15:58 -- setup/common.sh@18 -- # local node=
00:05:39.809 11:15:58 -- setup/common.sh@19 -- # local var val
00:05:39.809 11:15:58 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.809 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.809 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.809 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.809 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.809 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.809 11:15:58 -- setup/common.sh@31 -- # IFS=': '
00:05:39.809 11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:39.809 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3812004 kB' 'MemAvailable: 9446924 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 415932 kB' 'Inactive: 5431428 kB' 'Active(anon): 124580 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142316 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317852 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84160 kB' 'KernelStack: 4960 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
00:05:39.809 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] (same per-field scan as before, now looking for HugePages_Rsvd; every earlier field hits 'continue' at setup/common.sh@32)
00:05:39.810 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.810 11:15:58 -- setup/common.sh@33 -- # echo 0
00:05:39.810 11:15:58 -- setup/common.sh@33 -- # return 0
00:05:39.810 11:15:58 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1025
00:05:39.810 11:15:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
resv_hugepages=0
00:05:39.810 11:15:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:39.810 11:15:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:05:39.810 11:15:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:39.810 11:15:58 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:39.810 11:15:58 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
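The two arithmetic checks just traced (setup/hugepages.sh@107 and @109) assert that the 1025 pages the test requested are fully accounted for: requested == nr_hugepages + surplus + reserved, with surplus and reserved both 0 in this run. A hedged sketch of that accounting, assuming the my_get_meminfo helper sketched earlier is defined:

    # Accounting check in the spirit of setup/hugepages.sh@107-@110
    nr_hugepages=1025                        # page count requested by the test
    surp=$(my_get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(my_get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(my_get_meminfo HugePages_Total)  # 1025 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2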
00:05:40.071 11:15:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:40.071 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:40.071 11:15:58 -- setup/common.sh@18 -- # local node=
00:05:40.071 11:15:58 -- setup/common.sh@19 -- # local var val
00:05:40.071 11:15:58 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.071 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.071 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.071 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.071 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.071 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.071 11:15:58 -- setup/common.sh@31 -- # IFS=': '
00:05:40.071 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3812004 kB' 'MemAvailable: 9446924 kB' 'Buffers: 39952 kB' 'Cached: 5694168 kB' 'SwapCached: 0 kB' 'Active: 416192 kB' 'Inactive: 5431428 kB' 'Active(anon): 124840 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142316 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317852 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84160 kB' 'KernelStack: 4960 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
00:05:40.071 11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:40.071 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] (same per-field scan as before, now looking for HugePages_Total)
00:05:40.073 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:40.073 11:15:58 -- setup/common.sh@33 -- # echo 1025
00:05:40.073 11:15:58 -- setup/common.sh@33 -- # return 0
00:05:40.073 11:15:58 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:40.073 11:15:58 -- setup/hugepages.sh@112 -- # get_nodes
00:05:40.073 11:15:58 -- setup/hugepages.sh@27 -- # local node
00:05:40.073 11:15:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:40.073 11:15:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:40.073 11:15:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:40.073 11:15:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:40.073 11:15:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:40.073 11:15:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
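The next get_meminfo call passes a node argument, so the helper swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo; every line in that file carries a "Node 0 " prefix, which the mem=("${mem[@]#Node +([0-9]) }") step strips with an extglob pattern. A sketch of that path selection and prefix-stripping (extglob must be enabled; the setup scripts evidently rely on it, given the +([0-9]) globs in the trace):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern at setup/common.sh@29
    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node file when it exists (setup/common.sh@23-@24)
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines read e.g. "Node 0 HugePages_Surp: 0"; drop the prefix
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"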
00:05:40.073 11:15:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:40.073 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.073 11:15:58 -- setup/common.sh@18 -- # local node=0
00:05:40.073 11:15:58 -- setup/common.sh@19 -- # local var val
00:05:40.073 11:15:58 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.073 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.073 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:40.073 11:15:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:40.073 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.073 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.073 11:15:58 -- setup/common.sh@31 -- # IFS=': '
00:05:40.073 11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:40.073 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3812784 kB' 'MemUsed: 8433540 kB' 'SwapCached: 0 kB' 'Active: 416400 kB' 'Inactive: 5431428 kB' 'Active(anon): 125048 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431428 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 5734120 kB' 'Mapped: 58112 kB' 'AnonPages: 142600 kB' 'Shmem: 2592 kB' 'KernelStack: 5012 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233692 kB' 'Slab: 317848 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:05:40.073 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (same per-field scan as before, over the node0 snapshot, looking for HugePages_Surp)
00:05:40.074 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.074 11:15:58 -- setup/common.sh@33 -- # echo 0
00:05:40.074 11:15:58 -- setup/common.sh@33 -- # return 0
00:05:40.074 11:15:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.074 11:15:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.074 11:15:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.074 11:15:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1025 expecting 1025
00:05:40.074 11:15:58 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:40.074 11:15:58 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:40.074
00:05:40.074 real 0m0.768s
00:05:40.074 user 0m0.226s
00:05:40.074 sys 0m0.587s
00:05:40.075 11:15:58 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.075 ************************************ 00:05:40.075 END TEST odd_alloc 00:05:40.075 ************************************ 00:05:40.075 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.075 11:15:58 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:40.075 11:15:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.075 11:15:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.075 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.075 ************************************ 00:05:40.075 START TEST custom_alloc 00:05:40.075 ************************************ 00:05:40.075 11:15:58 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:40.075 11:15:58 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:40.075 11:15:58 -- setup/hugepages.sh@169 -- # local node 00:05:40.075 11:15:58 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:40.075 11:15:58 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:40.075 11:15:58 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:40.075 11:15:58 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:40.075 11:15:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:40.075 11:15:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:40.075 11:15:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:40.075 11:15:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:40.075 11:15:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:40.075 11:15:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:40.075 11:15:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:40.075 11:15:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:40.075 11:15:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:40.075 11:15:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:40.075 11:15:58 -- setup/hugepages.sh@83 -- # : 0 00:05:40.075 11:15:58 -- setup/hugepages.sh@84 -- # : 0 00:05:40.075 11:15:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:40.075 11:15:58 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:40.075 11:15:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:40.075 11:15:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:40.075 11:15:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:40.075 11:15:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:40.075 11:15:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:40.075 11:15:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:40.075 11:15:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:40.075 11:15:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:40.075 11:15:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:40.075 11:15:58 -- setup/hugepages.sh@75 -- # for 
_no_nodes in "${!nodes_hp[@]}" 00:05:40.075 11:15:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:40.075 11:15:58 -- setup/hugepages.sh@78 -- # return 0 00:05:40.075 11:15:58 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:40.075 11:15:58 -- setup/hugepages.sh@187 -- # setup output 00:05:40.075 11:15:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.075 11:15:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:40.334 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.596 11:15:58 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:40.596 11:15:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:40.596 11:15:58 -- setup/hugepages.sh@89 -- # local node 00:05:40.596 11:15:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.596 11:15:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.596 11:15:58 -- setup/hugepages.sh@92 -- # local surp 00:05:40.596 11:15:58 -- setup/hugepages.sh@93 -- # local resv 00:05:40.596 11:15:58 -- setup/hugepages.sh@94 -- # local anon 00:05:40.596 11:15:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.596 11:15:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.596 11:15:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.596 11:15:58 -- setup/common.sh@18 -- # local node= 00:05:40.596 11:15:58 -- setup/common.sh@19 -- # local var val 00:05:40.596 11:15:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.596 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.596 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.596 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.596 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.596 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.596 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.596 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.596 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4840288 kB' 'MemAvailable: 10475212 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 416476 kB' 'Inactive: 5431432 kB' 'Active(anon): 125124 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142636 kB' 'Mapped: 58116 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317900 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84208 kB' 'KernelStack: 5008 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:40.596 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.596 11:15:58 -- 
setup/common.sh@32 -- # continue 00:05:40.596 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.596 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # 
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 
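The long run of [[ key == ... ]] / continue records above is one pass of the meminfo scanner: it walks every line of /proc/meminfo (or the per-node file under /sys) until the requested key matches, then echoes that key's value. A minimal standalone sketch of that pattern — simplified for illustration, not the verbatim setup/common.sh source — looks like this:

get_meminfo() {
    # Usage: get_meminfo <key> [node]
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    # Per-node stats live under /sys; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node files prefix each key with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # split "Key: value unit"
        if [[ $var == "$get" ]]; then
            echo "$val"                         # e.g. "512" for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp 0   # → 0 on this box, per the trace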
00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.597 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.597 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 
-- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.598 11:15:58 -- setup/common.sh@33 -- # echo 0 00:05:40.598 11:15:58 -- setup/common.sh@33 -- # return 0 00:05:40.598 11:15:58 -- setup/hugepages.sh@97 -- # anon=0 00:05:40.598 11:15:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.598 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.598 11:15:58 -- setup/common.sh@18 -- # local node= 00:05:40.598 11:15:58 -- setup/common.sh@19 -- # local var val 00:05:40.598 11:15:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.598 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.598 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.598 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.598 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.598 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4840540 kB' 'MemAvailable: 10475464 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 416308 kB' 'Inactive: 5431432 kB' 'Active(anon): 124956 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142508 kB' 'Mapped: 58116 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317900 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84208 kB' 'KernelStack: 5008 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.598 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.598 11:15:58 -- setup/common.sh@32 -- # 
continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 
11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.599 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.599 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.600 11:15:58 -- setup/common.sh@33 -- # echo 0 00:05:40.600 11:15:58 -- setup/common.sh@33 -- # return 0 00:05:40.600 11:15:58 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.600 11:15:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.600 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.600 11:15:58 -- setup/common.sh@18 -- # local node= 00:05:40.600 11:15:58 -- setup/common.sh@19 -- # local var val 00:05:40.600 11:15:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.600 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.600 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.600 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.600 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.600 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4840540 kB' 
'MemAvailable: 10475464 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 416236 kB' 'Inactive: 5431432 kB' 'Active(anon): 124884 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142436 kB' 'Mapped: 58116 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317900 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84208 kB' 'KernelStack: 4992 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # 
continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.600 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.600 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
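The three scans in this stretch boil down to three numbers the verifier needs: anon, surplus, and reserved huge pages, all 0 here. Condensed into a sketch that reuses the get_meminfo helper above (variable names follow the trace; this is a paraphrase of the check, not the hugepages.sh source):

nr_hugepages=512                      # what the test configured
anon=$(get_meminfo AnonHugePages)     # 0 here: THP is not in play
surp=$(get_meminfo HugePages_Surp)    # surplus (overcommitted) pages, 0
resv=$(get_meminfo HugePages_Rsvd)    # reserved-but-unfaulted pages, 0
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo OK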
00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.601 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.601 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.601 11:15:58 -- setup/common.sh@33 -- # echo 0 00:05:40.601 11:15:58 -- setup/common.sh@33 -- # return 0 00:05:40.601 11:15:58 -- setup/hugepages.sh@100 -- # resv=0 00:05:40.601 nr_hugepages=512 00:05:40.601 11:15:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:40.601 resv_hugepages=0 00:05:40.601 11:15:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.601 surplus_hugepages=0 00:05:40.601 11:15:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.601 11:15:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.601 anon_hugepages=0 00:05:40.601 11:15:58 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:40.601 11:15:58 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:40.601 11:15:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.601 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.601 11:15:58 -- setup/common.sh@18 -- # local node= 00:05:40.601 11:15:58 -- setup/common.sh@19 -- # local var val 00:05:40.601 11:15:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.602 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.602 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.602 11:15:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.602 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.602 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4840540 kB' 'MemAvailable: 10475464 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 416268 kB' 'Inactive: 5431432 kB' 'Active(anon): 124916 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142396 kB' 'Mapped: 58112 kB' 'Shmem: 2592 kB' 'KReclaimable: 233692 kB' 'Slab: 317900 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84208 kB' 'KernelStack: 4944 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 367920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.602 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.602 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- 
# continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 
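The loop unrolling above and below is setup/common.sh's get_meminfo helper scanning the captured /proc/meminfo contents key by key until the requested field (HugePages_Total here) matches; the backslash-escaped \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l is simply how bash xtrace prints the quoted right-hand side of the [[ == ]] comparison. A minimal bash sketch of that scan, reconstructed from the traced statements rather than copied from the SPDK source:

#!/usr/bin/env bash
# Reconstructed sketch of the get_meminfo scan traced in this log. The
# structure follows the xtrace (mem_f selection, mapfile, "Node <id> "
# prefix strip, IFS=': ' field split), but it is a paraphrase, not the
# SPDK implementation.
shopt -s extglob   # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    # With a node id, read the per-node statistics from sysfs instead.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <id> "; strip it so
    # both file formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo HugePages_Total   # the scan traced here resolved this to 512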
00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.603 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.603 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.603 11:15:58 -- setup/common.sh@33 -- # echo 512 00:05:40.603 11:15:58 -- setup/common.sh@33 -- # return 0 00:05:40.603 11:15:58 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:40.603 11:15:58 -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.603 11:15:58 -- setup/hugepages.sh@27 -- # local node 00:05:40.603 11:15:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.603 11:15:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:40.603 11:15:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.603 11:15:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.603 11:15:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.604 11:15:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.604 11:15:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.604 11:15:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.604 11:15:58 -- setup/common.sh@18 -- # local node=0 00:05:40.604 11:15:58 -- setup/common.sh@19 -- # local var val 00:05:40.604 11:15:58 -- setup/common.sh@20 -- # local mem_f 
mem 00:05:40.604 11:15:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.604 11:15:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.604 11:15:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.604 11:15:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.604 11:15:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4840540 kB' 'MemUsed: 7405784 kB' 'SwapCached: 0 kB' 'Active: 416040 kB' 'Inactive: 5431432 kB' 'Active(anon): 124688 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 5734124 kB' 'Mapped: 58112 kB' 'AnonPages: 142176 kB' 'Shmem: 2592 kB' 'KernelStack: 4980 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233692 kB' 'Slab: 317900 kB' 'SReclaimable: 233692 kB' 'SUnreclaim: 84208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.604 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.604 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue 00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.605 
11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue
00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': '
00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue
00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': '
00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # continue
00:05:40.605 11:15:58 -- setup/common.sh@31 -- # IFS=': '
00:05:40.605 11:15:58 -- setup/common.sh@31 -- # read -r var val _
00:05:40.605 11:15:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.605 11:15:58 -- setup/common.sh@33 -- # echo 0
00:05:40.605 11:15:58 -- setup/common.sh@33 -- # return 0
00:05:40.605 11:15:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.605 11:15:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.605 11:15:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.605 11:15:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:40.605 node0=512 expecting 512
11:15:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:40.605 11:15:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:40.605
00:05:40.605 real 0m0.609s
00:05:40.605 user 0m0.248s
00:05:40.605 sys 0m0.401s
00:05:40.605 11:15:58 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:40.605 11:15:58 -- common/autotest_common.sh@10 -- # set +x
00:05:40.605 ************************************
00:05:40.605 END TEST custom_alloc
00:05:40.605 ************************************
00:05:40.605 11:15:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:40.605 11:15:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:40.605 11:15:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:40.605 11:15:58 -- common/autotest_common.sh@10 -- # set +x
00:05:40.605 ************************************
00:05:40.605 START TEST no_shrink_alloc
00:05:40.605 ************************************
00:05:40.605 11:15:58 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:05:40.605 11:15:58 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:40.605 11:15:58 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:40.605 11:15:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:40.605 11:15:58 -- setup/hugepages.sh@51 -- # shift
00:05:40.605 11:15:58 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:40.605 11:15:58 -- setup/hugepages.sh@52 -- # local node_ids
00:05:40.605 11:15:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:40.605 11:15:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:40.605 11:15:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:40.605 11:15:58 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:40.605 11:15:58 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:40.605 11:15:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:40.605 11:15:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:40.605 11:15:58 -- setup/hugepages.sh@67 -- # nodes_test=()
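The get_test_nr_hugepages 2097152 0 call traced above arrives at nr_hugepages=1024, which is consistent with dividing the requested pool size in kB by the default huge page size of 2048 kB reported in the meminfo snapshots. A short sketch of that sizing arithmetic under that assumption; the awk extraction and the guard message are illustrative, not SPDK's code:

#!/usr/bin/env bash
# Sizing arithmetic inferred from the trace: 2097152 kB / 2048 kB = 1024
# huge pages, matching the nr_hugepages=1024 assignment above. Reading
# Hugepagesize from /proc/meminfo is an assumption about where the
# default comes from, not a quote of setup/hugepages.sh.
size=2097152   # requested pool in kB (2 GiB), the first argument in the trace
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
(( size >= default_hugepages )) || { echo "request below one huge page" >&2; exit 1; }
nr_hugepages=$(( size / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"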
00:05:40.605 11:15:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:40.605 11:15:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:40.605 11:15:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:40.605 11:15:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:40.605 11:15:58 -- setup/hugepages.sh@73 -- # return 0
00:05:40.605 11:15:58 -- setup/hugepages.sh@198 -- # setup output
00:05:40.605 11:15:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.605 11:15:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:41.122 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.384 11:15:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:41.384 11:15:59 -- setup/hugepages.sh@89 -- # local node
00:05:41.384 11:15:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.384 11:15:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.384 11:15:59 -- setup/hugepages.sh@92 -- # local surp
00:05:41.384 11:15:59 -- setup/hugepages.sh@93 -- # local resv
00:05:41.384 11:15:59 -- setup/hugepages.sh@94 -- # local anon
00:05:41.384 11:15:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.384 11:15:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.384 11:15:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.384 11:15:59 -- setup/common.sh@18 -- # local node=
00:05:41.384 11:15:59 -- setup/common.sh@19 -- # local var val
00:05:41.384 11:15:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.384 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.384 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.384 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.384 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.384 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.384 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3793108 kB' 'MemAvailable: 9428024 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 415200 kB' 'Inactive: 5431432 kB' 'Active(anon): 123848 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141392 kB' 'Mapped: 57184 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317812 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84128 kB' 'KernelStack: 4992 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': '
00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _
00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ MemTotal ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.384 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.384 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 
11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # 
continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.385 11:15:59 -- setup/common.sh@33 -- # echo 0 00:05:41.385 11:15:59 -- setup/common.sh@33 -- # return 0 00:05:41.385 11:15:59 -- setup/hugepages.sh@97 -- # anon=0 00:05:41.385 11:15:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.385 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.385 11:15:59 -- setup/common.sh@18 -- # local node= 00:05:41.385 11:15:59 -- setup/common.sh@19 -- # local var val 00:05:41.385 11:15:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.385 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.385 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.385 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.385 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.385 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3793360 kB' 'MemAvailable: 9428276 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 415140 kB' 'Inactive: 5431432 kB' 'Active(anon): 123788 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141336 kB' 'Mapped: 57180 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317808 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84124 kB' 'KernelStack: 4960 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 
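The field scan in progress here is verify_nr_hugepages collecting HugePages_Surp, with HugePages_Rsvd read just below. The earlier custom_alloc pass showed the acceptance test as (( 512 == nr_hugepages + surp + resv )): the pool verifies only when the meminfo total equals the requested pages plus surplus plus reserved. A small sketch of that bookkeeping, with expected=1024 for this no_shrink_alloc run and awk standing in for the traced get_meminfo scans:

#!/usr/bin/env bash
# Bookkeeping sketch modeled on the (( 512 == nr_hugepages + surp + resv ))
# check visible earlier in this log; expected=1024 is what no_shrink_alloc
# requested. The awk one-liners replace the traced get_meminfo calls.
expected=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( total == expected + surp + resv )); then
    echo "hugepage pool verified: total=$total surp=$surp resv=$resv"
else
    echo "hugepage pool mismatch: total=$total vs expected+surp+resv=$((expected + surp + resv))" >&2
fi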
00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.385 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.385 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.385 11:15:59 -- setup/common.sh@33 -- # echo 0 00:05:41.385 11:15:59 -- setup/common.sh@33 -- # return 0 00:05:41.386 11:15:59 -- setup/hugepages.sh@99 -- # surp=0 00:05:41.386 11:15:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.386 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.386 11:15:59 -- setup/common.sh@18 -- # local node= 00:05:41.386 11:15:59 -- setup/common.sh@19 -- # local var val 00:05:41.386 11:15:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.386 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.386 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.386 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.386 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.386 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.386 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.386 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.386 11:15:59 -- 
00:05:41.386 11:15:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:41.386 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.386 11:15:59 -- setup/common.sh@18 -- # local node=
00:05:41.386 11:15:59 -- setup/common.sh@19 -- # local var val
00:05:41.386 11:15:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.386 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.386 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.386 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.386 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.386 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.386 11:15:59 -- setup/common.sh@31 -- # IFS=': '
00:05:41.386 11:15:59 -- setup/common.sh@31 -- # read -r var val _
00:05:41.386 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3793360 kB' 'MemAvailable: 9428276 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 414764 kB' 'Inactive: 5431432 kB' 'Active(anon): 123412 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 140968 kB' 'Mapped: 57180 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317808 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84124 kB' 'KernelStack: 4960 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 scans MemTotal through HugePages_Free and continues past every key that is not HugePages_Rsvd]
00:05:41.386 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.386 11:15:59 -- setup/common.sh@33 -- # echo 0
00:05:41.386 11:15:59 -- setup/common.sh@33 -- # return 0
00:05:41.386 11:15:59 -- setup/hugepages.sh@100 -- # resv=0
00:05:41.386 11:15:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:41.386 11:15:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:41.386 11:15:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:41.386 11:15:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:41.386 11:15:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.386 11:15:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
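The two arithmetic tests that follow resv=0 are the heart of the verification: the kernel's hugepage pool must be explainable as the requested pages plus any surplus and reserved pages, and here it must also equal the request exactly. Restated compactly (a sketch assuming the get_meminfo helper above; variable names follow the trace):

  nr_hugepages=1024                       # pages the test requested
  surp=$(get_meminfo HugePages_Surp)      # pages allocated beyond the static pool
  resv=$(get_meminfo HugePages_Rsvd)      # pages promised to mappings, not yet faulted in
  total=$(get_meminfo HugePages_Total)
  (( total == nr_hugepages + surp + resv ))   # pool fully accounted for
  (( total == nr_hugepages ))                 # and exactly the requested size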
00:05:41.386 11:15:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:41.386 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:41.386 11:15:59 -- setup/common.sh@18 -- # local node=
00:05:41.386 11:15:59 -- setup/common.sh@19 -- # local var val
00:05:41.386 11:15:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.386 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.386 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.386 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.386 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.386 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.386 11:15:59 -- setup/common.sh@31 -- # IFS=': '
00:05:41.386 11:15:59 -- setup/common.sh@31 -- # read -r var val _
00:05:41.386 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3793360 kB' 'MemAvailable: 9428276 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 414764 kB' 'Inactive: 5431432 kB' 'Active(anon): 123412 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 140968 kB' 'Mapped: 57180 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317808 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84124 kB' 'KernelStack: 4960 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 scans MemTotal through HugePages_Free and continues past every key that is not HugePages_Total]
00:05:41.387 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.387 11:15:59 -- setup/common.sh@33 -- # echo 1024
00:05:41.387 11:15:59 -- setup/common.sh@33 -- # return 0
00:05:41.387 11:15:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.387 11:15:59 -- setup/hugepages.sh@112 -- # get_nodes
00:05:41.387 11:15:59 -- setup/hugepages.sh@27 -- # local node
00:05:41.387 11:15:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.387 11:15:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:41.387 11:15:59 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:41.387 11:15:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
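get_nodes, traced above, discovers the NUMA topology by globbing sysfs; node0 is the only hit here, so no_nodes=1. A sketch of that loop (the glob and the ${node##*node} key derivation are verbatim from the trace; filling nodes_sys from the per-node hugepage count is an assumption, 1024 being the value the trace shows):

  shopt -s extglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # "/sys/.../node0" -> array index 0
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 ))   # refuse to continue on a machine with no visible nodes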
00:05:41.387 11:15:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:41.387 11:15:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:41.387 11:15:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:41.387 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.387 11:15:59 -- setup/common.sh@18 -- # local node=0
00:05:41.387 11:15:59 -- setup/common.sh@19 -- # local var val
00:05:41.387 11:15:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.387 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.387 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:41.387 11:15:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:41.387 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.387 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.387 11:15:59 -- setup/common.sh@31 -- # IFS=': '
00:05:41.387 11:15:59 -- setup/common.sh@31 -- # read -r var val _
00:05:41.387 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3793880 kB' 'MemUsed: 8452444 kB' 'SwapCached: 0 kB' 'Active: 414956 kB' 'Inactive: 5431432 kB' 'Active(anon): 123604 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 5734124 kB' 'Mapped: 57180 kB' 'AnonPages: 141160 kB' 'Shmem: 2592 kB' 'KernelStack: 4944 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233684 kB' 'Slab: 317808 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 scans the node0 keys and continues past every key that is not HugePages_Surp]
00:05:41.388 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.388 11:15:59 -- setup/common.sh@33 -- # echo 0
00:05:41.388 11:15:59 -- setup/common.sh@33 -- # return 0
00:05:41.388 11:15:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:41.388 11:15:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:41.388 11:15:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.388 11:15:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.388 11:15:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:41.388 11:15:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:41.388 11:15:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:41.388 11:15:59 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:41.388 11:15:59 -- setup/hugepages.sh@202 -- # setup output
00:05:41.388 11:15:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.388 11:15:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:41.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:41.647 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.647 INFO: Requested 512 hugepages but 1024 already allocated on node0
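setup.sh was asked for 512 pages (NRHUGE=512) but found 1024 already allocated on node0 and left the pool untouched. The INFO line and the sysfs knob are real; the decision logic below is an assumed sketch of that idempotence, not setup.sh's actual code:

  want=${NRHUGE:-512}
  knob=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  have=$(<"$knob")
  if (( have >= want )); then
      # never shrink an existing pool out from under running tests
      echo "INFO: Requested $want hugepages but $have already allocated on node0"
  else
      echo "$want" > "$knob"   # grow the per-node pool to the request
  fi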
00:05:41.647 11:15:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:41.647 11:15:59 -- setup/hugepages.sh@89 -- # local node
00:05:41.647 11:15:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.647 11:15:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.647 11:15:59 -- setup/hugepages.sh@92 -- # local surp
00:05:41.647 11:15:59 -- setup/hugepages.sh@93 -- # local resv
00:05:41.647 11:15:59 -- setup/hugepages.sh@94 -- # local anon
00:05:41.647 11:15:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.647 11:15:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.647 11:15:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.647 11:15:59 -- setup/common.sh@18 -- # local node=
00:05:41.647 11:15:59 -- setup/common.sh@19 -- # local var val
00:05:41.647 11:15:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.647 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.647 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.647 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.647 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.647 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.647 11:15:59 -- setup/common.sh@31 -- # IFS=': '
00:05:41.647 11:15:59 -- setup/common.sh@31 -- # read -r var val _
00:05:41.647 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3806564 kB' 'MemAvailable: 9441480 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 415132 kB' 'Inactive: 5431432 kB' 'Active(anon): 123780 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141572 kB' 'Mapped: 57232 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317728 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84044 kB' 'KernelStack: 4976 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 scans MemTotal through HardwareCorrupted and continues past every key that is not AnonHugePages]
00:05:41.648 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.648 11:15:59 -- setup/common.sh@33 -- # echo 0
00:05:41.648 11:15:59 -- setup/common.sh@33 -- # return 0
00:05:41.648 11:15:59 -- setup/hugepages.sh@97 -- # anon=0
00:05:41.648 11:15:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:41.648 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.648 11:15:59 -- setup/common.sh@18 -- # local node=
00:05:41.648 11:15:59 -- setup/common.sh@19 -- # local var val
00:05:41.648 11:15:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.648 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.648 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.648 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.648 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.648 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.648 11:15:59 -- setup/common.sh@31 -- # IFS=': '
00:05:41.648 11:15:59 -- setup/common.sh@31 -- # read -r var val _
00:05:41.648 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3806564 kB' 'MemAvailable: 9441480 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 415012 kB' 'Inactive: 5431432 kB' 'Active(anon): 123660 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141192 kB' 'Mapped: 57272 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317728 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84044 kB' 'KernelStack: 4944 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 resumes the key-by-key scan for HugePages_Surp]
00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.649 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.649 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.650 11:15:59 -- setup/common.sh@31 
-- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.650 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.650 11:15:59 -- setup/common.sh@33 -- # echo 0 00:05:41.650 11:15:59 -- setup/common.sh@33 -- # return 0 00:05:41.650 11:15:59 -- setup/hugepages.sh@99 -- # surp=0 00:05:41.650 11:15:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.650 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.650 11:15:59 -- setup/common.sh@18 -- # local node= 00:05:41.650 11:15:59 -- setup/common.sh@19 -- # local var val 00:05:41.650 11:15:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.650 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.650 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.650 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.650 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.650 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.650 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3807068 kB' 'MemAvailable: 9441984 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 414888 kB' 'Inactive: 5431432 kB' 'Active(anon): 123536 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141068 kB' 'Mapped: 57232 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317736 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84052 kB' 'KernelStack: 4896 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # 
continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 
11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.911 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.911 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.912 11:15:59 -- setup/common.sh@33 -- # echo 0 00:05:41.912 11:15:59 -- setup/common.sh@33 -- # return 0 00:05:41.912 11:15:59 -- setup/hugepages.sh@100 -- # resv=0 00:05:41.912 nr_hugepages=1024 00:05:41.912 11:15:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:41.912 resv_hugepages=0 00:05:41.912 surplus_hugepages=0 00:05:41.912 11:15:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.912 11:15:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.912 anon_hugepages=0 00:05:41.912 11:15:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.912 11:15:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.912 11:15:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:41.912 11:15:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.912 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.912 11:15:59 -- setup/common.sh@18 -- # local node= 00:05:41.912 11:15:59 -- setup/common.sh@19 -- # local var val 00:05:41.912 11:15:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.912 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.912 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.912 11:15:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.912 11:15:59 -- setup/common.sh@28 -- # mapfile -t 
mem 00:05:41.912 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3807068 kB' 'MemAvailable: 9441984 kB' 'Buffers: 39952 kB' 'Cached: 5694172 kB' 'SwapCached: 0 kB' 'Active: 414760 kB' 'Inactive: 5431432 kB' 'Active(anon): 123408 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 140948 kB' 'Mapped: 57232 kB' 'Shmem: 2592 kB' 'KReclaimable: 233684 kB' 'Slab: 317736 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84052 kB' 'KernelStack: 4928 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 356548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 5091328 kB' 'DirectMap1G: 9437184 kB' 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.912 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.912 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # 
continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.913 11:15:59 -- setup/common.sh@33 -- # echo 1024 00:05:41.913 11:15:59 -- setup/common.sh@33 -- # return 0 00:05:41.913 11:15:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.913 11:15:59 -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.913 11:15:59 -- setup/hugepages.sh@27 -- # local node 00:05:41.913 11:15:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.913 11:15:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:41.913 11:15:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.913 11:15:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.913 11:15:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.913 11:15:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.913 11:15:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.913 11:15:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.913 11:15:59 -- setup/common.sh@18 -- # local node=0 00:05:41.913 11:15:59 -- setup/common.sh@19 -- # local var val 00:05:41.913 11:15:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.913 11:15:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.913 11:15:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.913 11:15:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.913 11:15:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.913 11:15:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 3807068 kB' 'MemUsed: 8439256 kB' 'SwapCached: 0 kB' 'Active: 414648 kB' 'Inactive: 5431432 kB' 'Active(anon): 123296 kB' 'Inactive(anon): 0 kB' 'Active(file): 291352 kB' 'Inactive(file): 5431432 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5734124 kB' 'Mapped: 57232 kB' 'AnonPages: 140836 kB' 'Shmem: 2592 kB' 'KernelStack: 4896 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233684 kB' 'Slab: 317736 kB' 'SReclaimable: 233684 kB' 'SUnreclaim: 84052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 
00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.913 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.913 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- 
setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # continue 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.914 11:15:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.914 11:15:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.914 11:15:59 -- setup/common.sh@33 -- # echo 0 00:05:41.914 11:15:59 -- setup/common.sh@33 -- # return 0 00:05:41.914 11:15:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.914 11:15:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.914 11:15:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.914 11:15:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.914 node0=1024 expecting 1024 00:05:41.914 11:15:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:41.914 11:15:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:41.914 00:05:41.914 real 0m1.144s 00:05:41.914 user 0m0.462s 00:05:41.914 sys 0m0.761s 00:05:41.914 11:15:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.914 ************************************ 00:05:41.914 END TEST no_shrink_alloc 00:05:41.914 ************************************ 00:05:41.914 11:15:59 -- common/autotest_common.sh@10 -- # set +x 
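The no_shrink_alloc test above leans entirely on the get_meminfo helper from setup/common.sh, and the xtrace makes its mechanics visible: the helper reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node argument is given), splits each 'key: value' line on IFS=': ', and walks the keys until the requested one matches — which is why every non-matching key shows up in the trace as a bare `continue`. A minimal sketch of that loop, reconstructed from the trace rather than copied verbatim from the SPDK source (the real helper uses mapfile and a pattern strip on the whole array; this version assumes numeric values):

get_meminfo() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo line var val rest
    # A node argument switches to the per-node counters, as in the
    # node=0 call near the end of the trace above.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node "$node" }               # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$mem_f"
    return 1
}

With that helper, the assertion the test builds up to is the hugepage accounting identity: HugePages_Total must equal nr_hugepages plus the surplus and reserved counts just collected (1024 == 1024 + 0 + 0 here), and the same total is then re-checked per NUMA node, hence the 'node0=1024 expecting 1024' line.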
00:05:41.914 11:15:59 -- setup/hugepages.sh@217 -- # clear_hp 00:05:41.914 11:15:59 -- setup/hugepages.sh@37 -- # local node hp 00:05:41.914 11:15:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:41.914 11:15:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.914 11:15:59 -- setup/hugepages.sh@41 -- # echo 0 00:05:41.914 11:15:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.914 11:15:59 -- setup/hugepages.sh@41 -- # echo 0 00:05:41.914 11:15:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:41.914 11:15:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:41.914 00:05:41.914 real 0m5.369s 00:05:41.914 user 0m1.947s 00:05:41.914 sys 0m3.612s 00:05:41.914 11:16:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.914 ************************************ 00:05:41.914 END TEST hugepages 00:05:41.914 ************************************ 00:05:41.914 11:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.914 11:16:00 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:41.914 11:16:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.914 11:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.914 11:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.914 ************************************ 00:05:41.914 START TEST driver 00:05:41.914 ************************************ 00:05:41.914 11:16:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:41.914 * Looking for test storage... 00:05:41.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:41.914 11:16:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.914 11:16:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.914 11:16:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.174 11:16:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.174 11:16:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.174 11:16:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.174 11:16:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.174 11:16:00 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.174 11:16:00 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.174 11:16:00 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.174 11:16:00 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.174 11:16:00 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.174 11:16:00 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.174 11:16:00 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.174 11:16:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.174 11:16:00 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.174 11:16:00 -- scripts/common.sh@344 -- # : 1 00:05:42.174 11:16:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.175 11:16:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.175 11:16:00 -- scripts/common.sh@364 -- # decimal 1 00:05:42.175 11:16:00 -- scripts/common.sh@352 -- # local d=1 00:05:42.175 11:16:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.175 11:16:00 -- scripts/common.sh@354 -- # echo 1 00:05:42.175 11:16:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.175 11:16:00 -- scripts/common.sh@365 -- # decimal 2 00:05:42.175 11:16:00 -- scripts/common.sh@352 -- # local d=2 00:05:42.175 11:16:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.175 11:16:00 -- scripts/common.sh@354 -- # echo 2 00:05:42.175 11:16:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.175 11:16:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.175 11:16:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.175 11:16:00 -- scripts/common.sh@367 -- # return 0 00:05:42.175 11:16:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.175 11:16:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.175 --rc genhtml_branch_coverage=1 00:05:42.175 --rc genhtml_function_coverage=1 00:05:42.175 --rc genhtml_legend=1 00:05:42.175 --rc geninfo_all_blocks=1 00:05:42.175 --rc geninfo_unexecuted_blocks=1 00:05:42.175 00:05:42.175 ' 00:05:42.175 11:16:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.175 --rc genhtml_branch_coverage=1 00:05:42.175 --rc genhtml_function_coverage=1 00:05:42.175 --rc genhtml_legend=1 00:05:42.175 --rc geninfo_all_blocks=1 00:05:42.175 --rc geninfo_unexecuted_blocks=1 00:05:42.175 00:05:42.175 ' 00:05:42.175 11:16:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.175 --rc genhtml_branch_coverage=1 00:05:42.175 --rc genhtml_function_coverage=1 00:05:42.175 --rc genhtml_legend=1 00:05:42.175 --rc geninfo_all_blocks=1 00:05:42.175 --rc geninfo_unexecuted_blocks=1 00:05:42.175 00:05:42.175 ' 00:05:42.175 11:16:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.175 --rc genhtml_branch_coverage=1 00:05:42.175 --rc genhtml_function_coverage=1 00:05:42.175 --rc genhtml_legend=1 00:05:42.175 --rc geninfo_all_blocks=1 00:05:42.175 --rc geninfo_unexecuted_blocks=1 00:05:42.175 00:05:42.175 ' 00:05:42.175 11:16:00 -- setup/driver.sh@68 -- # setup reset 00:05:42.175 11:16:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.175 11:16:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.752 11:16:00 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:42.752 11:16:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.752 11:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.752 11:16:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.752 ************************************ 00:05:42.752 START TEST guess_driver 00:05:42.752 ************************************ 00:05:42.752 11:16:00 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:42.752 11:16:00 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:42.752 11:16:00 -- setup/driver.sh@47 -- # local fail=0 00:05:42.752 11:16:00 -- setup/driver.sh@49 -- # pick_driver 00:05:42.752 11:16:00 -- setup/driver.sh@36 -- # vfio 
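The `lt 1.15 2` trace above (scripts/common.sh) is a component-wise version comparison: both versions are split on `.`, `-` or `:` and compared numerically up to the longer component count. A condensed sketch under that reading; the real cmp_versions handles more operators, and the `<<<` input redirection is an assumption, since the trace does not show where `read -ra` reads from:

```bash
# Condensed sketch of cmp_versions/lt as traced; missing components
# default to 0 here (an assumption, the trace never exercises that path).
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:          # split on '.', '-' or ':', as in the trace
    local op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]       # equal all the way through
}

lt 1.15 2 && echo 'lcov older than 2.x: use the legacy --rc option names'
```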
00:05:42.752 11:16:00 -- setup/driver.sh@21 -- # local iommu_groups 00:05:42.752 11:16:00 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:42.752 11:16:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:42.752 11:16:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:42.752 11:16:00 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:42.752 11:16:00 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:42.752 11:16:00 -- setup/driver.sh@32 -- # return 1 00:05:42.752 11:16:00 -- setup/driver.sh@38 -- # uio 00:05:42.752 11:16:00 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:42.752 11:16:00 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:42.752 11:16:00 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:42.752 11:16:00 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:42.752 11:16:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:05:42.752 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:05:42.752 11:16:00 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:42.752 11:16:00 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:42.752 11:16:00 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:42.752 Looking for driver=uio_pci_generic 11:16:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:42.752 11:16:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.752 11:16:00 -- setup/driver.sh@45 -- # setup output config 00:05:42.752 11:16:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.752 11:16:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.011 11:16:01 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:43.011 11:16:01 -- setup/driver.sh@58 -- # continue 00:05:43.011 11:16:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:43.011 11:16:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:43.011 11:16:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:43.011 11:16:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:43.579 11:16:01 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:43.579 11:16:01 -- setup/driver.sh@65 -- # setup reset 00:05:43.579 11:16:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.579 11:16:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.145 00:05:44.145 real 0m1.520s 00:05:44.145 user 0m0.319s 00:05:44.145 sys 0m1.237s 00:05:44.145 11:16:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.145 11:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:44.145 ************************************ 00:05:44.145 END TEST guess_driver 00:05:44.145 ************************************ 00:05:44.145 00:05:44.145 real 0m2.209s 00:05:44.145 user 0m0.584s 00:05:44.145 sys 0m1.728s 00:05:44.145 11:16:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.145 11:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:44.145 ************************************ 00:05:44.145 END TEST driver 00:05:44.145 ************************************ 00:05:44.145 11:16:02 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:44.145 11:16:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.145 11:16:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.145 11:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:44.145 ************************************ 00:05:44.145 START TEST devices 00:05:44.145 ************************************ 00:05:44.145 11:16:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:44.145 * Looking for test storage... 00:05:44.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:44.145 11:16:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.404 11:16:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.404 11:16:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.404 11:16:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.404 11:16:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.404 11:16:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.404 11:16:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.404 11:16:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.404 11:16:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.404 11:16:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.404 11:16:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.404 11:16:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.404 11:16:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.404 11:16:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.404 11:16:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.404 11:16:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.404 11:16:02 -- scripts/common.sh@344 -- # : 1 00:05:44.404 11:16:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.404 11:16:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.404 11:16:02 -- scripts/common.sh@364 -- # decimal 1 00:05:44.404 11:16:02 -- scripts/common.sh@352 -- # local d=1 00:05:44.404 11:16:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.404 11:16:02 -- scripts/common.sh@354 -- # echo 1 00:05:44.404 11:16:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.404 11:16:02 -- scripts/common.sh@365 -- # decimal 2 00:05:44.404 11:16:02 -- scripts/common.sh@352 -- # local d=2 00:05:44.404 11:16:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.404 11:16:02 -- scripts/common.sh@354 -- # echo 2 00:05:44.404 11:16:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.404 11:16:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.404 11:16:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.404 11:16:02 -- scripts/common.sh@367 -- # return 0 00:05:44.404 11:16:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.404 11:16:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.404 --rc genhtml_branch_coverage=1 00:05:44.404 --rc genhtml_function_coverage=1 00:05:44.404 --rc genhtml_legend=1 00:05:44.404 --rc geninfo_all_blocks=1 00:05:44.404 --rc geninfo_unexecuted_blocks=1 00:05:44.404 00:05:44.404 ' 00:05:44.404 11:16:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.404 --rc genhtml_branch_coverage=1 00:05:44.404 --rc genhtml_function_coverage=1 00:05:44.404 --rc genhtml_legend=1 00:05:44.404 --rc geninfo_all_blocks=1 00:05:44.404 --rc geninfo_unexecuted_blocks=1 00:05:44.404 00:05:44.404 ' 
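Both the driver and devices tests above keep hitting the same `setup` wrapper (setup/common.sh@9-12): mode "output" forwards the remaining arguments to scripts/setup.sh so the caller can parse its stdout, while any other mode is passed through as-is. A sketch reconstructed from the @9/@10/@12 trace lines alone:

```bash
rootdir=/home/vagrant/spdk_repo/spdk   # path as it appears in the log

# Reconstructed from the trace; error handling and extras omitted.
setup() {
    local mode=$1
    shift
    if [[ $mode == output ]]; then
        "$rootdir/scripts/setup.sh" "$@"      # e.g. `setup output config`
    else
        "$rootdir/scripts/setup.sh" "$mode"   # e.g. `setup reset`
    fi
}
```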
00:05:44.404 11:16:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.404 --rc genhtml_branch_coverage=1 00:05:44.404 --rc genhtml_function_coverage=1 00:05:44.404 --rc genhtml_legend=1 00:05:44.404 --rc geninfo_all_blocks=1 00:05:44.404 --rc geninfo_unexecuted_blocks=1 00:05:44.404 00:05:44.404 ' 00:05:44.404 11:16:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.404 --rc genhtml_branch_coverage=1 00:05:44.404 --rc genhtml_function_coverage=1 00:05:44.404 --rc genhtml_legend=1 00:05:44.404 --rc geninfo_all_blocks=1 00:05:44.404 --rc geninfo_unexecuted_blocks=1 00:05:44.404 00:05:44.404 ' 00:05:44.404 11:16:02 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:44.404 11:16:02 -- setup/devices.sh@192 -- # setup reset 00:05:44.404 11:16:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.404 11:16:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.970 11:16:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:44.970 11:16:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:44.970 11:16:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:44.970 11:16:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:44.970 11:16:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:44.970 11:16:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:44.970 11:16:02 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:44.970 11:16:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:44.970 11:16:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:44.970 11:16:02 -- setup/devices.sh@196 -- # blocks=() 00:05:44.970 11:16:02 -- setup/devices.sh@196 -- # declare -a blocks 00:05:44.970 11:16:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:44.970 11:16:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:44.970 11:16:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:44.970 11:16:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:44.970 11:16:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:44.970 11:16:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:44.970 11:16:02 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:44.970 11:16:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:44.970 11:16:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:44.970 11:16:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:44.970 11:16:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:44.970 No valid GPT data, bailing 00:05:44.970 11:16:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:44.970 11:16:03 -- scripts/common.sh@393 -- # pt= 00:05:44.970 11:16:03 -- scripts/common.sh@394 -- # return 1 00:05:44.970 11:16:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:44.970 11:16:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:44.970 11:16:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:44.970 11:16:03 -- setup/common.sh@80 -- # echo 5368709120 00:05:44.970 11:16:03 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:44.970 11:16:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:44.970 11:16:03 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:44.970 11:16:03 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:44.970 11:16:03 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:44.970 11:16:03 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:44.970 11:16:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.970 11:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.970 11:16:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.970 ************************************ 00:05:44.970 START TEST nvme_mount 00:05:44.970 ************************************ 00:05:44.970 11:16:03 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:44.970 11:16:03 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:44.970 11:16:03 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:44.970 11:16:03 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.970 11:16:03 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.970 11:16:03 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:44.970 11:16:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:44.970 11:16:03 -- setup/common.sh@40 -- # local part_no=1 00:05:44.970 11:16:03 -- setup/common.sh@41 -- # local size=1073741824 00:05:44.970 11:16:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.970 11:16:03 -- setup/common.sh@44 -- # parts=() 00:05:44.970 11:16:03 -- setup/common.sh@44 -- # local parts 00:05:44.970 11:16:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.970 11:16:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.970 11:16:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.970 11:16:03 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.970 11:16:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.970 11:16:03 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:44.970 11:16:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:44.970 11:16:03 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:45.904 Creating new GPT entries in memory. 00:05:45.904 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.904 other utilities. 00:05:45.904 11:16:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.904 11:16:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.904 11:16:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.904 11:16:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.904 11:16:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.280 Creating new GPT entries in memory. 00:05:47.280 The operation has completed successfully. 
00:05:47.280 11:16:05 -- setup/common.sh@57 -- # (( part++ )) 00:05:47.280 11:16:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.280 11:16:05 -- setup/common.sh@62 -- # wait 67128 00:05:47.280 11:16:05 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.280 11:16:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:47.280 11:16:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.280 11:16:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:47.280 11:16:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:47.280 11:16:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.280 11:16:05 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.280 11:16:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:47.280 11:16:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:47.280 11:16:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.280 11:16:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.280 11:16:05 -- setup/devices.sh@53 -- # local found=0 00:05:47.280 11:16:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.280 11:16:05 -- setup/devices.sh@56 -- # : 00:05:47.280 11:16:05 -- setup/devices.sh@59 -- # local pci status 00:05:47.280 11:16:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.280 11:16:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:47.280 11:16:05 -- setup/devices.sh@47 -- # setup output config 00:05:47.280 11:16:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.280 11:16:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.280 11:16:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.280 11:16:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:47.280 11:16:05 -- setup/devices.sh@63 -- # found=1 00:05:47.280 11:16:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.280 11:16:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.280 11:16:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.280 11:16:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:47.280 11:16:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.846 11:16:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.846 11:16:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:47.846 11:16:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.846 11:16:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.846 11:16:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.846 11:16:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:47.846 11:16:06 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.846 11:16:06 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.846 11:16:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.846 11:16:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:47.846 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:47.846 11:16:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.846 11:16:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:48.104 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:48.105 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:48.105 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:48.105 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:48.105 11:16:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:48.105 11:16:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:48.105 11:16:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.105 11:16:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:48.105 11:16:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:48.105 11:16:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.105 11:16:06 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.105 11:16:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:48.105 11:16:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:48.105 11:16:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.105 11:16:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.105 11:16:06 -- setup/devices.sh@53 -- # local found=0 00:05:48.105 11:16:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.105 11:16:06 -- setup/devices.sh@56 -- # : 00:05:48.105 11:16:06 -- setup/devices.sh@59 -- # local pci status 00:05:48.105 11:16:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:48.105 11:16:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.105 11:16:06 -- setup/devices.sh@47 -- # setup output config 00:05:48.105 11:16:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.105 11:16:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.364 11:16:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.364 11:16:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:48.364 11:16:06 -- setup/devices.sh@63 -- # found=1 00:05:48.364 11:16:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.364 11:16:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.364 11:16:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.623 11:16:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.623 11:16:06 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.881 11:16:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.881 11:16:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:48.881 11:16:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.882 11:16:07 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.882 11:16:07 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.882 11:16:07 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.882 11:16:07 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:48.882 11:16:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:48.882 11:16:07 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:48.882 11:16:07 -- setup/devices.sh@50 -- # local mount_point= 00:05:48.882 11:16:07 -- setup/devices.sh@51 -- # local test_file= 00:05:48.882 11:16:07 -- setup/devices.sh@53 -- # local found=0 00:05:48.882 11:16:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:48.882 11:16:07 -- setup/devices.sh@59 -- # local pci status 00:05:48.882 11:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.882 11:16:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:48.882 11:16:07 -- setup/devices.sh@47 -- # setup output config 00:05:48.882 11:16:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.882 11:16:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.140 11:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.140 11:16:07 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:49.140 11:16:07 -- setup/devices.sh@63 -- # found=1 00:05:49.140 11:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.140 11:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.140 11:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.399 11:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.399 11:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.968 11:16:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.968 11:16:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.968 11:16:08 -- setup/devices.sh@68 -- # return 0 00:05:49.968 11:16:08 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:49.968 11:16:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.968 11:16:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.968 11:16:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.968 11:16:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.968 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.968 00:05:49.968 real 0m5.028s 00:05:49.968 user 0m0.472s 00:05:49.968 sys 0m2.313s 00:05:49.968 11:16:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.968 11:16:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.968 ************************************ 00:05:49.968 END TEST nvme_mount 00:05:49.968 ************************************ 00:05:49.968 11:16:08 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:49.968 11:16:08 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:05:49.968 11:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.968 11:16:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.968 ************************************ 00:05:49.968 START TEST dm_mount 00:05:49.968 ************************************ 00:05:49.968 11:16:08 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:49.968 11:16:08 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:49.968 11:16:08 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:49.968 11:16:08 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:49.968 11:16:08 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:49.968 11:16:08 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:49.968 11:16:08 -- setup/common.sh@40 -- # local part_no=2 00:05:49.968 11:16:08 -- setup/common.sh@41 -- # local size=1073741824 00:05:49.968 11:16:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:49.968 11:16:08 -- setup/common.sh@44 -- # parts=() 00:05:49.968 11:16:08 -- setup/common.sh@44 -- # local parts 00:05:49.968 11:16:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:49.968 11:16:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.968 11:16:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.968 11:16:08 -- setup/common.sh@46 -- # (( part++ )) 00:05:49.968 11:16:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.968 11:16:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.968 11:16:08 -- setup/common.sh@46 -- # (( part++ )) 00:05:49.968 11:16:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.968 11:16:08 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:49.968 11:16:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:49.968 11:16:08 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:50.904 Creating new GPT entries in memory. 00:05:50.904 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:50.904 other utilities. 00:05:50.904 11:16:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:50.904 11:16:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.904 11:16:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:50.904 11:16:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.904 11:16:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:52.284 Creating new GPT entries in memory. 00:05:52.284 The operation has completed successfully. 00:05:52.284 11:16:10 -- setup/common.sh@57 -- # (( part++ )) 00:05:52.284 11:16:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.284 11:16:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:52.284 11:16:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:52.284 11:16:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:53.222 The operation has completed successfully. 
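The dm_mount test repeats the same partitioning with part_no=2; the traced recurrence `part_start = part_start == 0 ? 2048 : part_end + 1` produces the two back-to-back ranges seen in the log, 1:2048:264191 and 2:264192:526335. A sketch of that loop:

```bash
# Two equal 262144-sector partitions, reproducing the traced geometry.
disk=/dev/nvme0n1
size=262144 part_start=0 part_end=0
sgdisk "$disk" --zap-all
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
```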
00:05:53.222 11:16:11 -- setup/common.sh@57 -- # (( part++ )) 00:05:53.222 11:16:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:53.222 11:16:11 -- setup/common.sh@62 -- # wait 67553 00:05:53.222 11:16:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:53.222 11:16:11 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.222 11:16:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.222 11:16:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:53.222 11:16:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:53.222 11:16:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:53.222 11:16:11 -- setup/devices.sh@161 -- # break 00:05:53.222 11:16:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:53.222 11:16:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:53.222 11:16:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:53.222 11:16:11 -- setup/devices.sh@166 -- # dm=dm-0 00:05:53.222 11:16:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:53.222 11:16:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:53.222 11:16:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.222 11:16:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:53.222 11:16:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.222 11:16:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:53.222 11:16:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:53.222 11:16:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.222 11:16:11 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.222 11:16:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:53.222 11:16:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:53.222 11:16:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.222 11:16:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.222 11:16:11 -- setup/devices.sh@53 -- # local found=0 00:05:53.222 11:16:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:53.222 11:16:11 -- setup/devices.sh@56 -- # : 00:05:53.222 11:16:11 -- setup/devices.sh@59 -- # local pci status 00:05:53.222 11:16:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.222 11:16:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:53.222 11:16:11 -- setup/devices.sh@47 -- # setup output config 00:05:53.222 11:16:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.222 11:16:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.482 11:16:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.482 11:16:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:53.482 11:16:11 -- setup/devices.sh@63 -- # found=1 00:05:53.482 11:16:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.482 11:16:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.482 11:16:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.482 11:16:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.482 11:16:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.051 11:16:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.051 11:16:12 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:54.051 11:16:12 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.051 11:16:12 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:54.051 11:16:12 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.051 11:16:12 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.051 11:16:12 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:54.051 11:16:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:54.051 11:16:12 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:54.051 11:16:12 -- setup/devices.sh@50 -- # local mount_point= 00:05:54.051 11:16:12 -- setup/devices.sh@51 -- # local test_file= 00:05:54.051 11:16:12 -- setup/devices.sh@53 -- # local found=0 00:05:54.051 11:16:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.051 11:16:12 -- setup/devices.sh@59 -- # local pci status 00:05:54.051 11:16:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.051 11:16:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:54.051 11:16:12 -- setup/devices.sh@47 -- # setup output config 00:05:54.051 11:16:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.051 11:16:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.310 11:16:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.310 11:16:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:54.310 11:16:12 -- setup/devices.sh@63 -- # found=1 00:05:54.310 11:16:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.310 11:16:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.310 11:16:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.310 11:16:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.310 11:16:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.878 11:16:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.879 11:16:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.879 11:16:13 -- setup/devices.sh@68 -- # return 0 00:05:54.879 11:16:13 -- setup/devices.sh@187 -- # cleanup_dm 00:05:54.879 11:16:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.879 11:16:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.879 11:16:13 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:54.879 11:16:13 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.879 11:16:13 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:54.879 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:54.879 11:16:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.879 11:16:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:55.137 00:05:55.137 real 0m5.018s 00:05:55.137 user 0m0.332s 00:05:55.137 sys 0m1.646s 00:05:55.137 11:16:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.137 ************************************ 00:05:55.137 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.137 END TEST dm_mount 00:05:55.137 ************************************ 00:05:55.137 11:16:13 -- setup/devices.sh@1 -- # cleanup 00:05:55.137 11:16:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:55.137 11:16:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.137 11:16:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.137 11:16:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:55.137 11:16:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.137 11:16:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:55.396 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:55.396 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:55.396 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:55.396 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:55.396 11:16:13 -- setup/devices.sh@12 -- # cleanup_dm 00:05:55.396 11:16:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.396 11:16:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.396 11:16:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.396 11:16:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.396 11:16:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.396 11:16:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:55.396 00:05:55.396 real 0m11.148s 00:05:55.396 user 0m1.194s 00:05:55.396 sys 0m4.442s 00:05:55.396 11:16:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.396 ************************************ 00:05:55.396 END TEST devices 00:05:55.396 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.396 ************************************ 00:05:55.396 ************************************ 00:05:55.396 END TEST setup.sh 00:05:55.396 ************************************ 00:05:55.396 00:05:55.396 real 0m23.010s 00:05:55.396 user 0m5.053s 00:05:55.396 sys 0m12.908s 00:05:55.396 11:16:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.396 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.396 11:16:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:55.655 Hugepages 00:05:55.655 node hugesize free / total 00:05:55.655 node0 1048576kB 0 / 0 00:05:55.655 node0 2048kB 2048 / 2048 00:05:55.655 00:05:55.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.655 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:55.655 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:55.655 11:16:13 -- spdk/autotest.sh@128 -- # uname -s 00:05:55.655 11:16:13 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:55.655 11:16:13 -- spdk/autotest.sh@130 -- # 
nvme_namespace_revert 00:05:55.655 11:16:13 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:56.224 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.805 11:16:14 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:57.754 11:16:15 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:57.754 11:16:15 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:57.754 11:16:15 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:57.754 11:16:15 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:57.754 11:16:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:57.754 11:16:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:57.754 11:16:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:57.754 11:16:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:57.754 11:16:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:57.754 11:16:15 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:57.754 11:16:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:05:57.754 11:16:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:58.323 Waiting for block devices as requested 00:05:58.323 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.323 11:16:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:58.323 11:16:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:58.323 11:16:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:58.323 11:16:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:58.323 11:16:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:58.323 11:16:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:58.323 11:16:16 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:58.323 11:16:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:58.323 11:16:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:58.323 11:16:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:58.323 11:16:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:58.323 11:16:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:58.323 11:16:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:58.323 11:16:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:58.323 11:16:16 -- common/autotest_common.sh@1552 -- # 
continue 00:05:58.323 11:16:16 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:58.323 11:16:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.323 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.323 11:16:16 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:58.323 11:16:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.323 11:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.323 11:16:16 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:58.892 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.462 11:16:17 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:59.462 11:16:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.462 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.462 11:16:17 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:59.462 11:16:17 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:59.462 11:16:17 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:59.462 11:16:17 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:59.462 11:16:17 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:59.462 11:16:17 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:59.462 11:16:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:59.462 11:16:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:59.462 11:16:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.462 11:16:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.462 11:16:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:59.462 11:16:17 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:59.462 11:16:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:05:59.462 11:16:17 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:59.462 11:16:17 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:59.462 11:16:17 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:59.462 11:16:17 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.462 11:16:17 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:59.462 11:16:17 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:59.462 11:16:17 -- common/autotest_common.sh@1588 -- # return 0 00:05:59.462 11:16:17 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:05:59.462 11:16:17 -- spdk/autotest.sh@149 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:59.462 11:16:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.462 11:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.462 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.462 ************************************ 00:05:59.462 START TEST unittest 00:05:59.462 ************************************ 00:05:59.462 11:16:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:59.462 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:59.462 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:59.462 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:59.462 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 
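opal_revert_cleanup's discovery step, as traced above: enumerate NVMe bdfs from gen_nvme.sh's JSON config, then keep only controllers whose PCI device id is 0x0a54. The QEMU drive here reports 0x0010, so the list stays empty and the function returns early. A standalone sketch of those two trace lines:

```bash
rootdir=/home/vagrant/spdk_repo/spdk
# bdf enumeration exactly as traced (gen_nvme.sh emits a JSON config).
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && echo "$bdf"   # 0x0010 here, so no match
done
```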
00:05:59.462 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:59.462 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:59.462 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:59.462 ++ rpc_py=rpc_cmd 00:05:59.462 ++ set -e 00:05:59.462 ++ shopt -s nullglob 00:05:59.462 ++ shopt -s extglob 00:05:59.462 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:59.462 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:59.462 +++ CONFIG_WPDK_DIR= 00:05:59.462 +++ CONFIG_ASAN=y 00:05:59.462 +++ CONFIG_VBDEV_COMPRESS=n 00:05:59.462 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:59.462 +++ CONFIG_USDT=n 00:05:59.462 +++ CONFIG_CUSTOMOCF=n 00:05:59.462 +++ CONFIG_PREFIX=/usr/local 00:05:59.462 +++ CONFIG_RBD=n 00:05:59.462 +++ CONFIG_LIBDIR= 00:05:59.462 +++ CONFIG_IDXD=y 00:05:59.462 +++ CONFIG_NVME_CUSE=y 00:05:59.462 +++ CONFIG_SMA=n 00:05:59.462 +++ CONFIG_VTUNE=n 00:05:59.462 +++ CONFIG_TSAN=n 00:05:59.462 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:59.462 +++ CONFIG_VFIO_USER_DIR= 00:05:59.462 +++ CONFIG_PGO_CAPTURE=n 00:05:59.462 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:59.462 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:59.462 +++ CONFIG_LTO=n 00:05:59.462 +++ CONFIG_ISCSI_INITIATOR=y 00:05:59.462 +++ CONFIG_CET=n 00:05:59.462 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:59.462 +++ CONFIG_OCF_PATH= 00:05:59.462 +++ CONFIG_RDMA_SET_TOS=y 00:05:59.462 +++ CONFIG_HAVE_ARC4RANDOM=y 00:05:59.462 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:59.462 +++ CONFIG_UBLK=y 00:05:59.462 +++ CONFIG_ISAL_CRYPTO=y 00:05:59.462 +++ CONFIG_OPENSSL_PATH= 00:05:59.462 +++ CONFIG_OCF=n 00:05:59.462 +++ CONFIG_FUSE=n 00:05:59.462 +++ CONFIG_VTUNE_DIR= 00:05:59.462 +++ CONFIG_FUZZER_LIB= 00:05:59.462 +++ CONFIG_FUZZER=n 00:05:59.462 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:05:59.462 +++ CONFIG_CRYPTO=n 00:05:59.462 +++ CONFIG_PGO_USE=n 00:05:59.462 +++ CONFIG_VHOST=y 00:05:59.462 +++ CONFIG_DAOS=n 00:05:59.462 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:05:59.462 +++ CONFIG_DAOS_DIR= 00:05:59.462 +++ CONFIG_UNIT_TESTS=y 00:05:59.462 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:59.462 +++ CONFIG_VIRTIO=y 00:05:59.462 +++ CONFIG_COVERAGE=y 00:05:59.462 +++ CONFIG_RDMA=y 00:05:59.462 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:59.462 +++ CONFIG_URING_PATH= 00:05:59.462 +++ CONFIG_XNVME=n 00:05:59.462 +++ CONFIG_VFIO_USER=n 00:05:59.462 +++ CONFIG_ARCH=native 00:05:59.462 +++ CONFIG_URING_ZNS=n 00:05:59.462 +++ CONFIG_WERROR=y 00:05:59.462 +++ CONFIG_HAVE_LIBBSD=n 00:05:59.462 +++ CONFIG_UBSAN=y 00:05:59.462 +++ CONFIG_IPSEC_MB_DIR= 00:05:59.462 +++ CONFIG_GOLANG=n 00:05:59.462 +++ CONFIG_ISAL=y 00:05:59.462 +++ CONFIG_IDXD_KERNEL=y 00:05:59.462 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:59.462 +++ CONFIG_RDMA_PROV=verbs 00:05:59.462 +++ CONFIG_APPS=y 00:05:59.462 +++ CONFIG_SHARED=n 00:05:59.462 +++ CONFIG_FC_PATH= 00:05:59.462 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:59.462 +++ CONFIG_FC=n 00:05:59.462 +++ CONFIG_AVAHI=n 00:05:59.462 +++ CONFIG_FIO_PLUGIN=y 00:05:59.462 +++ CONFIG_RAID5F=y 00:05:59.462 +++ CONFIG_EXAMPLES=y 00:05:59.462 +++ CONFIG_TESTS=y 00:05:59.462 +++ CONFIG_CRYPTO_MLX5=n 00:05:59.462 +++ CONFIG_MAX_LCORES= 00:05:59.462 +++ CONFIG_IPSEC_MB=n 00:05:59.462 +++ CONFIG_DEBUG=y 00:05:59.462 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:59.462 +++ CONFIG_CROSS_PREFIX= 00:05:59.462 +++ CONFIG_URING=n 00:05:59.462 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
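The unittest.sh preamble above derives its paths from the script's own location before sourcing the shared harness (which in turn sources build_config.sh for the CONFIG_* flags dumped above). The three traced lines, as a sketch:

```bash
# Path bootstrap exactly as traced at the top of unittest.sh.
testdir=$(readlink -f "$(dirname "$0")")
rootdir=$(readlink -f "$testdir/../..")
source "$rootdir/test/common/autotest_common.sh"
```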
00:05:59.462 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:59.462 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:59.462 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:59.462 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:59.462 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:59.462 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:59.462 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:59.462 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:59.462 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:59.462 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:59.462 +++ VHOST_APP=("$_app_dir/vhost") 00:05:59.463 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:59.463 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:59.463 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:59.463 +++ [[ #ifndef SPDK_CONFIG_H 00:05:59.463 #define SPDK_CONFIG_H 00:05:59.463 #define SPDK_CONFIG_APPS 1 00:05:59.463 #define SPDK_CONFIG_ARCH native 00:05:59.463 #define SPDK_CONFIG_ASAN 1 00:05:59.463 #undef SPDK_CONFIG_AVAHI 00:05:59.463 #undef SPDK_CONFIG_CET 00:05:59.463 #define SPDK_CONFIG_COVERAGE 1 00:05:59.463 #define SPDK_CONFIG_CROSS_PREFIX 00:05:59.463 #undef SPDK_CONFIG_CRYPTO 00:05:59.463 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:59.463 #undef SPDK_CONFIG_CUSTOMOCF 00:05:59.463 #undef SPDK_CONFIG_DAOS 00:05:59.463 #define SPDK_CONFIG_DAOS_DIR 00:05:59.463 #define SPDK_CONFIG_DEBUG 1 00:05:59.463 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:59.463 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:05:59.463 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:05:59.463 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:05:59.463 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:59.463 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:59.463 #define SPDK_CONFIG_EXAMPLES 1 00:05:59.463 #undef SPDK_CONFIG_FC 00:05:59.463 #define SPDK_CONFIG_FC_PATH 00:05:59.463 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:59.463 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:59.463 #undef SPDK_CONFIG_FUSE 00:05:59.463 #undef SPDK_CONFIG_FUZZER 00:05:59.463 #define SPDK_CONFIG_FUZZER_LIB 00:05:59.463 #undef SPDK_CONFIG_GOLANG 00:05:59.463 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:59.463 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:59.463 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:59.463 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:59.463 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:59.463 #define SPDK_CONFIG_IDXD 1 00:05:59.463 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:59.463 #undef SPDK_CONFIG_IPSEC_MB 00:05:59.463 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:59.463 #define SPDK_CONFIG_ISAL 1 00:05:59.463 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:59.463 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:59.463 #define SPDK_CONFIG_LIBDIR 00:05:59.463 #undef SPDK_CONFIG_LTO 00:05:59.463 #define SPDK_CONFIG_MAX_LCORES 00:05:59.463 #define SPDK_CONFIG_NVME_CUSE 1 00:05:59.463 #undef SPDK_CONFIG_OCF 00:05:59.463 #define SPDK_CONFIG_OCF_PATH 00:05:59.463 #define SPDK_CONFIG_OPENSSL_PATH 00:05:59.463 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:59.463 #undef SPDK_CONFIG_PGO_USE 00:05:59.463 #define SPDK_CONFIG_PREFIX /usr/local 00:05:59.463 #define SPDK_CONFIG_RAID5F 1 00:05:59.463 #undef SPDK_CONFIG_RBD 00:05:59.463 #define SPDK_CONFIG_RDMA 1 00:05:59.463 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:59.463 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:59.463 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:59.463 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:59.463 #undef SPDK_CONFIG_SHARED 00:05:59.463 #undef SPDK_CONFIG_SMA 00:05:59.463 #define SPDK_CONFIG_TESTS 1 00:05:59.463 #undef SPDK_CONFIG_TSAN 00:05:59.463 #define SPDK_CONFIG_UBLK 1 00:05:59.463 #define SPDK_CONFIG_UBSAN 1 00:05:59.463 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:59.463 #undef SPDK_CONFIG_URING 00:05:59.463 #define SPDK_CONFIG_URING_PATH 00:05:59.463 #undef SPDK_CONFIG_URING_ZNS 00:05:59.463 #undef SPDK_CONFIG_USDT 00:05:59.463 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:59.463 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:59.463 #undef SPDK_CONFIG_VFIO_USER 00:05:59.463 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:59.463 #define SPDK_CONFIG_VHOST 1 00:05:59.463 #define SPDK_CONFIG_VIRTIO 1 00:05:59.463 #undef SPDK_CONFIG_VTUNE 00:05:59.463 #define SPDK_CONFIG_VTUNE_DIR 00:05:59.463 #define SPDK_CONFIG_WERROR 1 00:05:59.463 #define SPDK_CONFIG_WPDK_DIR 00:05:59.463 #undef SPDK_CONFIG_XNVME 00:05:59.463 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:59.463 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:59.463 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.463 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:59.463 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.463 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.463 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:59.463 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:59.463 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:59.463 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:59.463 ++++ export PATH 00:05:59.463 ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:59.463 ++ source 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:59.463 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:59.463 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:59.463 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:59.463 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:59.463 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:59.463 +++ TEST_TAG=N/A 00:05:59.463 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:59.463 ++ : 1 00:05:59.463 ++ export RUN_NIGHTLY 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_RUN_VALGRIND 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_TEST_UNITTEST 00:05:59.463 ++ : 00:05:59.463 ++ export SPDK_TEST_AUTOBUILD 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_RELEASE_BUILD 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_ISAL 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_ISCSI 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_TEST_NVME 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVME_PMR 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVME_BP 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVME_CLI 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVME_CUSE 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVME_FDP 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVMF 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_VFIOUSER 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_FUZZER 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_FUZZER_SHORT 00:05:59.463 ++ : rdma 00:05:59.463 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_RBD 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_VHOST 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_TEST_BLOCKDEV 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_IOAT 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_BLOBFS 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_VHOST_INIT 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_LVOL 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_RUN_ASAN 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_RUN_UBSAN 00:05:59.463 ++ : /home/vagrant/spdk_repo/dpdk/build 00:05:59.463 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_RUN_NON_ROOT 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_CRYPTO 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_FTL 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_OCF 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_VMD 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_OPAL 00:05:59.463 ++ : v23.11 00:05:59.463 ++ export SPDK_TEST_NATIVE_DPDK 00:05:59.463 ++ : true 00:05:59.463 ++ export SPDK_AUTOTEST_X 00:05:59.463 ++ : 1 00:05:59.463 ++ export SPDK_TEST_RAID5 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_URING 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_USDT 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_USE_IGB_UIO 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_SCHEDULER 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_SCANBUILD 00:05:59.463 ++ : 00:05:59.463 ++ export SPDK_TEST_NVMF_NICS 00:05:59.463 ++ : 0 00:05:59.463 ++ export 
SPDK_TEST_SMA 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_DAOS 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_XNVME 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_ACCEL_DSA 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_ACCEL_IAA 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_ACCEL_IOAT 00:05:59.463 ++ : 00:05:59.463 ++ export SPDK_TEST_FUZZER_TARGET 00:05:59.463 ++ : 0 00:05:59.463 ++ export SPDK_TEST_NVMF_MDNS 00:05:59.464 ++ : 0 00:05:59.464 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:59.464 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:59.464 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:59.464 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:59.464 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:59.464 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:59.464 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:59.464 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:59.464 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:59.464 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:59.464 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:59.464 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:59.464 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:59.464 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:59.464 ++ PYTHONDONTWRITEBYTECODE=1 00:05:59.464 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:59.464 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:59.464 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:59.464 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:59.464 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:59.464 ++ rm -rf /var/tmp/asan_suppression_file 00:05:59.464 ++ cat 00:05:59.464 ++ echo leak:libfuse3.so 00:05:59.464 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:59.464 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:59.464 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:59.464 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:59.464 ++ '[' -z /var/spdk/dependencies ']' 00:05:59.464 ++ export DEPENDENCY_DIR 00:05:59.464 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:59.464 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:59.464 ++ export 
SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:59.464 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:59.464 ++ export QEMU_BIN= 00:05:59.464 ++ QEMU_BIN= 00:05:59.464 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:59.464 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:59.464 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:59.464 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:59.464 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:59.464 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:59.464 ++ _LCOV_MAIN=0 00:05:59.464 ++ _LCOV_LLVM=1 00:05:59.464 ++ _LCOV= 00:05:59.464 ++ [[ '' == *clang* ]] 00:05:59.464 ++ [[ 0 -eq 1 ]] 00:05:59.464 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:59.464 ++ _lcov_opt[_LCOV_MAIN]= 00:05:59.464 ++ lcov_opt= 00:05:59.464 ++ '[' 0 -eq 0 ']' 00:05:59.464 ++ export valgrind= 00:05:59.464 ++ valgrind= 00:05:59.464 +++ uname -s 00:05:59.464 ++ '[' Linux = Linux ']' 00:05:59.464 ++ HUGEMEM=4096 00:05:59.464 ++ export CLEAR_HUGE=yes 00:05:59.464 ++ CLEAR_HUGE=yes 00:05:59.464 ++ [[ 0 -eq 1 ]] 00:05:59.464 ++ [[ 0 -eq 1 ]] 00:05:59.464 ++ MAKE=make 00:05:59.464 +++ nproc 00:05:59.464 ++ MAKEFLAGS=-j10 00:05:59.464 ++ export HUGEMEM=4096 00:05:59.464 ++ HUGEMEM=4096 00:05:59.464 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:59.464 ++ NO_HUGE=() 00:05:59.464 ++ TEST_MODE= 00:05:59.464 ++ [[ -z '' ]] 00:05:59.464 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:59.464 ++ exec 00:05:59.464 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:59.464 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:59.464 ++ set_test_storage 2147483648 00:05:59.464 ++ [[ -v testdir ]] 00:05:59.464 ++ local requested_size=2147483648 00:05:59.464 ++ local mount target_dir 00:05:59.464 ++ local -A mounts fss sizes avails uses 00:05:59.464 ++ local source fs size avail mount use 00:05:59.464 ++ local storage_fallback storage_candidates 00:05:59.464 +++ mktemp -udt spdk.XXXXXX 00:05:59.464 ++ storage_fallback=/tmp/spdk.CgVU11 00:05:59.464 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:59.464 ++ [[ -n '' ]] 00:05:59.464 ++ [[ -n '' ]] 00:05:59.464 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.CgVU11/tests/unit /tmp/spdk.CgVU11 00:05:59.464 ++ requested_size=2214592512 00:05:59.464 ++ read -r source fs size use avail _ mount 00:05:59.464 +++ df -T 00:05:59.464 +++ grep -v Filesystem 00:05:59.723 ++ mounts["$mount"]=tmpfs 00:05:59.723 ++ fss["$mount"]=tmpfs 00:05:59.723 ++ avails["$mount"]=1252958208 00:05:59.723 ++ sizes["$mount"]=1254027264 00:05:59.723 ++ uses["$mount"]=1069056 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ mounts["$mount"]=/dev/vda1 00:05:59.723 ++ fss["$mount"]=ext4 00:05:59.723 ++ avails["$mount"]=9058287616 00:05:59.723 ++ sizes["$mount"]=19681529856 00:05:59.723 ++ uses["$mount"]=10606465024 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ mounts["$mount"]=tmpfs 00:05:59.723 ++ fss["$mount"]=tmpfs 00:05:59.723 ++ avails["$mount"]=6270115840 00:05:59.723 ++ sizes["$mount"]=6270115840 00:05:59.723 ++ uses["$mount"]=0 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ 
mounts["$mount"]=tmpfs 00:05:59.723 ++ fss["$mount"]=tmpfs 00:05:59.723 ++ avails["$mount"]=5242880 00:05:59.723 ++ sizes["$mount"]=5242880 00:05:59.723 ++ uses["$mount"]=0 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ mounts["$mount"]=/dev/vda16 00:05:59.723 ++ fss["$mount"]=ext4 00:05:59.723 ++ avails["$mount"]=777306112 00:05:59.723 ++ sizes["$mount"]=923156480 00:05:59.723 ++ uses["$mount"]=81207296 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ mounts["$mount"]=/dev/vda15 00:05:59.723 ++ fss["$mount"]=vfat 00:05:59.723 ++ avails["$mount"]=103000064 00:05:59.723 ++ sizes["$mount"]=109395968 00:05:59.723 ++ uses["$mount"]=6395904 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ mounts["$mount"]=tmpfs 00:05:59.723 ++ fss["$mount"]=tmpfs 00:05:59.723 ++ avails["$mount"]=1254010880 00:05:59.723 ++ sizes["$mount"]=1254023168 00:05:59.723 ++ uses["$mount"]=12288 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:05:59.723 ++ fss["$mount"]=fuse.sshfs 00:05:59.723 ++ avails["$mount"]=97234329600 00:05:59.723 ++ sizes["$mount"]=105088212992 00:05:59.723 ++ uses["$mount"]=2468450304 00:05:59.723 ++ read -r source fs size use avail _ mount 00:05:59.723 ++ printf '* Looking for test storage...\n' 00:05:59.723 * Looking for test storage... 00:05:59.723 ++ local target_space new_size 00:05:59.723 ++ for target_dir in "${storage_candidates[@]}" 00:05:59.723 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:59.723 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:59.723 ++ mount=/ 00:05:59.723 ++ target_space=9058287616 00:05:59.723 ++ (( target_space == 0 || target_space < requested_size )) 00:05:59.723 ++ (( target_space >= requested_size )) 00:05:59.723 ++ [[ ext4 == tmpfs ]] 00:05:59.723 ++ [[ ext4 == ramfs ]] 00:05:59.723 ++ [[ / == / ]] 00:05:59.723 ++ new_size=12821057536 00:05:59.723 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:59.723 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:59.723 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:59.723 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:59.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:59.723 ++ return 0 00:05:59.723 ++ set -o errtrace 00:05:59.723 ++ shopt -s extdebug 00:05:59.723 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:59.723 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:59.723 11:16:17 -- common/autotest_common.sh@1682 -- # true 00:05:59.723 11:16:17 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:05:59.723 11:16:17 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:59.723 11:16:17 -- common/autotest_common.sh@29 -- # exec 00:05:59.723 11:16:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:59.723 11:16:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:59.723 11:16:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:59.723 11:16:17 -- common/autotest_common.sh@18 -- # set -x 00:05:59.723 11:16:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.723 11:16:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.723 11:16:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.723 11:16:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.723 11:16:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.723 11:16:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.723 11:16:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.723 11:16:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.723 11:16:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.723 11:16:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.723 11:16:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.723 11:16:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.723 11:16:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.723 11:16:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.723 11:16:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.724 11:16:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.724 11:16:17 -- scripts/common.sh@344 -- # : 1 00:05:59.724 11:16:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.724 11:16:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.724 11:16:17 -- scripts/common.sh@364 -- # decimal 1 00:05:59.724 11:16:17 -- scripts/common.sh@352 -- # local d=1 00:05:59.724 11:16:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.724 11:16:17 -- scripts/common.sh@354 -- # echo 1 00:05:59.724 11:16:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.724 11:16:17 -- scripts/common.sh@365 -- # decimal 2 00:05:59.724 11:16:17 -- scripts/common.sh@352 -- # local d=2 00:05:59.724 11:16:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.724 11:16:17 -- scripts/common.sh@354 -- # echo 2 00:05:59.724 11:16:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.724 11:16:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.724 11:16:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.724 11:16:17 -- scripts/common.sh@367 -- # return 0 00:05:59.724 11:16:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.724 11:16:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.724 --rc genhtml_branch_coverage=1 00:05:59.724 --rc genhtml_function_coverage=1 00:05:59.724 --rc genhtml_legend=1 00:05:59.724 --rc geninfo_all_blocks=1 00:05:59.724 --rc geninfo_unexecuted_blocks=1 00:05:59.724 00:05:59.724 ' 00:05:59.724 11:16:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.724 --rc genhtml_branch_coverage=1 00:05:59.724 --rc genhtml_function_coverage=1 00:05:59.724 --rc genhtml_legend=1 00:05:59.724 --rc geninfo_all_blocks=1 00:05:59.724 --rc geninfo_unexecuted_blocks=1 00:05:59.724 00:05:59.724 ' 00:05:59.724 11:16:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.724 --rc genhtml_branch_coverage=1 00:05:59.724 --rc genhtml_function_coverage=1 00:05:59.724 --rc genhtml_legend=1 00:05:59.724 --rc geninfo_all_blocks=1 00:05:59.724 --rc 
geninfo_unexecuted_blocks=1 00:05:59.724 00:05:59.724 ' 00:05:59.724 11:16:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.724 --rc genhtml_branch_coverage=1 00:05:59.724 --rc genhtml_function_coverage=1 00:05:59.724 --rc genhtml_legend=1 00:05:59.724 --rc geninfo_all_blocks=1 00:05:59.724 --rc geninfo_unexecuted_blocks=1 00:05:59.724 00:05:59.724 ' 00:05:59.724 11:16:17 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:59.724 11:16:17 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:59.724 11:16:17 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:59.724 11:16:17 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:59.724 11:16:17 -- unit/unittest.sh@174 -- # [[ y == y ]] 00:05:59.724 11:16:17 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:59.724 11:16:17 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:59.724 11:16:17 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:14.605 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:14.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:14.605 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:14.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:14.605 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:14.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:53.360 11:17:10 -- unit/unittest.sh@182 -- # uname -m 00:06:53.360 11:17:10 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']' 00:06:53.361 11:17:10 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:53.361 11:17:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.361 11:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.361 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.361 ************************************ 00:06:53.361 START TEST unittest_pci_event 00:06:53.361 ************************************ 00:06:53.361 11:17:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:53.361 00:06:53.361 00:06:53.361 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.361 http://cunit.sourceforge.net/ 00:06:53.361 00:06:53.361 00:06:53.361 Suite: pci_event 00:06:53.361 Test: test_pci_parse_event ...[2024-11-26 11:17:10.491353] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid passed 00:06:53.361 00:06:53.361 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.361 suites 1 1 n/a 0 0 00:06:53.361 tests 1 1 1 0 0 00:06:53.361 asserts 15 15 15 0 n/a 00:06:53.361 00:06:53.361 Elapsed time = 0.001 seconds 00:06:53.361 format for PCI device BDF: 0000 00:06:53.361 [2024-11-26 11:17:10.491917] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:53.361 00:06:53.361 real 0m0.036s 00:06:53.361 user 0m0.014s 00:06:53.361 sys 0m0.016s 00:06:53.361 11:17:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.361 ************************************ 00:06:53.361 END TEST unittest_pci_event 00:06:53.361 ************************************ 00:06:53.361 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.361 11:17:10 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:53.361 11:17:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.361 11:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.361 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.361 ************************************ 00:06:53.361 START TEST unittest_include 00:06:53.361 ************************************ 00:06:53.361 11:17:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:53.361 00:06:53.361 00:06:53.361 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.361 http://cunit.sourceforge.net/ 00:06:53.361 00:06:53.361 00:06:53.361 Suite: histogram 00:06:53.361 Test: histogram_test ...passed 00:06:53.361 Test: histogram_merge ...passed 00:06:53.361 00:06:53.361 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.361 suites 1 1 n/a 0 0 00:06:53.361 tests 2 2 2 0 0 00:06:53.361 asserts 50 50 50 0 n/a 00:06:53.361 00:06:53.361 Elapsed time = 0.007 seconds 00:06:53.361 00:06:53.361 real 0m0.033s 00:06:53.361 user 0m0.022s 00:06:53.361 sys 0m0.011s 00:06:53.361 ************************************ 00:06:53.361 END TEST unittest_include 00:06:53.361 ************************************ 00:06:53.361 11:17:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.361 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.361 11:17:10 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev 00:06:53.361 11:17:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.361 11:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.361 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.361 ************************************ 00:06:53.361 START TEST unittest_bdev 00:06:53.361 ************************************ 00:06:53.361 11:17:10 -- common/autotest_common.sh@1114 -- # unittest_bdev 00:06:53.361 11:17:10 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:53.361 00:06:53.361 00:06:53.361 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.361 http://cunit.sourceforge.net/ 00:06:53.361 00:06:53.361 00:06:53.361 Suite: bdev 00:06:53.361 Test: bytes_to_blocks_test ...passed 00:06:53.361 Test: num_blocks_test ...passed 00:06:53.361 Test: io_valid_test ...passed 00:06:53.361 Test: open_write_test ...[2024-11-26 11:17:10.700565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:53.361 [2024-11-26 11:17:10.700941] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:53.361 [2024-11-26 11:17:10.701063] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type 
exclusive_write by module bdev_ut 00:06:53.361 passed 00:06:53.361 Test: claim_test ...passed 00:06:53.361 Test: alias_add_del_test ...[2024-11-26 11:17:10.761843] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:53.361 [2024-11-26 11:17:10.761998] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:53.361 [2024-11-26 11:17:10.762061] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:53.361 passed 00:06:53.361 Test: get_device_stat_test ...passed 00:06:53.361 Test: bdev_io_types_test ...passed 00:06:53.361 Test: bdev_io_wait_test ...passed 00:06:53.361 Test: bdev_io_spans_split_test ...passed 00:06:53.361 Test: bdev_io_boundary_split_test ...passed 00:06:53.361 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-26 11:17:10.884166] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:53.361 passed 00:06:53.361 Test: bdev_io_mix_split_test ...passed 00:06:53.361 Test: bdev_io_split_with_io_wait ...passed 00:06:53.361 Test: bdev_io_write_unit_split_test ...[2024-11-26 11:17:10.950862] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:53.361 [2024-11-26 11:17:10.950987] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:53.361 [2024-11-26 11:17:10.951016] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:53.361 [2024-11-26 11:17:10.951074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:53.361 passed 00:06:53.361 Test: bdev_io_alignment_with_boundary ...passed 00:06:53.361 Test: bdev_io_alignment ...passed 00:06:53.361 Test: bdev_histograms ...passed 00:06:53.361 Test: bdev_write_zeroes ...passed 00:06:53.361 Test: bdev_compare_and_write ...passed 00:06:53.361 Test: bdev_compare ...passed 00:06:53.361 Test: bdev_compare_emulated ...passed 00:06:53.361 Test: bdev_zcopy_write ...passed 00:06:53.361 Test: bdev_zcopy_read ...passed 00:06:53.361 Test: bdev_open_while_hotremove ...passed 00:06:53.361 Test: bdev_close_while_hotremove ...passed 00:06:53.361 Test: bdev_open_ext_test ...passed 00:06:53.361 Test: bdev_open_ext_unregister ...[2024-11-26 11:17:11.192705] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:53.361 [2024-11-26 11:17:11.192899] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:53.361 passed 00:06:53.361 Test: bdev_set_io_timeout ...passed 00:06:53.361 Test: bdev_set_qd_sampling ...passed 00:06:53.361 Test: lba_range_overlap ...passed 00:06:53.361 Test: lock_lba_range_check_ranges ...passed 00:06:53.361 Test: lock_lba_range_with_io_outstanding ...passed 00:06:53.361 Test: lock_lba_range_overlapped ...passed 00:06:53.361 Test: bdev_quiesce ...[2024-11-26 11:17:11.294725] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
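The bdev_io_write_unit_split_test failures above ("IO num_blocks 31 does not match the write_unit_size 32", and 32 vs. 64) pin down the invariant under test: after splitting, every write handed to bdev_io_do_submit must cover a whole number of write units. A standalone restatement of just that check — a sketch of the rule the log demonstrates, not SPDK's actual submit path:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* True when a write of num_blocks is an exact multiple of the bdev's
     * write_unit_size, the condition bdev_io_do_submit reports on above. */
    static bool
    write_spans_whole_units(uint64_t num_blocks, uint32_t write_unit_size)
    {
        return write_unit_size > 0 && num_blocks % write_unit_size == 0;
    }

    int
    main(void)
    {
        assert(!write_spans_whole_units(31, 32)); /* rejected in the log */
        assert(write_spans_whole_units(32, 32));
        assert(!write_spans_whole_units(32, 64)); /* also rejected in the log */
        return 0;
    }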
00:06:53.361 passed 00:06:53.361 Test: bdev_io_abort ...passed 00:06:53.361 Test: bdev_unmap ...passed 00:06:53.361 Test: bdev_write_zeroes_split_test ...passed 00:06:53.361 Test: bdev_set_options_test ...passed 00:06:53.361 Test: bdev_get_memory_domains ...passed 00:06:53.361 Test: bdev_io_ext ...[2024-11-26 11:17:11.375377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:53.361 passed 00:06:53.361 Test: bdev_io_ext_no_opts ...passed 00:06:53.361 Test: bdev_io_ext_invalid_opts ...passed 00:06:53.361 Test: bdev_io_ext_split ...passed 00:06:53.361 Test: bdev_io_ext_bounce_buffer ...passed 00:06:53.361 Test: bdev_register_uuid_alias ...[2024-11-26 11:17:11.476582] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name d3b01723-8864-465e-8348-e1ec7a3d7076 already exists 00:06:53.361 [2024-11-26 11:17:11.476657] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:d3b01723-8864-465e-8348-e1ec7a3d7076 alias for bdev bdev0 00:06:53.361 passed 00:06:53.361 Test: bdev_unregister_by_name ...[2024-11-26 11:17:11.491528] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:53.361 [2024-11-26 11:17:11.491567] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:53.361 passed 00:06:53.361 Test: for_each_bdev_test ...passed 00:06:53.361 Test: bdev_seek_test ...passed 00:06:53.361 Test: bdev_copy ...passed 00:06:53.361 Test: bdev_copy_split_test ...passed 00:06:53.361 Test: examine_locks ...passed 00:06:53.361 Test: claim_v2_rwo ...[2024-11-26 11:17:11.550302] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.361 [2024-11-26 11:17:11.550361] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.361 [2024-11-26 11:17:11.550384] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.361 [2024-11-26 11:17:11.550397] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.361 [2024-11-26 11:17:11.550424] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.361 [2024-11-26 11:17:11.550454] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:53.362 passed 00:06:53.362 Test: claim_v2_rom ...passed 00:06:53.362 Test: claim_v2_rwm ...[2024-11-26 11:17:11.550614] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.550655] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.550671] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: 
type read_many_write_none by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.550683] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.550764] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:53.362 [2024-11-26 11:17:11.550789] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:53.362 [2024-11-26 11:17:11.550914] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:53.362 [2024-11-26 11:17:11.550963] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.550988] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551001] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551031] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551078] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:53.362 passed 00:06:53.362 Test: claim_v2_existing_writer ...passed 00:06:53.362 Test: claim_v2_existing_v1 ...passed 00:06:53.362 Test: claim_v1_existing_v2 ...passed 00:06:53.362 Test: examine_claimed ...[2024-11-26 11:17:11.551203] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:53.362 [2024-11-26 11:17:11.551240] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:53.362 [2024-11-26 11:17:11.551329] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551352] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551364] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551452] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551485] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by 
module bdev_ut 00:06:53.362 [2024-11-26 11:17:11.551511] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:53.362 passed 00:06:53.362 00:06:53.362 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.362 suites 1 1 n/a 0 0 00:06:53.362 tests 59 59 59 0 0 00:06:53.362 asserts 4599 4599 4599 0 n/a 00:06:53.362 00:06:53.362 Elapsed time = 0.892 seconds 00:06:53.362 [2024-11-26 11:17:11.551770] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:53.362 11:17:11 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:53.362 00:06:53.362 00:06:53.362 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.362 http://cunit.sourceforge.net/ 00:06:53.362 00:06:53.362 00:06:53.362 Suite: nvme 00:06:53.362 Test: test_create_ctrlr ...passed 00:06:53.621 Test: test_reset_ctrlr ...[2024-11-26 11:17:11.594342] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.621 passed 00:06:53.621 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:53.621 Test: test_failover_ctrlr ...passed 00:06:53.621 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-26 11:17:11.596853] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.621 [2024-11-26 11:17:11.597122] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.621 [2024-11-26 11:17:11.597341] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.621 passed 00:06:53.621 Test: test_pending_reset ...[2024-11-26 11:17:11.599035] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.621 passed 00:06:53.621 Test: test_attach_ctrlr ...[2024-11-26 11:17:11.599286] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.621 [2024-11-26 11:17:11.600325] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:53.621 passed 00:06:53.621 Test: test_aer_cb ...passed 00:06:53.621 Test: test_submit_nvme_cmd ...passed 00:06:53.621 Test: test_add_remove_trid ...passed 00:06:53.621 Test: test_abort ...[2024-11-26 11:17:11.603289] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:53.621 passed 00:06:53.621 Test: test_get_io_qpair ...passed 00:06:53.621 Test: test_bdev_unregister ...passed 00:06:53.621 Test: test_compare_ns ...passed 00:06:53.621 Test: test_init_ana_log_page ...passed 00:06:53.621 Test: test_get_memory_domains ...passed 00:06:53.622 Test: test_reconnect_qpair ...[2024-11-26 11:17:11.605698] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
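Every suite in this run is driven by CUnit 2.1-3, and the repeated "Resetting controller failed." records are expected output: the bdev_nvme tests deliberately push the reset path into failure and then assert on the result, so an *ERROR* line here does not mean the test failed. A generic skeleton of how such a suite is registered and executed — the test body is a placeholder, not SPDK's:

    #include <CUnit/Basic.h>

    /* Placeholder test; the real suites register dozens of cases like the
     * ones listed above (test_reset_ctrlr, test_failover_ctrlr, ...). */
    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("nvme", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests(); /* prints the "Run Summary" blocks seen throughout this log */
        CU_cleanup_registry();
        return CU_get_error();
    }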
00:06:53.622 passed 00:06:53.622 Test: test_create_bdev_ctrlr ...[2024-11-26 11:17:11.606217] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:53.622 passed 00:06:53.622 Test: test_add_multi_ns_to_bdev ...[2024-11-26 11:17:11.607431] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:53.622 passed 00:06:53.622 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:53.622 Test: test_admin_path ...passed 00:06:53.622 Test: test_reset_bdev_ctrlr ...passed 00:06:53.622 Test: test_find_io_path ...passed 00:06:53.622 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:53.622 Test: test_retry_io_for_io_path_error ...passed 00:06:53.622 Test: test_retry_io_count ...passed 00:06:53.622 Test: test_concurrent_read_ana_log_page ...passed 00:06:53.622 Test: test_retry_io_for_ana_error ...passed 00:06:53.622 Test: test_check_io_error_resiliency_params ...[2024-11-26 11:17:11.613655] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:53.622 passed 00:06:53.622 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-26 11:17:11.613708] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:53.622 [2024-11-26 11:17:11.613744] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:53.622 [2024-11-26 11:17:11.613770] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:53.622 [2024-11-26 11:17:11.613784] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:53.622 [2024-11-26 11:17:11.613800] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:53.622 [2024-11-26 11:17:11.613822] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:53.622 [2024-11-26 11:17:11.613838] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:53.622 [2024-11-26 11:17:11.613853] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:53.622 passed 00:06:53.622 Test: test_reconnect_ctrlr ...[2024-11-26 11:17:11.614520] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.614639] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
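test_check_io_error_resiliency_params above spells out, one *ERROR* at a time, how the three reconnect knobs constrain each other. Reconstructed from those messages alone (not copied from bdev_nvme.c), the validation amounts to the following sketch; -1 is the "retry forever" sentinel for ctrlr_loss_timeout_sec:

    #include <stdbool.h>
    #include <stdint.h>

    /* Rules as stated by the log:
     *  - ctrlr_loss_timeout_sec can't be less than -1
     *  - if it is 0, reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
     *  - otherwise reconnect_delay_sec can't be 0
     *  - reconnect_delay_sec can't exceed ctrlr_loss_timeout_sec
     *  - fast_io_fail_timeout_sec can't exceed ctrlr_loss_timeout_sec
     *  - reconnect_delay_sec can't exceed fast_io_fail_timeout_sec (when set) */
    static bool
    io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                     uint32_t reconnect_delay_sec,
                                     uint32_t fast_io_fail_timeout_sec)
    {
        if (ctrlr_loss_timeout_sec < -1) {
            return false;
        }
        if (ctrlr_loss_timeout_sec == 0) {
            return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
            return false;
        }
        if (ctrlr_loss_timeout_sec > 0 &&
            reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
            return false;
        }
        if (fast_io_fail_timeout_sec != 0) {
            if (ctrlr_loss_timeout_sec > 0 &&
                fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false;
            }
            if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                return false;
            }
        }
        return true;
    }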
00:06:53.622 [2024-11-26 11:17:11.614910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.615015] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.615106] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 passed 00:06:53.622 Test: test_retry_failover_ctrlr ...[2024-11-26 11:17:11.615415] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 passed 00:06:53.622 Test: test_fail_path ...[2024-11-26 11:17:11.615938] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.616066] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.616168] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 passed 00:06:53.622 Test: test_nvme_ns_cmp ...passed 00:06:53.622 Test: test_ana_transition ...[2024-11-26 11:17:11.616242] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.616325] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 passed 00:06:53.622 Test: test_set_preferred_path ...passed 00:06:53.622 Test: test_find_next_io_path ...passed 00:06:53.622 Test: test_find_io_path_min_qd ...passed 00:06:53.622 Test: test_disable_auto_failback ...[2024-11-26 11:17:11.617816] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 passed 00:06:53.622 Test: test_set_multipath_policy ...passed 00:06:53.622 Test: test_uuid_generation ...passed 00:06:53.622 Test: test_retry_io_to_same_path ...passed 00:06:53.622 Test: test_race_between_reset_and_disconnected ...passed 00:06:53.622 Test: test_ctrlr_op_rpc ...passed 00:06:53.622 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:53.622 Test: test_disable_enable_ctrlr ...[2024-11-26 11:17:11.621291] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:53.622 [2024-11-26 11:17:11.621458] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
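test_ana_transition, test_set_preferred_path and test_find_next_io_path in this stretch all exercise ANA-aware multipath selection; the standard NVMe semantics are that I/O goes to an optimized path when one is live, falls back to a non-optimized path, and never uses an inaccessible one. A simplified sketch of that policy with stand-in types — not SPDK's structures or its actual selector:

    #include <stddef.h>

    enum ana_state {
        ANA_OPTIMIZED,
        ANA_NON_OPTIMIZED,
        ANA_INACCESSIBLE
    };

    struct io_path {
        enum ana_state ana;
        int online;
    };

    /* Return an optimized live path if any exists, else the first
     * non-optimized live path, else NULL (queue or fail the I/O). */
    static struct io_path *
    pick_io_path(struct io_path *paths, size_t n)
    {
        struct io_path *fallback = NULL;

        for (size_t i = 0; i < n; i++) {
            if (!paths[i].online || paths[i].ana == ANA_INACCESSIBLE) {
                continue;
            }
            if (paths[i].ana == ANA_OPTIMIZED) {
                return &paths[i];
            }
            if (fallback == NULL) {
                fallback = &paths[i];
            }
        }
        return fallback;
    }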
00:06:53.622 passed 00:06:53.622 Test: test_delete_ctrlr_done ...passed 00:06:53.622 Test: test_ns_remove_during_reset ...passed 00:06:53.622 00:06:53.622 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.622 suites 1 1 n/a 0 0 00:06:53.622 tests 48 48 48 0 0 00:06:53.622 asserts 3553 3553 3553 0 n/a 00:06:53.622 00:06:53.622 Elapsed time = 0.029 seconds 00:06:53.622 11:17:11 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:53.622 Test Options 00:06:53.622 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:53.622 00:06:53.622 00:06:53.622 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.622 http://cunit.sourceforge.net/ 00:06:53.622 00:06:53.622 00:06:53.622 Suite: raid 00:06:53.622 Test: test_create_raid ...passed 00:06:53.622 Test: test_create_raid_superblock ...passed 00:06:53.622 Test: test_delete_raid ...passed 00:06:53.622 Test: test_create_raid_invalid_args ...[2024-11-26 11:17:11.655698] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:53.622 [2024-11-26 11:17:11.656003] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:53.622 [2024-11-26 11:17:11.656435] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:53.622 [2024-11-26 11:17:11.656553] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:53.622 [2024-11-26 11:17:11.657124] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:53.622 passed 00:06:53.622 Test: test_delete_raid_invalid_args ...passed 00:06:53.622 Test: test_io_channel ...passed 00:06:53.622 Test: test_reset_io ...passed 00:06:53.622 Test: test_write_io ...passed 00:06:53.622 Test: test_read_io ...passed 00:06:54.191 Test: test_unmap_io ...passed 00:06:54.191 Test: test_io_failure ...[2024-11-26 11:17:12.147558] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:54.191 passed 00:06:54.191 Test: test_multi_raid_no_io ...passed 00:06:54.191 Test: test_multi_raid_with_io ...passed 00:06:54.191 Test: test_io_type_supported ...passed 00:06:54.191 Test: test_raid_json_dump_info ...passed 00:06:54.191 Test: test_context_size ...passed 00:06:54.191 Test: test_raid_level_conversions ...passed 00:06:54.191 Test: test_raid_process ...passed 00:06:54.191 Test: test_raid_io_split ...passed 00:06:54.191 00:06:54.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.191 suites 1 1 n/a 0 0 00:06:54.191 tests 19 19 19 0 0 00:06:54.191 asserts 177879 177879 177879 0 n/a 00:06:54.191 00:06:54.191 Elapsed time = 0.501 seconds 00:06:54.191 11:17:12 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:54.191 00:06:54.191 00:06:54.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.191 http://cunit.sourceforge.net/ 00:06:54.191 00:06:54.191 00:06:54.191 Suite: raid_sb 00:06:54.191 Test: test_raid_bdev_write_superblock ...passed 00:06:54.191 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:54.191 Test: 
test_raid_bdev_parse_superblock ...passed 00:06:54.191 00:06:54.191 [2024-11-26 11:17:12.197263] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:54.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.191 suites 1 1 n/a 0 0 00:06:54.191 tests 3 3 3 0 0 00:06:54.191 asserts 32 32 32 0 n/a 00:06:54.191 00:06:54.191 Elapsed time = 0.001 seconds 00:06:54.191 11:17:12 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:54.191 00:06:54.191 00:06:54.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.191 http://cunit.sourceforge.net/ 00:06:54.191 00:06:54.191 00:06:54.191 Suite: concat 00:06:54.191 Test: test_concat_start ...passed 00:06:54.191 Test: test_concat_rw ...passed 00:06:54.191 Test: test_concat_null_payload ...passed 00:06:54.191 00:06:54.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.191 suites 1 1 n/a 0 0 00:06:54.191 tests 3 3 3 0 0 00:06:54.191 asserts 8097 8097 8097 0 n/a 00:06:54.191 00:06:54.191 Elapsed time = 0.008 seconds 00:06:54.191 11:17:12 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:54.191 00:06:54.191 00:06:54.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.191 http://cunit.sourceforge.net/ 00:06:54.191 00:06:54.191 00:06:54.191 Suite: raid1 00:06:54.191 Test: test_raid1_start ...passed 00:06:54.191 Test: test_raid1_read_balancing ...passed 00:06:54.191 00:06:54.191 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.191 suites 1 1 n/a 0 0 00:06:54.191 tests 2 2 2 0 0 00:06:54.191 asserts 2856 2856 2856 0 n/a 00:06:54.191 00:06:54.191 Elapsed time = 0.005 seconds 00:06:54.191 11:17:12 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:54.191 00:06:54.191 00:06:54.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.191 http://cunit.sourceforge.net/ 00:06:54.191 00:06:54.191 00:06:54.191 Suite: zone 00:06:54.191 Test: test_zone_get_operation ...passed 00:06:54.191 Test: test_bdev_zone_get_info ...passed 00:06:54.191 Test: test_bdev_zone_management ...passed 00:06:54.191 Test: test_bdev_zone_append ...passed 00:06:54.191 Test: test_bdev_zone_append_with_md ...passed 00:06:54.191 Test: test_bdev_zone_appendv ...passed 00:06:54.191 Test: test_bdev_zone_appendv_with_md ...passed 00:06:54.192 Test: test_bdev_io_get_append_location ...passed 00:06:54.192 00:06:54.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.192 suites 1 1 n/a 0 0 00:06:54.192 tests 8 8 8 0 0 00:06:54.192 asserts 94 94 94 0 n/a 00:06:54.192 00:06:54.192 Elapsed time = 0.001 seconds 00:06:54.192 11:17:12 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:54.192 00:06:54.192 00:06:54.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.192 http://cunit.sourceforge.net/ 00:06:54.192 00:06:54.192 00:06:54.192 Suite: gpt_parse 00:06:54.192 Test: test_parse_mbr_and_primary ...[2024-11-26 11:17:12.337023] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.192 [2024-11-26 11:17:12.337200] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.192 [2024-11-26 11:17:12.337347] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:54.192 [2024-11-26 11:17:12.337371] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:54.192 [2024-11-26 11:17:12.337408] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:54.192 [2024-11-26 11:17:12.337430] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:54.192 passed 00:06:54.192 Test: test_parse_secondary ...[2024-11-26 11:17:12.338086] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:54.192 [2024-11-26 11:17:12.338130] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:54.192 [2024-11-26 11:17:12.338157] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:54.192 [2024-11-26 11:17:12.338178] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:54.192 passed 00:06:54.192 Test: test_check_mbr ...passed 00:06:54.192 Test: test_read_header ...[2024-11-26 11:17:12.338815] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.192 [2024-11-26 11:17:12.338850] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:54.192 [2024-11-26 11:17:12.339003] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:54.192 [2024-11-26 11:17:12.339039] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:54.192 [2024-11-26 11:17:12.339073] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:54.192 passed 00:06:54.192 Test: test_read_partitions ...[2024-11-26 11:17:12.339102] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:54.192 [2024-11-26 11:17:12.339146] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:54.192 [2024-11-26 11:17:12.339163] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:54.192 [2024-11-26 11:17:12.339294] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:54.192 [2024-11-26 11:17:12.339316] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:54.192 [2024-11-26 11:17:12.339341] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:54.192 [2024-11-26 11:17:12.339361] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:54.192 [2024-11-26 11:17:12.339615] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:54.192 passed 00:06:54.192 00:06:54.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.192 suites 1 1 n/a 0 0 00:06:54.192 tests 5 5 5 0 0 00:06:54.192 asserts 33 33 33 0 n/a 00:06:54.192 00:06:54.192 Elapsed time = 0.003 seconds 00:06:54.192 11:17:12 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:54.192 00:06:54.192 00:06:54.192 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.192 http://cunit.sourceforge.net/ 00:06:54.192 00:06:54.192 00:06:54.192 Suite: bdev_part 00:06:54.192 Test: part_test ...[2024-11-26 11:17:12.371182] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:54.192 passed 00:06:54.192 Test: part_free_test ...passed 00:06:54.192 Test: part_get_io_channel_test ...passed 00:06:54.192 Test: part_construct_ext ...passed 00:06:54.192 00:06:54.192 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.192 suites 1 1 n/a 0 0 00:06:54.192 tests 4 4 4 0 0 00:06:54.192 asserts 48 48 48 0 n/a 00:06:54.192 00:06:54.192 Elapsed time = 0.038 seconds 00:06:54.452 11:17:12 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:54.452 00:06:54.452 00:06:54.452 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.452 http://cunit.sourceforge.net/ 00:06:54.452 00:06:54.452 00:06:54.452 Suite: scsi_nvme_suite 00:06:54.452 Test: scsi_nvme_translate_test ...passed 00:06:54.452 00:06:54.452 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.452 suites 1 1 n/a 0 0 00:06:54.452 tests 1 1 1 0 0 00:06:54.452 asserts 104 104 104 0 n/a 00:06:54.452 00:06:54.452 Elapsed time = 0.000 seconds 00:06:54.452 11:17:12 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:54.452 00:06:54.452 00:06:54.452 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.452 http://cunit.sourceforge.net/ 00:06:54.452 00:06:54.452 00:06:54.452 Suite: lvol 00:06:54.452 Test: ut_lvs_init ...[2024-11-26 11:17:12.474491] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:54.452 [2024-11-26 11:17:12.474844] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:54.452 passed 00:06:54.452 Test: ut_lvol_init ...passed 00:06:54.452 Test: ut_lvol_snapshot ...passed 00:06:54.452 Test: ut_lvol_clone ...passed 00:06:54.452 Test: ut_lvs_destroy ...passed 00:06:54.452 Test: ut_lvs_unload ...passed 00:06:54.452 Test: ut_lvol_resize ...passed 00:06:54.452 Test: ut_lvol_set_read_only ...passed 00:06:54.453 Test: ut_lvol_hotremove ...passed 00:06:54.453 Test: ut_vbdev_lvol_get_io_channel ...[2024-11-26 11:17:12.476363] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:54.453 passed 00:06:54.453 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:54.453 Test: ut_lvol_read_write ...passed 00:06:54.453 Test: ut_vbdev_lvol_submit_request ...passed 00:06:54.453 Test: ut_lvol_examine_config ...passed 00:06:54.453 Test: ut_lvol_examine_disk ...[2024-11-26 11:17:12.477025] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:54.453 passed 00:06:54.453 Test: ut_lvol_rename ...[2024-11-26 11:17:12.477968] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:54.453 [2024-11-26 11:17:12.478022] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:54.453 passed 00:06:54.453 Test: ut_bdev_finish ...passed 00:06:54.453 Test: ut_lvs_rename ...passed 00:06:54.453 Test: ut_lvol_seek ...passed 00:06:54.453 Test: ut_esnap_dev_create ...passed 00:06:54.453 Test: ut_lvol_esnap_clone_bad_args ...[2024-11-26 11:17:12.478574] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:54.453 [2024-11-26 11:17:12.478654] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:54.453 [2024-11-26 11:17:12.478689] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:54.453 [2024-11-26 11:17:12.478721] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:54.453 [2024-11-26 11:17:12.478858] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:54.453 [2024-11-26 11:17:12.478907] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:54.453 passed 00:06:54.453 00:06:54.453 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.453 suites 1 1 n/a 0 0 00:06:54.453 tests 21 21 21 0 0 00:06:54.453 asserts 712 712 712 0 n/a 00:06:54.453 00:06:54.453 Elapsed time = 0.005 seconds 00:06:54.453 11:17:12 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:54.453 00:06:54.453 00:06:54.453 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.453 http://cunit.sourceforge.net/ 00:06:54.453 00:06:54.453 00:06:54.453 Suite: zone_block 00:06:54.453 Test: test_zone_block_create ...passed 00:06:54.453 Test: test_zone_block_create_invalid ...[2024-11-26 11:17:12.547558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:54.453 [2024-11-26 11:17:12.547770] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-26 11:17:12.547933] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:54.453 [2024-11-26 11:17:12.547990] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-26 11:17:12.548146] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:54.453 [2024-11-26 11:17:12.548173] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-11-26 11:17:12.548279] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:54.453 [2024-11-26 11:17:12.548306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:54.453 Test: test_get_zone_info ...[2024-11-26 11:17:12.549023] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.549092] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.549164] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 Test: test_supported_io_types ...passed 00:06:54.453 Test: test_reset_zone ...[2024-11-26 11:17:12.550074] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 Test: test_open_zone ...[2024-11-26 11:17:12.550129] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.550587] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 Test: test_zone_write ...[2024-11-26 11:17:12.551371] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.551444] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.551954] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:54.453 [2024-11-26 11:17:12.552003] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.552063] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:54.453 [2024-11-26 11:17:12.552093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.558760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:54.453 [2024-11-26 11:17:12.558816] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
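The zone_block *ERROR* lines here are deliberate: test_zone_write submits writes that start away from the zone's write pointer or run past its capacity, and the suite passes precisely because the module rejects them. A minimal sketch of the rules being exercised, using invented names (zone_t, check_zone_write) rather than the real vbdev_zone_block internals:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical zone descriptor; field names are illustrative only. */
    typedef struct {
        uint64_t start_lba;  /* first LBA of the zone */
        uint64_t capacity;   /* writable blocks in the zone */
        uint64_t write_ptr;  /* next LBA a write must start at */
        bool     writable;   /* zone is empty/open, i.e. accepts writes */
    } zone_t;

    /* Mirrors the three rejections logged above: bad zone state, a write
     * that does not start at the write pointer, and a write that would
     * exceed the zone capacity. */
    static int check_zone_write(const zone_t *z, uint64_t lba, uint64_t len)
    {
        if (!z->writable)
            return -EINVAL;  /* "write to zone in invalid state" */
        if (lba != z->write_ptr)
            return -EINVAL;  /* "invalid address (lba ..., wp ...)" */
        if (lba + len > z->start_lba + z->capacity)
            return -ENOSPC;  /* "write exceeds zone capacity" */
        return 0;
    }

With the values from the log, a write at lba 0x407 against wp 0x405 returns -EINVAL from check_zone_write, which is exactly the outcome the test asserts.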
00:06:54.453 [2024-11-26 11:17:12.559004] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:54.453 [2024-11-26 11:17:12.559044] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.565621] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:54.453 [2024-11-26 11:17:12.565666] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 Test: test_zone_read ...[2024-11-26 11:17:12.566191] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:54.453 [2024-11-26 11:17:12.566233] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.566300] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:54.453 [2024-11-26 11:17:12.566329] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.566867] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:54.453 passed 00:06:54.453 Test: test_close_zone ...[2024-11-26 11:17:12.566918] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.567226] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.567306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 Test: test_finish_zone ...[2024-11-26 11:17:12.567499] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.567531] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 Test: test_append_zone ...[2024-11-26 11:17:12.568205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.568256] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
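More generally, every *ERROR* line in this passing run follows the same pattern: the test drives an error path on purpose and asserts on the return code, so the module's error log is expected output. A self-contained CUnit sketch of that pattern (write_at_wp and the suite name are made up for illustration):

    #include <CUnit/Basic.h>
    #include <errno.h>
    #include <stdint.h>

    /* Toy validator standing in for any check above: a zoned write must
     * start exactly at the write pointer. */
    static int write_at_wp(uint64_t lba, uint64_t wp)
    {
        return lba == wp ? 0 : -EINVAL;
    }

    /* The rejection is the success condition; the code under test is free
     * to print an *ERROR* line while the test still passes. */
    static void test_write_off_wp_is_rejected(void)
    {
        CU_ASSERT_EQUAL(write_at_wp(0x407, 0x405), -EINVAL);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return 1;
        CU_pSuite s = CU_add_suite("error_path_sketch", NULL, NULL);
        CU_add_test(s, "write off wp rejected", test_write_off_wp_is_rejected);
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        unsigned failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }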
00:06:54.453 [2024-11-26 11:17:12.568618] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:54.453 [2024-11-26 11:17:12.568663] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 [2024-11-26 11:17:12.568711] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:54.453 [2024-11-26 11:17:12.568728] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 passed 00:06:54.453 00:06:54.453 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.453 suites 1 1 n/a 0 0 00:06:54.453 tests 11 11 11 0 0 00:06:54.453 asserts 3437 3437 3437 0 n/a 00:06:54.453 00:06:54.453 Elapsed time = 0.036 seconds 00:06:54.453 [2024-11-26 11:17:12.581603] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:54.453 [2024-11-26 11:17:12.581660] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:54.453 11:17:12 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:54.453 00:06:54.453 00:06:54.454 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.454 http://cunit.sourceforge.net/ 00:06:54.454 00:06:54.454 00:06:54.454 Suite: bdev 00:06:54.454 Test: basic ...[2024-11-26 11:17:12.650197] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x58870b0c4ec1): Operation not permitted (rc=-1) 00:06:54.454 [2024-11-26 11:17:12.650465] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x58870b0c4e80): Operation not permitted (rc=-1) 00:06:54.454 [2024-11-26 11:17:12.650506] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x58870b0c4ec1): Operation not permitted (rc=-1) 00:06:54.454 passed 00:06:54.713 Test: unregister_and_close ...passed 00:06:54.713 Test: unregister_and_close_different_threads ...passed 00:06:54.713 Test: basic_qos ...passed 00:06:54.713 Test: put_channel_during_reset ...passed 00:06:54.713 Test: aborted_reset ...passed 00:06:54.713 Test: aborted_reset_no_outstanding_io ...passed 00:06:54.713 Test: io_during_reset ...passed 00:06:54.713 Test: reset_completions ...passed 00:06:54.972 Test: io_during_qos_queue ...passed 00:06:54.972 Test: io_during_qos_reset ...passed 00:06:54.972 Test: enomem ...passed 00:06:54.972 Test: enomem_multi_bdev ...passed 00:06:54.972 Test: enomem_multi_bdev_unregister ...passed 00:06:54.972 Test: enomem_multi_io_target ...passed 00:06:54.972 Test: qos_dynamic_enable ...passed 00:06:54.972 Test: bdev_histograms_mt ...passed 00:06:54.972 Test: bdev_set_io_timeout_mt ...[2024-11-26 11:17:13.190418] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:06:54.972 passed 00:06:54.972 Test: lock_lba_range_then_submit_io ...[2024-11-26 11:17:13.200371] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x58870b0c4e40 already registered (old:0x5130000003c0 new:0x513000000c80) 00:06:55.230 passed 00:06:55.231 Test: unregister_during_reset 
...passed 00:06:55.231 Test: event_notify_and_close ...passed 00:06:55.231 Test: unregister_and_qos_poller ...passed 00:06:55.231 Suite: bdev_wrong_thread 00:06:55.231 Test: spdk_bdev_register_wt ...[2024-11-26 11:17:13.285183] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x518000001480 (0x518000001480) 00:06:55.231 passed 00:06:55.231 Test: spdk_bdev_examine_wt ...[2024-11-26 11:17:13.285492] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x518000001480 (0x518000001480) 00:06:55.231 passed 00:06:55.231 00:06:55.231 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.231 suites 2 2 n/a 0 0 00:06:55.231 tests 24 24 24 0 0 00:06:55.231 asserts 621 621 621 0 n/a 00:06:55.231 00:06:55.231 Elapsed time = 0.639 seconds 00:06:55.231 00:06:55.231 real 0m2.674s 00:06:55.231 user 0m1.241s 00:06:55.231 sys 0m1.429s 00:06:55.231 11:17:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.231 11:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:55.231 ************************************ 00:06:55.231 END TEST unittest_bdev 00:06:55.231 ************************************ 00:06:55.231 11:17:13 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.231 11:17:13 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.231 11:17:13 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.231 11:17:13 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:55.231 11:17:13 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:55.231 11:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.231 11:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.231 11:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:55.231 ************************************ 00:06:55.231 START TEST unittest_bdev_raid5f 00:06:55.231 ************************************ 00:06:55.231 11:17:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:55.231 00:06:55.231 00:06:55.231 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.231 http://cunit.sourceforge.net/ 00:06:55.231 00:06:55.231 00:06:55.231 Suite: raid5f 00:06:55.231 Test: test_raid5f_start ...passed 00:06:55.800 Test: test_raid5f_submit_read_request ...passed 00:06:55.800 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:59.083 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:13.964 Test: test_raid5f_chunk_write_error ...passed 00:07:20.524 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:23.812 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:50.386 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:50.386 00:07:50.386 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.386 suites 1 1 n/a 0 0 00:07:50.386 tests 8 8 8 0 0 00:07:50.386 asserts 351864 351864 351864 0 n/a 00:07:50.386 00:07:50.386 Elapsed time = 52.583 seconds 00:07:50.386 00:07:50.386 real 0m52.675s 00:07:50.386 user 0m50.236s 00:07:50.386 sys 0m2.422s 00:07:50.386 
11:18:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.386 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 ************************************ 00:07:50.386 END TEST unittest_bdev_raid5f 00:07:50.386 ************************************ 00:07:50.386 11:18:06 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:07:50.386 11:18:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.386 11:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.386 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 ************************************ 00:07:50.386 START TEST unittest_blob_blobfs 00:07:50.386 ************************************ 00:07:50.386 11:18:06 -- common/autotest_common.sh@1114 -- # unittest_blob 00:07:50.386 11:18:06 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:50.386 11:18:06 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:50.386 00:07:50.386 00:07:50.386 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.386 http://cunit.sourceforge.net/ 00:07:50.386 00:07:50.386 00:07:50.386 Suite: blob_nocopy_noextent 00:07:50.386 Test: blob_init ...[2024-11-26 11:18:06.146084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:50.386 passed 00:07:50.386 Test: blob_thin_provision ...passed 00:07:50.386 Test: blob_read_only ...passed 00:07:50.386 Test: bs_load ...[2024-11-26 11:18:06.231103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:50.386 passed 00:07:50.386 Test: bs_load_custom_cluster_size ...passed 00:07:50.386 Test: bs_load_after_failed_grow ...passed 00:07:50.386 Test: bs_cluster_sz ...[2024-11-26 11:18:06.254652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:50.386 [2024-11-26 11:18:06.255087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
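The bs_cluster_sz cases around this point probe option validation at blobstore init: zeroed options, a cluster size below the 4 KiB page size, and metadata that would not fit in the available clusters are all refused. A rough sketch of those checks, with an assumed option layout (bs_opts_sketch), not SPDK's actual bs_opts_verify/bs_alloc:

    #include <errno.h>
    #include <stdint.h>

    #define BS_PAGE_SIZE 4096u  /* blobstore metadata page size */

    /* Assumed option block, for illustration only. */
    struct bs_opts_sketch {
        uint32_t cluster_sz;    /* bytes per cluster */
        uint32_t num_md_pages;  /* pages reserved for metadata */
    };

    static int verify_bs_opts(const struct bs_opts_sketch *o, uint64_t dev_bytes)
    {
        if (o->cluster_sz == 0 || o->num_md_pages == 0)
            return -EINVAL;  /* "options cannot be set to 0" */
        if (o->cluster_sz < BS_PAGE_SIZE)
            return -EINVAL;  /* "Cluster size 4095 is smaller than page size 4096" */

        uint64_t clusters    = dev_bytes / o->cluster_sz;
        uint64_t md_bytes    = (uint64_t)o->num_md_pages * BS_PAGE_SIZE;
        uint64_t md_clusters = (md_bytes + o->cluster_sz - 1) / o->cluster_sz;
        if (md_clusters > clusters)
            return -ENOSPC;  /* metadata cannot use more clusters than available */
        return 0;
    }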
00:07:50.386 [2024-11-26 11:18:06.255201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:50.386 passed 00:07:50.386 Test: bs_resize_md ...passed 00:07:50.386 Test: bs_destroy ...passed 00:07:50.386 Test: bs_type ...passed 00:07:50.386 Test: bs_super_block ...passed 00:07:50.386 Test: bs_test_recover_cluster_count ...passed 00:07:50.386 Test: bs_grow_live ...passed 00:07:50.386 Test: bs_grow_live_no_space ...passed 00:07:50.386 Test: bs_test_grow ...passed 00:07:50.386 Test: blob_serialize_test ...passed 00:07:50.386 Test: super_block_crc ...passed 00:07:50.386 Test: blob_thin_prov_write_count_io ...passed 00:07:50.386 Test: bs_load_iter_test ...passed 00:07:50.386 Test: blob_relations ...[2024-11-26 11:18:06.373032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.386 [2024-11-26 11:18:06.373152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.386 [2024-11-26 11:18:06.374128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.386 [2024-11-26 11:18:06.374190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.386 passed 00:07:50.386 Test: blob_relations2 ...[2024-11-26 11:18:06.384530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.386 [2024-11-26 11:18:06.384618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.386 [2024-11-26 11:18:06.384662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.386 [2024-11-26 11:18:06.384675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.386 [2024-11-26 11:18:06.386248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.386 [2024-11-26 11:18:06.386324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.386 [2024-11-26 11:18:06.386770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.386 [2024-11-26 11:18:06.386812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.386 passed 00:07:50.386 Test: blob_relations3 ...passed 00:07:50.386 Test: blobstore_clean_power_failure ...passed 00:07:50.386 Test: blob_delete_snapshot_power_failure ...[2024-11-26 11:18:06.492135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.387 [2024-11-26 11:18:06.500803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.387 [2024-11-26 11:18:06.500933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.387 [2024-11-26 11:18:06.500961] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:06.509311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.387 [2024-11-26 11:18:06.509392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.387 [2024-11-26 11:18:06.509432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.387 [2024-11-26 11:18:06.509453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:06.517667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:50.387 [2024-11-26 11:18:06.517798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:06.527107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:50.387 [2024-11-26 11:18:06.527287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:06.536398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:50.387 [2024-11-26 11:18:06.536525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 passed 00:07:50.387 Test: blob_create_snapshot_power_failure ...[2024-11-26 11:18:06.561598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.387 [2024-11-26 11:18:06.578403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.387 [2024-11-26 11:18:06.587729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:50.387 passed 00:07:50.387 Test: blob_io_unit ...passed 00:07:50.387 Test: blob_io_unit_compatibility ...passed 00:07:50.387 Test: blob_ext_md_pages ...passed 00:07:50.387 Test: blob_esnap_io_4096_4096 ...passed 00:07:50.387 Test: blob_esnap_io_512_512 ...passed 00:07:50.387 Test: blob_esnap_io_4096_512 ...passed 00:07:50.387 Test: blob_esnap_io_512_4096 ...passed 00:07:50.387 Suite: blob_bs_nocopy_noextent 00:07:50.387 Test: blob_open ...passed 00:07:50.387 Test: blob_create ...[2024-11-26 11:18:06.755218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:50.387 passed 00:07:50.387 Test: blob_create_loop ...passed 00:07:50.387 Test: blob_create_fail ...[2024-11-26 11:18:06.827572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.387 passed 00:07:50.387 Test: blob_create_internal ...passed 00:07:50.387 Test: blob_create_zero_extent ...passed 00:07:50.387 Test: blob_snapshot ...passed 00:07:50.387 Test: blob_clone ...passed 00:07:50.387 Test: blob_inflate ...[2024-11-26 11:18:06.946751] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:50.387 passed 00:07:50.387 Test: blob_delete ...passed 00:07:50.387 Test: blob_resize_test ...[2024-11-26 11:18:06.987767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:50.387 passed 00:07:50.387 Test: channel_ops ...passed 00:07:50.387 Test: blob_super ...passed 00:07:50.387 Test: blob_rw_verify_iov ...passed 00:07:50.387 Test: blob_unmap ...passed 00:07:50.387 Test: blob_iter ...passed 00:07:50.387 Test: blob_parse_md ...passed 00:07:50.387 Test: bs_load_pending_removal ...passed 00:07:50.387 Test: bs_unload ...[2024-11-26 11:18:07.152996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:50.387 passed 00:07:50.387 Test: bs_usable_clusters ...passed 00:07:50.387 Test: blob_crc ...[2024-11-26 11:18:07.190234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:50.387 [2024-11-26 11:18:07.190404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:50.387 passed 00:07:50.387 Test: blob_flags ...passed 00:07:50.387 Test: bs_version ...passed 00:07:50.387 Test: blob_set_xattrs_test ...[2024-11-26 11:18:07.253988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.387 [2024-11-26 11:18:07.254108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.387 passed 00:07:50.387 Test: blob_thin_prov_alloc ...passed 00:07:50.387 Test: blob_insert_cluster_msg_test ...passed 00:07:50.387 Test: blob_thin_prov_rw ...passed 00:07:50.387 Test: blob_thin_prov_rle ...passed 00:07:50.387 Test: blob_thin_prov_rw_iov ...passed 00:07:50.387 Test: blob_snapshot_rw ...passed 00:07:50.387 Test: blob_snapshot_rw_iov ...passed 00:07:50.387 Test: blob_inflate_rw ...passed 00:07:50.387 Test: blob_snapshot_freeze_io ...passed 00:07:50.387 Test: blob_operation_split_rw ...passed 00:07:50.387 Test: blob_operation_split_rw_iov ...passed 00:07:50.387 Test: blob_simultaneous_operations ...[2024-11-26 11:18:07.988402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.387 [2024-11-26 11:18:07.988513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:07.989690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.387 [2024-11-26 11:18:07.989743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:08.000192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.387 [2024-11-26 11:18:08.000281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:08.000437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:50.387 [2024-11-26 11:18:08.000462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 passed 00:07:50.387 Test: blob_persist_test ...passed 00:07:50.387 Test: blob_decouple_snapshot ...passed 00:07:50.387 Test: blob_seek_io_unit ...passed 00:07:50.387 Test: blob_nested_freezes ...passed 00:07:50.387 Suite: blob_blob_nocopy_noextent 00:07:50.387 Test: blob_write ...passed 00:07:50.387 Test: blob_read ...passed 00:07:50.387 Test: blob_rw_verify ...passed 00:07:50.387 Test: blob_rw_verify_iov_nomem ...passed 00:07:50.387 Test: blob_rw_iov_read_only ...passed 00:07:50.387 Test: blob_xattr ...passed 00:07:50.387 Test: blob_dirty_shutdown ...passed 00:07:50.387 Test: blob_is_degraded ...passed 00:07:50.387 Suite: blob_esnap_bs_nocopy_noextent 00:07:50.387 Test: blob_esnap_create ...passed 00:07:50.387 Test: blob_esnap_thread_add_remove ...passed 00:07:50.387 Test: blob_esnap_clone_snapshot ...passed 00:07:50.387 Test: blob_esnap_clone_inflate ...passed 00:07:50.387 Test: blob_esnap_clone_decouple ...passed 00:07:50.387 Test: blob_esnap_clone_reload ...passed 00:07:50.387 Test: blob_esnap_hotplug ...passed 00:07:50.387 Suite: blob_nocopy_extent 00:07:50.387 Test: blob_init ...[2024-11-26 11:18:08.438656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:50.387 passed 00:07:50.387 Test: blob_thin_provision ...passed 00:07:50.387 Test: blob_read_only ...passed 00:07:50.387 Test: bs_load ...[2024-11-26 11:18:08.466946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:50.387 passed 00:07:50.387 Test: bs_load_custom_cluster_size ...passed 00:07:50.387 Test: bs_load_after_failed_grow ...passed 00:07:50.387 Test: bs_cluster_sz ...[2024-11-26 11:18:08.485196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:50.387 [2024-11-26 11:18:08.485467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
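The repeated bs_is_blob_deletable failures in the blob_relations and blob_simultaneous_operations tests above encode two refusal rules: an open snapshot cannot be removed, and neither can a snapshot with more than one clone (a single clone can be re-parented onto the snapshot's own parent, so that case is allowed). A compact sketch under those assumptions, with an invented blob_sketch type:

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Minimal stand-in for a blob record; the real structures are richer. */
    struct blob_sketch {
        bool   is_snapshot;
        bool   is_open;      /* has an active open reference */
        size_t clone_count;  /* clones whose parent is this snapshot */
    };

    /* The two cases the log shows being rejected. */
    static int is_blob_deletable_sketch(const struct blob_sketch *b)
    {
        if (b->is_snapshot && b->is_open)
            return -EBUSY;   /* "Cannot remove snapshot because it is open" */
        if (b->is_snapshot && b->clone_count > 1)
            return -EBUSY;   /* "Cannot remove snapshot with more than one clone" */
        return 0;            /* regular blobs and single-clone snapshots: OK */
    }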
00:07:50.387 [2024-11-26 11:18:08.485550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:50.387 passed 00:07:50.387 Test: bs_resize_md ...passed 00:07:50.387 Test: bs_destroy ...passed 00:07:50.387 Test: bs_type ...passed 00:07:50.387 Test: bs_super_block ...passed 00:07:50.387 Test: bs_test_recover_cluster_count ...passed 00:07:50.387 Test: bs_grow_live ...passed 00:07:50.387 Test: bs_grow_live_no_space ...passed 00:07:50.387 Test: bs_test_grow ...passed 00:07:50.387 Test: blob_serialize_test ...passed 00:07:50.387 Test: super_block_crc ...passed 00:07:50.387 Test: blob_thin_prov_write_count_io ...passed 00:07:50.387 Test: bs_load_iter_test ...passed 00:07:50.387 Test: blob_relations ...[2024-11-26 11:18:08.576871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.387 [2024-11-26 11:18:08.577006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:08.577987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.387 [2024-11-26 11:18:08.578047] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 passed 00:07:50.387 Test: blob_relations2 ...[2024-11-26 11:18:08.586981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.387 [2024-11-26 11:18:08.587038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:08.587078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.387 [2024-11-26 11:18:08.587090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:08.588518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.387 [2024-11-26 11:18:08.588563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.387 [2024-11-26 11:18:08.589020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.388 [2024-11-26 11:18:08.589064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.388 passed 00:07:50.388 Test: blob_relations3 ...passed 00:07:50.647 Test: blobstore_clean_power_failure ...passed 00:07:50.647 Test: blob_delete_snapshot_power_failure ...[2024-11-26 11:18:08.680079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:50.647 [2024-11-26 11:18:08.688862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:50.647 [2024-11-26 11:18:08.697765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.647 [2024-11-26 11:18:08.697832] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.647 [2024-11-26 11:18:08.697859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.647 [2024-11-26 11:18:08.706698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:50.647 [2024-11-26 11:18:08.706782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.647 [2024-11-26 11:18:08.706820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.647 [2024-11-26 11:18:08.706839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.647 [2024-11-26 11:18:08.715685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:50.647 [2024-11-26 11:18:08.715757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.647 [2024-11-26 11:18:08.715780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.647 [2024-11-26 11:18:08.715802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.647 [2024-11-26 11:18:08.724683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:50.647 [2024-11-26 11:18:08.724777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.647 [2024-11-26 11:18:08.733100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:50.647 [2024-11-26 11:18:08.733202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.647 [2024-11-26 11:18:08.741818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:50.647 [2024-11-26 11:18:08.741927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.647 passed 00:07:50.647 Test: blob_create_snapshot_power_failure ...[2024-11-26 11:18:08.768374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.647 [2024-11-26 11:18:08.776634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:50.647 [2024-11-26 11:18:08.792863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:50.647 [2024-11-26 11:18:08.801298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:50.647 passed 00:07:50.647 Test: blob_io_unit ...passed 00:07:50.647 Test: blob_io_unit_compatibility ...passed 00:07:50.647 Test: blob_ext_md_pages ...passed 00:07:50.647 Test: blob_esnap_io_4096_4096 ...passed 00:07:50.906 Test: blob_esnap_io_512_512 ...passed 00:07:50.906 Test: blob_esnap_io_4096_512 ...passed 00:07:50.906 Test: 
blob_esnap_io_512_4096 ...passed 00:07:50.906 Suite: blob_bs_nocopy_extent 00:07:50.906 Test: blob_open ...passed 00:07:50.906 Test: blob_create ...[2024-11-26 11:18:08.948605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:50.906 passed 00:07:50.906 Test: blob_create_loop ...passed 00:07:50.906 Test: blob_create_fail ...[2024-11-26 11:18:09.019669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.906 passed 00:07:50.906 Test: blob_create_internal ...passed 00:07:50.906 Test: blob_create_zero_extent ...passed 00:07:50.906 Test: blob_snapshot ...passed 00:07:50.906 Test: blob_clone ...passed 00:07:50.906 Test: blob_inflate ...[2024-11-26 11:18:09.121807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:50.906 passed 00:07:51.164 Test: blob_delete ...passed 00:07:51.164 Test: blob_resize_test ...[2024-11-26 11:18:09.159427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:51.164 passed 00:07:51.164 Test: channel_ops ...passed 00:07:51.164 Test: blob_super ...passed 00:07:51.164 Test: blob_rw_verify_iov ...passed 00:07:51.164 Test: blob_unmap ...passed 00:07:51.164 Test: blob_iter ...passed 00:07:51.164 Test: blob_parse_md ...passed 00:07:51.164 Test: bs_load_pending_removal ...passed 00:07:51.164 Test: bs_unload ...[2024-11-26 11:18:09.307459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:51.164 passed 00:07:51.164 Test: bs_usable_clusters ...passed 00:07:51.164 Test: blob_crc ...[2024-11-26 11:18:09.347606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:51.164 [2024-11-26 11:18:09.347751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:51.164 passed 00:07:51.164 Test: blob_flags ...passed 00:07:51.164 Test: bs_version ...passed 00:07:51.423 Test: blob_set_xattrs_test ...[2024-11-26 11:18:09.405900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.423 [2024-11-26 11:18:09.406011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.423 passed 00:07:51.423 Test: blob_thin_prov_alloc ...passed 00:07:51.423 Test: blob_insert_cluster_msg_test ...passed 00:07:51.423 Test: blob_thin_prov_rw ...passed 00:07:51.423 Test: blob_thin_prov_rle ...passed 00:07:51.423 Test: blob_thin_prov_rw_iov ...passed 00:07:51.423 Test: blob_snapshot_rw ...passed 00:07:51.423 Test: blob_snapshot_rw_iov ...passed 00:07:51.681 Test: blob_inflate_rw ...passed 00:07:51.681 Test: blob_snapshot_freeze_io ...passed 00:07:51.940 Test: blob_operation_split_rw ...passed 00:07:51.940 Test: blob_operation_split_rw_iov ...passed 00:07:51.940 Test: blob_simultaneous_operations ...[2024-11-26 11:18:10.147137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.940 [2024-11-26 
11:18:10.147235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.940 [2024-11-26 11:18:10.148522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.940 [2024-11-26 11:18:10.148579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.940 [2024-11-26 11:18:10.160015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.940 [2024-11-26 11:18:10.160071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.940 [2024-11-26 11:18:10.160187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.940 [2024-11-26 11:18:10.160206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.940 passed 00:07:52.209 Test: blob_persist_test ...passed 00:07:52.209 Test: blob_decouple_snapshot ...passed 00:07:52.209 Test: blob_seek_io_unit ...passed 00:07:52.209 Test: blob_nested_freezes ...passed 00:07:52.209 Suite: blob_blob_nocopy_extent 00:07:52.209 Test: blob_write ...passed 00:07:52.209 Test: blob_read ...passed 00:07:52.209 Test: blob_rw_verify ...passed 00:07:52.209 Test: blob_rw_verify_iov_nomem ...passed 00:07:52.209 Test: blob_rw_iov_read_only ...passed 00:07:52.209 Test: blob_xattr ...passed 00:07:52.209 Test: blob_dirty_shutdown ...passed 00:07:52.484 Test: blob_is_degraded ...passed 00:07:52.484 Suite: blob_esnap_bs_nocopy_extent 00:07:52.484 Test: blob_esnap_create ...passed 00:07:52.484 Test: blob_esnap_thread_add_remove ...passed 00:07:52.484 Test: blob_esnap_clone_snapshot ...passed 00:07:52.484 Test: blob_esnap_clone_inflate ...passed 00:07:52.484 Test: blob_esnap_clone_decouple ...passed 00:07:52.484 Test: blob_esnap_clone_reload ...passed 00:07:52.484 Test: blob_esnap_hotplug ...passed 00:07:52.484 Suite: blob_copy_noextent 00:07:52.484 Test: blob_init ...[2024-11-26 11:18:10.595931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:52.484 passed 00:07:52.484 Test: blob_thin_provision ...passed 00:07:52.484 Test: blob_read_only ...passed 00:07:52.484 Test: bs_load ...[2024-11-26 11:18:10.624457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:52.484 passed 00:07:52.484 Test: bs_load_custom_cluster_size ...passed 00:07:52.484 Test: bs_load_after_failed_grow ...passed 00:07:52.484 Test: bs_cluster_sz ...[2024-11-26 11:18:10.640376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:52.484 [2024-11-26 11:18:10.640558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
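The bs_load and blob_crc tests in each blob suite fail a load in two distinct ways: the metadata page carries the wrong blob ID, or its checksum does not match the stored value. A sketch of that verify step; the page layout and the toy checksum below are assumptions, not the blobstore's real on-disk format or CRC:

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Assumed 4 KiB metadata page layout, for illustration only. */
    struct md_page_sketch {
        uint64_t blobid;         /* owning blob id stored in the page */
        uint32_t crc;            /* checksum over the payload */
        uint8_t  payload[4084];  /* rest of the page */
    };

    /* Toy checksum standing in for the real CRC32. */
    static uint32_t checksum_sketch(const uint8_t *p, size_t len)
    {
        uint32_t c = 0xffffffffu;
        while (len--)
            c = (c >> 8) ^ ((c ^ *p++) * 0x9e3779b1u);
        return ~c;
    }

    static int md_page_verify(const struct md_page_sketch *pg, uint64_t expected_id)
    {
        if (pg->blobid != expected_id)
            return -EINVAL;  /* "Blobid (...) doesn't match what's in metadata" */
        if (checksum_sketch(pg->payload, sizeof(pg->payload)) != pg->crc)
            return -EILSEQ;  /* "Metadata page N crc mismatch" */
        return 0;
    }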
00:07:52.484 [2024-11-26 11:18:10.640596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:52.484 passed 00:07:52.484 Test: bs_resize_md ...passed 00:07:52.484 Test: bs_destroy ...passed 00:07:52.484 Test: bs_type ...passed 00:07:52.484 Test: bs_super_block ...passed 00:07:52.484 Test: bs_test_recover_cluster_count ...passed 00:07:52.484 Test: bs_grow_live ...passed 00:07:52.484 Test: bs_grow_live_no_space ...passed 00:07:52.484 Test: bs_test_grow ...passed 00:07:52.484 Test: blob_serialize_test ...passed 00:07:52.484 Test: super_block_crc ...passed 00:07:52.746 Test: blob_thin_prov_write_count_io ...passed 00:07:52.746 Test: bs_load_iter_test ...passed 00:07:52.746 Test: blob_relations ...[2024-11-26 11:18:10.739911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.746 [2024-11-26 11:18:10.740036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.740626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.746 [2024-11-26 11:18:10.740665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 passed 00:07:52.746 Test: blob_relations2 ...[2024-11-26 11:18:10.749211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.746 [2024-11-26 11:18:10.749267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.749304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.746 [2024-11-26 11:18:10.749315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.750241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.746 [2024-11-26 11:18:10.750326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.750594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.746 [2024-11-26 11:18:10.750618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 passed 00:07:52.746 Test: blob_relations3 ...passed 00:07:52.746 Test: blobstore_clean_power_failure ...passed 00:07:52.746 Test: blob_delete_snapshot_power_failure ...[2024-11-26 11:18:10.843901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:52.746 [2024-11-26 11:18:10.852692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:52.746 [2024-11-26 11:18:10.852776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:52.746 [2024-11-26 11:18:10.852800] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.861763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:52.746 [2024-11-26 11:18:10.861937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:52.746 [2024-11-26 11:18:10.861956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:52.746 [2024-11-26 11:18:10.861975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.870628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:52.746 [2024-11-26 11:18:10.870795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.879544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:52.746 [2024-11-26 11:18:10.879700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 [2024-11-26 11:18:10.888761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:52.746 [2024-11-26 11:18:10.888884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.746 passed 00:07:52.746 Test: blob_create_snapshot_power_failure ...[2024-11-26 11:18:10.914138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:52.746 [2024-11-26 11:18:10.929705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:52.746 [2024-11-26 11:18:10.938024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:52.746 passed 00:07:52.746 Test: blob_io_unit ...passed 00:07:53.005 Test: blob_io_unit_compatibility ...passed 00:07:53.005 Test: blob_ext_md_pages ...passed 00:07:53.005 Test: blob_esnap_io_4096_4096 ...passed 00:07:53.005 Test: blob_esnap_io_512_512 ...passed 00:07:53.005 Test: blob_esnap_io_4096_512 ...passed 00:07:53.005 Test: blob_esnap_io_512_4096 ...passed 00:07:53.005 Suite: blob_bs_copy_noextent 00:07:53.005 Test: blob_open ...passed 00:07:53.005 Test: blob_create ...[2024-11-26 11:18:11.099593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:53.005 passed 00:07:53.005 Test: blob_create_loop ...passed 00:07:53.005 Test: blob_create_fail ...[2024-11-26 11:18:11.162851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.005 passed 00:07:53.005 Test: blob_create_internal ...passed 00:07:53.005 Test: blob_create_zero_extent ...passed 00:07:53.005 Test: blob_snapshot ...passed 00:07:53.265 Test: blob_clone ...passed 00:07:53.265 Test: blob_inflate ...[2024-11-26 11:18:11.262594] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:53.265 passed 00:07:53.265 Test: blob_delete ...passed 00:07:53.265 Test: blob_resize_test ...[2024-11-26 11:18:11.304540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:53.265 passed 00:07:53.265 Test: channel_ops ...passed 00:07:53.265 Test: blob_super ...passed 00:07:53.265 Test: blob_rw_verify_iov ...passed 00:07:53.265 Test: blob_unmap ...passed 00:07:53.265 Test: blob_iter ...passed 00:07:53.265 Test: blob_parse_md ...passed 00:07:53.265 Test: bs_load_pending_removal ...passed 00:07:53.265 Test: bs_unload ...[2024-11-26 11:18:11.470698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:53.265 passed 00:07:53.523 Test: bs_usable_clusters ...passed 00:07:53.523 Test: blob_crc ...[2024-11-26 11:18:11.512844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:53.523 [2024-11-26 11:18:11.513015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:53.523 passed 00:07:53.523 Test: blob_flags ...passed 00:07:53.523 Test: bs_version ...passed 00:07:53.523 Test: blob_set_xattrs_test ...[2024-11-26 11:18:11.579620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.523 [2024-11-26 11:18:11.579749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.523 passed 00:07:53.523 Test: blob_thin_prov_alloc ...passed 00:07:53.523 Test: blob_insert_cluster_msg_test ...passed 00:07:53.523 Test: blob_thin_prov_rw ...passed 00:07:53.782 Test: blob_thin_prov_rle ...passed 00:07:53.782 Test: blob_thin_prov_rw_iov ...passed 00:07:53.782 Test: blob_snapshot_rw ...passed 00:07:53.782 Test: blob_snapshot_rw_iov ...passed 00:07:54.041 Test: blob_inflate_rw ...passed 00:07:54.041 Test: blob_snapshot_freeze_io ...passed 00:07:54.041 Test: blob_operation_split_rw ...passed 00:07:54.301 Test: blob_operation_split_rw_iov ...passed 00:07:54.301 Test: blob_simultaneous_operations ...[2024-11-26 11:18:12.306186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.301 [2024-11-26 11:18:12.306282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.301 [2024-11-26 11:18:12.306699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.301 [2024-11-26 11:18:12.306733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.301 [2024-11-26 11:18:12.309031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.301 [2024-11-26 11:18:12.309084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.301 [2024-11-26 11:18:12.309161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:54.301 [2024-11-26 11:18:12.309178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.301 passed 00:07:54.301 Test: blob_persist_test ...passed 00:07:54.301 Test: blob_decouple_snapshot ...passed 00:07:54.301 Test: blob_seek_io_unit ...passed 00:07:54.301 Test: blob_nested_freezes ...passed 00:07:54.301 Suite: blob_blob_copy_noextent 00:07:54.301 Test: blob_write ...passed 00:07:54.301 Test: blob_read ...passed 00:07:54.301 Test: blob_rw_verify ...passed 00:07:54.301 Test: blob_rw_verify_iov_nomem ...passed 00:07:54.301 Test: blob_rw_iov_read_only ...passed 00:07:54.301 Test: blob_xattr ...passed 00:07:54.560 Test: blob_dirty_shutdown ...passed 00:07:54.560 Test: blob_is_degraded ...passed 00:07:54.560 Suite: blob_esnap_bs_copy_noextent 00:07:54.560 Test: blob_esnap_create ...passed 00:07:54.560 Test: blob_esnap_thread_add_remove ...passed 00:07:54.560 Test: blob_esnap_clone_snapshot ...passed 00:07:54.560 Test: blob_esnap_clone_inflate ...passed 00:07:54.560 Test: blob_esnap_clone_decouple ...passed 00:07:54.560 Test: blob_esnap_clone_reload ...passed 00:07:54.560 Test: blob_esnap_hotplug ...passed 00:07:54.560 Suite: blob_copy_extent 00:07:54.560 Test: blob_init ...[2024-11-26 11:18:12.710142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:54.560 passed 00:07:54.560 Test: blob_thin_provision ...passed 00:07:54.560 Test: blob_read_only ...passed 00:07:54.560 Test: bs_load ...[2024-11-26 11:18:12.738716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:54.560 passed 00:07:54.560 Test: bs_load_custom_cluster_size ...passed 00:07:54.560 Test: bs_load_after_failed_grow ...passed 00:07:54.560 Test: bs_cluster_sz ...[2024-11-26 11:18:12.754389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:54.560 [2024-11-26 11:18:12.754602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
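The bs_cluster_sz case around this point is pure option validation: the *ERROR* lines are expected output from negative-path assertions, covering spdk_bs_init rejecting a zeroed option, a metadata reservation larger than the device can hold, and a cluster size smaller than the 4096-byte metadata page. A minimal sketch of how a caller hits that last check, assuming the public blobstore API from spdk/blob.h; the dev argument and the polling loop are left to the caller, as in the unit tests:

#include "spdk/blob.h"

/* Sketch only: the completion callback records bserrno so the caller
 * can poll for it; with cluster_sz = 4095 the expected result is a
 * negative errno and bs stays NULL. */
static void
init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
    int *rc = cb_arg;

    (void)bs;
    *rc = bserrno;
}

static void
try_undersized_cluster(struct spdk_bs_dev *dev)
{
    struct spdk_bs_opts opts;
    int rc = 0;

    spdk_bs_opts_init(&opts, sizeof(opts));
    opts.cluster_sz = 4095; /* one byte below the 4096-byte page size */
    spdk_bs_init(dev, &opts, init_done, &rc);
    /* the unit tests then run the thread/pollers until rc is set */
}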
00:07:54.560 [2024-11-26 11:18:12.754644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:54.560 passed 00:07:54.560 Test: bs_resize_md ...passed 00:07:54.560 Test: bs_destroy ...passed 00:07:54.819 Test: bs_type ...passed 00:07:54.819 Test: bs_super_block ...passed 00:07:54.819 Test: bs_test_recover_cluster_count ...passed 00:07:54.819 Test: bs_grow_live ...passed 00:07:54.819 Test: bs_grow_live_no_space ...passed 00:07:54.819 Test: bs_test_grow ...passed 00:07:54.819 Test: blob_serialize_test ...passed 00:07:54.819 Test: super_block_crc ...passed 00:07:54.819 Test: blob_thin_prov_write_count_io ...passed 00:07:54.819 Test: bs_load_iter_test ...passed 00:07:54.819 Test: blob_relations ...[2024-11-26 11:18:12.850487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:54.819 [2024-11-26 11:18:12.850577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:12.851501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:54.819 [2024-11-26 11:18:12.851558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 passed 00:07:54.819 Test: blob_relations2 ...[2024-11-26 11:18:12.860367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:54.819 [2024-11-26 11:18:12.860456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:12.860523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:54.819 [2024-11-26 11:18:12.860535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:12.861938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:54.819 [2024-11-26 11:18:12.862007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:12.862437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:54.819 [2024-11-26 11:18:12.862481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 passed 00:07:54.819 Test: blob_relations3 ...passed 00:07:54.819 Test: blobstore_clean_power_failure ...passed 00:07:54.819 Test: blob_delete_snapshot_power_failure ...[2024-11-26 11:18:12.967028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:54.819 [2024-11-26 11:18:12.979313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:54.819 [2024-11-26 11:18:12.988363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:54.819 [2024-11-26 11:18:12.988458] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:54.819 [2024-11-26 11:18:12.988495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:12.997595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:54.819 [2024-11-26 11:18:12.997727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:54.819 [2024-11-26 11:18:12.997746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:54.819 [2024-11-26 11:18:12.997766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:13.006672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:54.819 [2024-11-26 11:18:13.006771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:54.819 [2024-11-26 11:18:13.006793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:54.819 [2024-11-26 11:18:13.006814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:13.015900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:54.819 [2024-11-26 11:18:13.016044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:13.024771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:54.819 [2024-11-26 11:18:13.024929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 [2024-11-26 11:18:13.032984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:54.819 [2024-11-26 11:18:13.033076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.819 passed 00:07:55.078 Test: blob_create_snapshot_power_failure ...[2024-11-26 11:18:13.055610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:55.078 [2024-11-26 11:18:13.063576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:55.078 [2024-11-26 11:18:13.079080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:55.078 [2024-11-26 11:18:13.086837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:55.078 passed 00:07:55.078 Test: blob_io_unit ...passed 00:07:55.078 Test: blob_io_unit_compatibility ...passed 00:07:55.078 Test: blob_ext_md_pages ...passed 00:07:55.078 Test: blob_esnap_io_4096_4096 ...passed 00:07:55.078 Test: blob_esnap_io_512_512 ...passed 00:07:55.078 Test: blob_esnap_io_4096_512 ...passed 00:07:55.078 Test: 
blob_esnap_io_512_4096 ...passed 00:07:55.078 Suite: blob_bs_copy_extent 00:07:55.078 Test: blob_open ...passed 00:07:55.078 Test: blob_create ...[2024-11-26 11:18:13.236355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:55.078 passed 00:07:55.078 Test: blob_create_loop ...passed 00:07:55.078 Test: blob_create_fail ...[2024-11-26 11:18:13.301479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:55.078 passed 00:07:55.337 Test: blob_create_internal ...passed 00:07:55.337 Test: blob_create_zero_extent ...passed 00:07:55.337 Test: blob_snapshot ...passed 00:07:55.337 Test: blob_clone ...passed 00:07:55.337 Test: blob_inflate ...[2024-11-26 11:18:13.410786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:55.337 passed 00:07:55.337 Test: blob_delete ...passed 00:07:55.337 Test: blob_resize_test ...[2024-11-26 11:18:13.451094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:55.337 passed 00:07:55.337 Test: channel_ops ...passed 00:07:55.337 Test: blob_super ...passed 00:07:55.338 Test: blob_rw_verify_iov ...passed 00:07:55.338 Test: blob_unmap ...passed 00:07:55.338 Test: blob_iter ...passed 00:07:55.597 Test: blob_parse_md ...passed 00:07:55.597 Test: bs_load_pending_removal ...passed 00:07:55.597 Test: bs_unload ...[2024-11-26 11:18:13.619684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:55.597 passed 00:07:55.597 Test: bs_usable_clusters ...passed 00:07:55.597 Test: blob_crc ...[2024-11-26 11:18:13.659854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:55.597 [2024-11-26 11:18:13.660002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:55.597 passed 00:07:55.597 Test: blob_flags ...passed 00:07:55.597 Test: bs_version ...passed 00:07:55.597 Test: blob_set_xattrs_test ...[2024-11-26 11:18:13.715600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:55.597 [2024-11-26 11:18:13.715721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:55.597 passed 00:07:55.597 Test: blob_thin_prov_alloc ...passed 00:07:55.855 Test: blob_insert_cluster_msg_test ...passed 00:07:55.856 Test: blob_thin_prov_rw ...passed 00:07:55.856 Test: blob_thin_prov_rle ...passed 00:07:55.856 Test: blob_thin_prov_rw_iov ...passed 00:07:55.856 Test: blob_snapshot_rw ...passed 00:07:55.856 Test: blob_snapshot_rw_iov ...passed 00:07:56.115 Test: blob_inflate_rw ...passed 00:07:56.115 Test: blob_snapshot_freeze_io ...passed 00:07:56.115 Test: blob_operation_split_rw ...passed 00:07:56.374 Test: blob_operation_split_rw_iov ...passed 00:07:56.374 Test: blob_simultaneous_operations ...[2024-11-26 11:18:14.423541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.374 [2024-11-26 
11:18:14.423659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.374 [2024-11-26 11:18:14.424144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.374 [2024-11-26 11:18:14.424176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.374 [2024-11-26 11:18:14.426260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.374 [2024-11-26 11:18:14.426311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.374 [2024-11-26 11:18:14.426407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:56.374 [2024-11-26 11:18:14.426424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.374 passed 00:07:56.374 Test: blob_persist_test ...passed 00:07:56.374 Test: blob_decouple_snapshot ...passed 00:07:56.374 Test: blob_seek_io_unit ...passed 00:07:56.374 Test: blob_nested_freezes ...passed 00:07:56.374 Suite: blob_blob_copy_extent 00:07:56.374 Test: blob_write ...passed 00:07:56.374 Test: blob_read ...passed 00:07:56.374 Test: blob_rw_verify ...passed 00:07:56.374 Test: blob_rw_verify_iov_nomem ...passed 00:07:56.633 Test: blob_rw_iov_read_only ...passed 00:07:56.633 Test: blob_xattr ...passed 00:07:56.633 Test: blob_dirty_shutdown ...passed 00:07:56.633 Test: blob_is_degraded ...passed 00:07:56.633 Suite: blob_esnap_bs_copy_extent 00:07:56.634 Test: blob_esnap_create ...passed 00:07:56.634 Test: blob_esnap_thread_add_remove ...passed 00:07:56.634 Test: blob_esnap_clone_snapshot ...passed 00:07:56.634 Test: blob_esnap_clone_inflate ...passed 00:07:56.634 Test: blob_esnap_clone_decouple ...passed 00:07:56.634 Test: blob_esnap_clone_reload ...passed 00:07:56.634 Test: blob_esnap_hotplug ...passed 00:07:56.634 00:07:56.634 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.634 suites 16 16 n/a 0 0 00:07:56.634 tests 348 348 348 0 0 00:07:56.634 asserts 92605 92605 92605 0 n/a 00:07:56.634 00:07:56.634 Elapsed time = 8.693 seconds 00:07:56.893 11:18:14 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:56.893 00:07:56.893 00:07:56.893 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.893 http://cunit.sourceforge.net/ 00:07:56.893 00:07:56.893 00:07:56.893 Suite: blob_bdev 00:07:56.893 Test: create_bs_dev ...passed 00:07:56.893 Test: create_bs_dev_ro ...[2024-11-26 11:18:14.930938] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:56.893 passed 00:07:56.893 Test: create_bs_dev_rw ...passed 00:07:56.893 Test: claim_bs_dev ...passed 00:07:56.893 Test: claim_bs_dev_ro ...[2024-11-26 11:18:14.931313] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:56.893 passed 00:07:56.893 Test: deferred_destroy_refs ...passed 00:07:56.893 Test: deferred_destroy_channels ...passed 00:07:56.893 Test: deferred_destroy_threads ...passed 00:07:56.893 00:07:56.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.893 suites 1 1 n/a 0 0 00:07:56.893 tests 8 8 8 0 0 00:07:56.893 
asserts 119 119 119 0 n/a 00:07:56.893 00:07:56.893 Elapsed time = 0.001 seconds 00:07:56.893 11:18:14 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:56.893 00:07:56.893 00:07:56.893 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.893 http://cunit.sourceforge.net/ 00:07:56.893 00:07:56.893 00:07:56.893 Suite: tree 00:07:56.893 Test: blobfs_tree_op_test ...passed 00:07:56.893 00:07:56.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.893 suites 1 1 n/a 0 0 00:07:56.893 tests 1 1 1 0 0 00:07:56.893 asserts 27 27 27 0 n/a 00:07:56.893 00:07:56.893 Elapsed time = 0.000 seconds 00:07:56.893 11:18:14 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:56.893 00:07:56.893 00:07:56.893 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.893 http://cunit.sourceforge.net/ 00:07:56.893 00:07:56.893 00:07:56.893 Suite: blobfs_async_ut 00:07:56.893 Test: fs_init ...passed 00:07:56.893 Test: fs_open ...passed 00:07:56.893 Test: fs_create ...passed 00:07:56.893 Test: fs_truncate ...passed 00:07:56.893 Test: fs_rename ...[2024-11-26 11:18:15.070499] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:56.893 passed 00:07:56.893 Test: fs_rw_async ...passed 00:07:56.893 Test: fs_writev_readv_async ...passed 00:07:56.893 Test: tree_find_buffer_ut ...passed 00:07:56.893 Test: channel_ops ...passed 00:07:56.893 Test: channel_ops_sync ...passed 00:07:56.893 00:07:56.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.893 suites 1 1 n/a 0 0 00:07:56.893 tests 10 10 10 0 0 00:07:56.893 asserts 292 292 292 0 n/a 00:07:56.893 00:07:56.893 Elapsed time = 0.116 seconds 00:07:57.152 11:18:15 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:57.152 00:07:57.152 00:07:57.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.152 http://cunit.sourceforge.net/ 00:07:57.152 00:07:57.152 00:07:57.152 Suite: blobfs_sync_ut 00:07:57.152 Test: cache_read_after_write ...[2024-11-26 11:18:15.240216] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:57.152 passed 00:07:57.152 Test: file_length ...passed 00:07:57.152 Test: append_write_to_extend_blob ...passed 00:07:57.152 Test: partial_buffer ...passed 00:07:57.152 Test: cache_write_null_buffer ...passed 00:07:57.152 Test: fs_create_sync ...passed 00:07:57.152 Test: fs_rename_sync ...passed 00:07:57.152 Test: cache_append_no_cache ...passed 00:07:57.152 Test: fs_delete_file_without_close ...passed 00:07:57.152 00:07:57.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.152 suites 1 1 n/a 0 0 00:07:57.152 tests 9 9 9 0 0 00:07:57.152 asserts 345 345 345 0 n/a 00:07:57.152 00:07:57.152 Elapsed time = 0.329 seconds 00:07:57.412 11:18:15 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:57.412 00:07:57.412 00:07:57.412 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.412 http://cunit.sourceforge.net/ 00:07:57.412 00:07:57.412 00:07:57.412 Suite: blobfs_bdev_ut 00:07:57.412 Test: spdk_blobfs_bdev_detect_test ...[2024-11-26 11:18:15.412980] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
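The fs_rename cases above (file=file1 in blobfs_async_ut, file=testfile in blobfs_sync_ut) both drive the same path: asking blobfs to delete a name that was never created completes with an error instead of crashing, and the "Cannot find the file" *ERROR* line is the expected trace of that. A minimal sketch, assuming the public spdk/blobfs.h API and an already-initialized filesystem handle:

#include "spdk/blobfs.h"

/* Sketch only: fserrno is expected to come back negative (file not
 * found) because "file1" does not exist in the filesystem. */
static void
delete_done(void *ctx, int fserrno)
{
    int *rc = ctx;

    *rc = fserrno;
}

static void
delete_missing_file(struct spdk_filesystem *fs)
{
    int rc = 0;

    spdk_fs_delete_file_async(fs, "file1", delete_done, &rc);
    /* poll until delete_done fires; the test asserts rc != 0 */
}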
00:07:57.412 passed 00:07:57.412 Test: spdk_blobfs_bdev_create_test ...passed 00:07:57.412 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:57.412 00:07:57.412 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.412 suites 1 1 n/a 0 0 00:07:57.412 tests 3 3 3 0 0 00:07:57.412 asserts 9 9 9 0 n/a 00:07:57.412 00:07:57.412 Elapsed time = 0.000 seconds 00:07:57.412 [2024-11-26 11:18:15.413219] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:57.412 00:07:57.412 real 0m9.303s 00:07:57.413 user 0m8.835s 00:07:57.413 sys 0m0.642s 00:07:57.413 11:18:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.413 ************************************ 00:07:57.413 END TEST unittest_blob_blobfs 00:07:57.413 ************************************ 00:07:57.413 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.413 11:18:15 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:07:57.413 11:18:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.413 11:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.413 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.413 ************************************ 00:07:57.413 START TEST unittest_event 00:07:57.413 ************************************ 00:07:57.413 11:18:15 -- common/autotest_common.sh@1114 -- # unittest_event 00:07:57.413 11:18:15 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:57.413 00:07:57.413 00:07:57.413 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.413 http://cunit.sourceforge.net/ 00:07:57.413 00:07:57.413 00:07:57.413 Suite: app_suite 00:07:57.413 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:57.413 options: 00:07:57.413 -c, --config JSON config file (default none) 00:07:57.413 --json JSON config file (default none) 00:07:57.413 --json-ignore-init-errors 00:07:57.413 don't exit on invalid config entry 00:07:57.413 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:57.413 -g, --single-file-segments 00:07:57.413 force creating just one hugetlbfs file 00:07:57.413 -h, --help show this usage 00:07:57.413 -i, --shm-id shared memory ID (optional) 00:07:57.413 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:57.413 --lcores lcore to CPU mapping list. The list is in the format: 00:07:57.413 [<,lcores[@CPUs]>...] 00:07:57.413 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:57.413 Within the group, '-' is used for range separator, 00:07:57.413 ',' is used for single number separator. 00:07:57.413 '( )' can be omitted for single element group, 00:07:57.413 '@' can be omitted if cpus and lcores have the same value 00:07:57.413 -n, --mem-channels channel number of memory channels used for DPDK 00:07:57.413 -p, --main-core main (primary) core for DPDK 00:07:57.413 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:57.413 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:57.413 --disable-cpumask-locks Disable CPU core lock files. 
00:07:57.413 --silence-noticelog disable notice level logging to stderr 00:07:57.413 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:57.413 -u, --no-pci disable PCI access 00:07:57.413 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:57.413 --max-delay maximum reactor delay (in microseconds) 00:07:57.413 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:57.413 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:57.413 -R, --huge-unlink unlink huge files after initialization 00:07:57.413 -v, --version print SPDK version 00:07:57.413 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:57.413 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:57.413 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:57.413 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:57.413 Tracepoints vary in size and can use more than one trace entry. 00:07:57.413 --rpcs-allowed comma-separated list of permitted RPCS 00:07:57.413 --env-context Opaque context for use of the env implementation 00:07:57.413 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:57.413 --no-huge run without using hugepages 00:07:57.413 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:57.413 -e, --tpoint-group [:] 00:07:57.413 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:57.413 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:57.413 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:57.413 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:57.413 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:57.413 app_ut: invalid option -- 'z' 00:07:57.413 app_ut [options] 00:07:57.413 options: 00:07:57.413 -c, --config JSON config file (default none) 00:07:57.413 --json JSON config file (default none) 00:07:57.413 --json-ignore-init-errors 00:07:57.413 don't exit on invalid config entry 00:07:57.413 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:57.413 -g, --single-file-segments 00:07:57.413 force creating just one hugetlbfs file 00:07:57.413 -h, --help show this usage 00:07:57.413 -i, --shm-id shared memory ID (optional) 00:07:57.413 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:57.413 --lcores lcore to CPU mapping list. The list is in the format: 00:07:57.413 [<,lcores[@CPUs]>...] 00:07:57.413 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:57.413 Within the group, '-' is used for range separator, 00:07:57.413 ',' is used for single number separator. 00:07:57.413 '( )' can be omitted for single element group, 00:07:57.413 '@' can be omitted if cpus and lcores have the same value 00:07:57.413 -n, --mem-channels channel number of memory channels used for DPDK 00:07:57.413 -p, --main-core main (primary) core for DPDK 00:07:57.413 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:57.413 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:57.413 --disable-cpumask-locks Disable CPU core lock files. 
00:07:57.413 --silence-noticelog disable notice level logging to stderr 00:07:57.413 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:57.413 -u, --no-pci disable PCI access 00:07:57.413 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:57.413 --max-delay maximum reactor delay (in microseconds) 00:07:57.413 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:57.413 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:57.413 -R, --huge-unlink unlink huge files after initialization 00:07:57.413 -v, --version print SPDK version 00:07:57.413 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:57.413 app_ut: unrecognized option '--test-long-opt' 00:07:57.413 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:57.413 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:57.413 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:57.413 Tracepoints vary in size and can use more than one trace entry. 00:07:57.413 --rpcs-allowed comma-separated list of permitted RPCS 00:07:57.413 --env-context Opaque context for use of the env implementation 00:07:57.413 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:57.414 --no-huge run without using hugepages 00:07:57.414 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:57.414 -e, --tpoint-group [:] 00:07:57.414 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:57.414 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:57.414 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:57.414 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:57.414 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:57.414 app_ut [options] 00:07:57.414 options: 00:07:57.414 -c, --config JSON config file (default none) 00:07:57.414 --json JSON config file (default none) 00:07:57.414 --json-ignore-init-errors 00:07:57.414 don't exit on invalid config entry 00:07:57.414 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:57.414 -g, --single-file-segments 00:07:57.414 force creating just one hugetlbfs file 00:07:57.414 -h, --help show this usage 00:07:57.414 -i, --shm-id shared memory ID (optional) 00:07:57.414 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:57.414 --lcores lcore to CPU mapping list. The list is in the format: 00:07:57.414 [<,lcores[@CPUs]>...] 00:07:57.414 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:57.414 Within the group, '-' is used for range separator, 00:07:57.414 ',' is used for single number separator. 
00:07:57.414 '( )' can be omitted for single element group, 00:07:57.414 '@' can be omitted if cpus and lcores have the same value 00:07:57.414 -n, --mem-channels channel number of memory channels used for DPDK 00:07:57.414 -p, --main-core main (primary) core for DPDK 00:07:57.414 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:57.414 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:57.414 --disable-cpumask-locks Disable CPU core lock files. 00:07:57.414 --silence-noticelog disable notice level logging to stderr 00:07:57.414 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:57.414 -u, --no-pci disable PCI access 00:07:57.414 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:57.414 --max-delay maximum reactor delay (in microseconds) 00:07:57.414 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:57.414 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:57.414 -R, --huge-unlink unlink huge files after initialization 00:07:57.414 -v, --version print SPDK version 00:07:57.414 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:57.414 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:57.414 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:57.414 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:57.414 Tracepoints vary in size and can use more than one trace entry. 00:07:57.414 --rpcs-allowed comma-separated list of permitted RPCS 00:07:57.414 --env-context Opaque context for use of the env implementation 00:07:57.414 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:57.414 --no-huge run without using hugepages 00:07:57.414 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:57.414 -e, --tpoint-group [:] 00:07:57.414 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:57.414 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:57.414 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:57.414 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:57.414 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:57.414 passed 00:07:57.414 00:07:57.414 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.414 suites 1 1 n/a 0 0 00:07:57.414 tests 1 1 1 0 0 00:07:57.414 asserts 8 8 8 0 n/a 00:07:57.414 00:07:57.414 Elapsed time = 0.001 seconds 00:07:57.414 [2024-11-26 11:18:15.495721] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
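test_spdk_app_parse_args prints the full option usage three times on purpose: each run feeds spdk_app_parse_args a bad command line (an unknown short option 'z', an unrecognized long option '--test-long-opt', and an app-specific getopt string that collides with a generic SPDK option, producing the "Duplicated option 'c'" error just above). A hedged sketch of that last collision, assuming the public spdk/event.h interface; the option string "c:" is the only deliberate mistake here:

#include "spdk/event.h"

static int
app_parse(int ch, char *arg)
{
    (void)ch;
    (void)arg;
    return 0;
}

static void
app_usage(void)
{
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "app_ut";

    /* "c:" re-registers 'c', which the generic SPDK opts already use
     * for -c/--config, so this is expected to return
     * SPDK_APP_PARSE_ARGS_FAIL and log the duplicated-option error. */
    if (spdk_app_parse_args(argc, argv, &opts, "c:", NULL,
                            app_parse, app_usage) !=
        SPDK_APP_PARSE_ARGS_SUCCESS) {
        return 1;
    }
    return 0;
}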
00:07:57.414 [2024-11-26 11:18:15.496089] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:57.414 [2024-11-26 11:18:15.496298] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:57.414 11:18:15 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:57.414 00:07:57.414 00:07:57.414 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.414 http://cunit.sourceforge.net/ 00:07:57.414 00:07:57.414 00:07:57.414 Suite: app_suite 00:07:57.414 Test: test_create_reactor ...passed 00:07:57.414 Test: test_init_reactors ...passed 00:07:57.414 Test: test_event_call ...passed 00:07:57.414 Test: test_schedule_thread ...passed 00:07:57.414 Test: test_reschedule_thread ...passed 00:07:57.414 Test: test_bind_thread ...passed 00:07:57.414 Test: test_for_each_reactor ...passed 00:07:57.414 Test: test_reactor_stats ...passed 00:07:57.414 Test: test_scheduler ...passed 00:07:57.414 Test: test_governor ...passed 00:07:57.414 00:07:57.414 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.414 suites 1 1 n/a 0 0 00:07:57.414 tests 10 10 10 0 0 00:07:57.414 asserts 344 344 344 0 n/a 00:07:57.414 00:07:57.414 Elapsed time = 0.025 seconds 00:07:57.414 ************************************ 00:07:57.414 END TEST unittest_event 00:07:57.414 ************************************ 00:07:57.414 00:07:57.414 real 0m0.105s 00:07:57.414 user 0m0.058s 00:07:57.414 sys 0m0.045s 00:07:57.414 11:18:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.414 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.414 11:18:15 -- unit/unittest.sh@209 -- # uname -s 00:07:57.414 11:18:15 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:07:57.414 11:18:15 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:07:57.414 11:18:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.414 11:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.414 11:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.414 ************************************ 00:07:57.414 START TEST unittest_ftl 00:07:57.414 ************************************ 00:07:57.674 11:18:15 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:07:57.674 11:18:15 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:57.674 00:07:57.674 00:07:57.674 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.674 http://cunit.sourceforge.net/ 00:07:57.674 00:07:57.674 00:07:57.674 Suite: ftl_band_suite 00:07:57.674 Test: test_band_block_offset_from_addr_base ...passed 00:07:57.674 Test: test_band_block_offset_from_addr_offset ...passed 00:07:57.674 Test: test_band_addr_from_block_offset ...passed 00:07:57.674 Test: test_band_set_addr ...passed 00:07:57.674 Test: test_invalidate_addr ...passed 00:07:57.674 Test: test_next_xfer_addr ...passed 00:07:57.674 00:07:57.674 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.674 suites 1 1 n/a 0 0 00:07:57.674 tests 6 6 6 0 0 00:07:57.674 asserts 30356 30356 30356 0 n/a 00:07:57.674 00:07:57.674 Elapsed time = 0.188 seconds 00:07:57.933 11:18:15 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:57.933 00:07:57.933 00:07:57.933 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.933 http://cunit.sourceforge.net/ 00:07:57.933 
00:07:57.933 00:07:57.933 Suite: ftl_bitmap 00:07:57.933 Test: test_ftl_bitmap_create ...passed 00:07:57.933 Test: test_ftl_bitmap_get ...passed 00:07:57.933 Test: test_ftl_bitmap_set ...passed 00:07:57.933 Test: test_ftl_bitmap_clear ...[2024-11-26 11:18:15.938529] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:57.934 [2024-11-26 11:18:15.938702] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:57.934 passed 00:07:57.934 Test: test_ftl_bitmap_find_first_set ...passed 00:07:57.934 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:57.934 Test: test_ftl_bitmap_count_set ...passed 00:07:57.934 00:07:57.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 7 7 7 0 0 00:07:57.934 asserts 137 137 137 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.001 seconds 00:07:57.934 11:18:15 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:57.934 00:07:57.934 00:07:57.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.934 http://cunit.sourceforge.net/ 00:07:57.934 00:07:57.934 00:07:57.934 Suite: ftl_io_suite 00:07:57.934 Test: test_completion ...passed 00:07:57.934 Test: test_multiple_ios ...passed 00:07:57.934 00:07:57.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 2 2 2 0 0 00:07:57.934 asserts 47 47 47 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.004 seconds 00:07:57.934 11:18:15 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:57.934 00:07:57.934 00:07:57.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.934 http://cunit.sourceforge.net/ 00:07:57.934 00:07:57.934 00:07:57.934 Suite: ftl_mngt 00:07:57.934 Test: test_next_step ...passed 00:07:57.934 Test: test_continue_step ...passed 00:07:57.934 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:57.934 Test: test_fail_step ...passed 00:07:57.934 Test: test_mngt_call_and_call_rollback ...passed 00:07:57.934 Test: test_nested_process_failure ...passed 00:07:57.934 00:07:57.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 6 6 6 0 0 00:07:57.934 asserts 176 176 176 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.002 seconds 00:07:57.934 11:18:16 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:57.934 00:07:57.934 00:07:57.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.934 http://cunit.sourceforge.net/ 00:07:57.934 00:07:57.934 00:07:57.934 Suite: ftl_mempool 00:07:57.934 Test: test_ftl_mempool_create ...passed 00:07:57.934 Test: test_ftl_mempool_get_put ...passed 00:07:57.934 00:07:57.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 2 2 2 0 0 00:07:57.934 asserts 36 36 36 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.000 seconds 00:07:57.934 11:18:16 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:57.934 00:07:57.934 00:07:57.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.934 http://cunit.sourceforge.net/ 00:07:57.934 00:07:57.934 00:07:57.934 Suite: ftl_addr64_suite 00:07:57.934 Test: test_addr_cached ...passed 00:07:57.934 00:07:57.934 
Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 1 1 1 0 0 00:07:57.934 asserts 1536 1536 1536 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.000 seconds 00:07:57.934 11:18:16 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:57.934 00:07:57.934 00:07:57.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.934 http://cunit.sourceforge.net/ 00:07:57.934 00:07:57.934 00:07:57.934 Suite: ftl_sb 00:07:57.934 Test: test_sb_crc_v2 ...passed 00:07:57.934 Test: test_sb_crc_v3 ...passed 00:07:57.934 Test: test_sb_v3_md_layout ...[2024-11-26 11:18:16.096904] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:57.934 [2024-11-26 11:18:16.097654] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:57.934 [2024-11-26 11:18:16.097739] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:57.934 [2024-11-26 11:18:16.097768] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:57.934 [2024-11-26 11:18:16.097802] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:57.934 [2024-11-26 11:18:16.097831] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:57.934 [2024-11-26 11:18:16.097949] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:57.934 [2024-11-26 11:18:16.097983] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:57.934 [2024-11-26 11:18:16.098071] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:57.934 [2024-11-26 11:18:16.098104] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:57.934 passed 00:07:57.934 Test: test_sb_v5_md_layout ...[2024-11-26 11:18:16.098142] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:57.934 passed 00:07:57.934 00:07:57.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 4 4 4 0 0 00:07:57.934 asserts 148 148 148 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.002 seconds 00:07:57.934 11:18:16 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:57.934 00:07:57.934 00:07:57.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.934 http://cunit.sourceforge.net/ 00:07:57.934 00:07:57.934 00:07:57.934 Suite: ftl_layout_upgrade 00:07:57.934 Test: test_l2p_upgrade ...passed 00:07:57.934 00:07:57.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.934 suites 1 1 n/a 0 0 00:07:57.934 tests 1 1 1 0 0 00:07:57.934 asserts 140 
140 140 0 n/a 00:07:57.934 00:07:57.934 Elapsed time = 0.001 seconds 00:07:57.934 00:07:57.934 real 0m0.499s 00:07:57.934 user 0m0.223s 00:07:57.934 sys 0m0.276s 00:07:57.934 11:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.934 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:57.934 ************************************ 00:07:57.934 END TEST unittest_ftl 00:07:57.934 ************************************ 00:07:58.194 11:18:16 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:58.194 11:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.194 11:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.194 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.194 ************************************ 00:07:58.194 START TEST unittest_accel 00:07:58.194 ************************************ 00:07:58.194 11:18:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:58.194 00:07:58.194 00:07:58.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.194 http://cunit.sourceforge.net/ 00:07:58.194 00:07:58.194 00:07:58.194 Suite: accel_sequence 00:07:58.194 Test: test_sequence_fill_copy ...passed 00:07:58.194 Test: test_sequence_abort ...passed 00:07:58.194 Test: test_sequence_append_error ...passed 00:07:58.194 Test: test_sequence_completion_error ...[2024-11-26 11:18:16.224450] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7c4d1a3287c0 00:07:58.194 passed 00:07:58.194 Test: test_sequence_decompress ...[2024-11-26 11:18:16.224718] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7c4d1a3287c0 00:07:58.194 [2024-11-26 11:18:16.224782] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7c4d1a3287c0 00:07:58.194 [2024-11-26 11:18:16.224820] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7c4d1a3287c0 00:07:58.194 passed 00:07:58.194 Test: test_sequence_reverse ...passed 00:07:58.194 Test: test_sequence_copy_elision ...passed 00:07:58.194 Test: test_sequence_accel_buffers ...passed 00:07:58.194 Test: test_sequence_memory_domain ...[2024-11-26 11:18:16.236706] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:58.194 passed 00:07:58.194 Test: test_sequence_module_memory_domain ...[2024-11-26 11:18:16.236901] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:58.194 passed 00:07:58.194 Test: test_sequence_crypto ...passed 00:07:58.194 Test: test_sequence_driver ...[2024-11-26 11:18:16.243800] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7c4d176da7c0 using driver: ut 00:07:58.194 passed 00:07:58.194 Test: test_sequence_same_iovs ...[2024-11-26 11:18:16.243904] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7c4d176da7c0 through driver: ut 00:07:58.194 passed 00:07:58.194 Test: test_sequence_crc32 ...passed 00:07:58.194 Suite: accel 
00:07:58.194 Test: test_spdk_accel_task_complete ...passed 00:07:58.194 Test: test_get_task ...passed 00:07:58.194 Test: test_spdk_accel_submit_copy ...passed 00:07:58.194 Test: test_spdk_accel_submit_dualcast ...passed 00:07:58.194 Test: test_spdk_accel_submit_compare ...passed 00:07:58.194 Test: test_spdk_accel_submit_fill ...passed 00:07:58.194 Test: test_spdk_accel_submit_crc32c ...passed 00:07:58.194 Test: test_spdk_accel_submit_crc32cv ...[2024-11-26 11:18:16.248958] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:58.194 [2024-11-26 11:18:16.249022] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:58.194 passed 00:07:58.194 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:58.194 Test: test_spdk_accel_submit_xor ...passed 00:07:58.194 Test: test_spdk_accel_module_find_by_name ...passed 00:07:58.194 Test: test_spdk_accel_module_register ...passed 00:07:58.194 00:07:58.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.194 suites 2 2 n/a 0 0 00:07:58.194 tests 26 26 26 0 0 00:07:58.194 asserts 831 831 831 0 n/a 00:07:58.194 00:07:58.195 Elapsed time = 0.036 seconds 00:07:58.195 00:07:58.195 real 0m0.076s 00:07:58.195 user 0m0.048s 00:07:58.195 sys 0m0.028s 00:07:58.195 11:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.195 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.195 ************************************ 00:07:58.195 END TEST unittest_accel 00:07:58.195 ************************************ 00:07:58.195 11:18:16 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:58.195 11:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.195 11:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.195 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.195 ************************************ 00:07:58.195 START TEST unittest_ioat 00:07:58.195 ************************************ 00:07:58.195 11:18:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:58.195 00:07:58.195 00:07:58.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.195 http://cunit.sourceforge.net/ 00:07:58.195 00:07:58.195 00:07:58.195 Suite: ioat 00:07:58.195 Test: ioat_state_check ...passed 00:07:58.195 00:07:58.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.195 suites 1 1 n/a 0 0 00:07:58.195 tests 1 1 1 0 0 00:07:58.195 asserts 32 32 32 0 n/a 00:07:58.195 00:07:58.195 Elapsed time = 0.000 seconds 00:07:58.195 00:07:58.195 real 0m0.027s 00:07:58.195 user 0m0.013s 00:07:58.195 sys 0m0.015s 00:07:58.195 11:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.195 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.195 ************************************ 00:07:58.195 END TEST unittest_ioat 00:07:58.195 ************************************ 00:07:58.195 11:18:16 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:58.195 11:18:16 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:58.195 11:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.195 11:18:16 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:07:58.195 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.195 ************************************ 00:07:58.195 START TEST unittest_idxd_user 00:07:58.195 ************************************ 00:07:58.195 11:18:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:58.455 00:07:58.455 00:07:58.455 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.455 http://cunit.sourceforge.net/ 00:07:58.455 00:07:58.455 00:07:58.455 Suite: idxd_user 00:07:58.455 Test: test_idxd_wait_cmd ...passed 00:07:58.455 Test: test_idxd_reset_dev ...passed 00:07:58.455 Test: test_idxd_group_config ...[2024-11-26 11:18:16.433786] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:58.455 [2024-11-26 11:18:16.433983] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:58.455 [2024-11-26 11:18:16.434086] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:58.455 [2024-11-26 11:18:16.434124] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:58.455 passed 00:07:58.455 Test: test_idxd_wq_config ...passed 00:07:58.455 00:07:58.455 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.455 suites 1 1 n/a 0 0 00:07:58.455 tests 4 4 4 0 0 00:07:58.455 asserts 20 20 20 0 n/a 00:07:58.455 00:07:58.455 Elapsed time = 0.001 seconds 00:07:58.455 00:07:58.455 real 0m0.032s 00:07:58.455 user 0m0.014s 00:07:58.455 sys 0m0.018s 00:07:58.455 11:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.455 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.455 ************************************ 00:07:58.455 END TEST unittest_idxd_user 00:07:58.455 ************************************ 00:07:58.455 11:18:16 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:07:58.455 11:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.455 11:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.455 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.455 ************************************ 00:07:58.455 START TEST unittest_iscsi 00:07:58.455 ************************************ 00:07:58.455 11:18:16 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:07:58.455 11:18:16 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:58.455 00:07:58.455 00:07:58.455 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.455 http://cunit.sourceforge.net/ 00:07:58.455 00:07:58.455 00:07:58.455 Suite: conn_suite 00:07:58.455 Test: read_task_split_in_order_case ...passed 00:07:58.455 Test: read_task_split_reverse_order_case ...passed 00:07:58.455 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:58.455 Test: process_non_read_task_completion_test ...passed 00:07:58.455 Test: free_tasks_on_connection ...passed 00:07:58.455 Test: free_tasks_with_queued_datain ...passed 00:07:58.455 Test: abort_queued_datain_task_test ...passed 00:07:58.455 Test: abort_queued_datain_tasks_test ...passed 00:07:58.456 00:07:58.456 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.456 suites 1 1 n/a 0 0 00:07:58.456 tests 8 8 8 0 0 00:07:58.456 asserts 230 230 230 0 n/a 00:07:58.456 00:07:58.456 Elapsed time = 0.000 seconds 00:07:58.456 
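Every binary in this run follows the same CUnit 2.1-3 shape, which is why the log alternates expected *ERROR* lines (negative-path assertions inside the library under test) with "passed", and ends each binary in a "Run Summary" table. A generic, hedged skeleton of that pattern; parse_something and the suite/test names are illustrative stand-ins, not SPDK code:

#include <string.h>
#include <CUnit/Basic.h>

/* Illustrative stand-in for a parser under test: reject any
 * key/value string that has no '='. */
static int
parse_something(const char *kv)
{
    return strchr(kv, '=') != NULL ? 0 : -1;
}

/* Negative-path test: the code under test is supposed to fail here,
 * so any *ERROR* it prints is expected noise while the test passes. */
static void
test_rejects_missing_equals(void)
{
    CU_ASSERT(parse_something("no-equals-sign") != 0);
}

int
main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("param_suite", NULL, NULL);
    if (suite == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_ADD_TEST(suite, test_rejects_missing_equals);
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests(); /* prints the "Run Summary" table format */
    CU_cleanup_registry();
    return CU_get_error();
}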
11:18:16 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:58.456 00:07:58.456 00:07:58.456 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.456 http://cunit.sourceforge.net/ 00:07:58.456 00:07:58.456 00:07:58.456 Suite: iscsi_suite 00:07:58.456 Test: param_negotiation_test ...passed 00:07:58.456 Test: list_negotiation_test ...passed 00:07:58.456 Test: parse_valid_test ...passed 00:07:58.456 Test: parse_invalid_test ...[2024-11-26 11:18:16.579407] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:58.456 [2024-11-26 11:18:16.579712] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:58.456 [2024-11-26 11:18:16.579918] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:58.456 passed 00:07:58.456 00:07:58.456 [2024-11-26 11:18:16.579978] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:58.456 [2024-11-26 11:18:16.580150] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:58.456 [2024-11-26 11:18:16.580210] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:58.456 [2024-11-26 11:18:16.580298] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:58.456 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.456 suites 1 1 n/a 0 0 00:07:58.456 tests 4 4 4 0 0 00:07:58.456 asserts 161 161 161 0 n/a 00:07:58.456 00:07:58.456 Elapsed time = 0.006 seconds 00:07:58.456 11:18:16 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:58.456 00:07:58.456 00:07:58.456 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.456 http://cunit.sourceforge.net/ 00:07:58.456 00:07:58.456 00:07:58.456 Suite: iscsi_target_node_suite 00:07:58.456 Test: add_lun_test_cases ...passed 00:07:58.456 Test: allow_any_allowed ...passed 00:07:58.456 Test: allow_ipv6_allowed ...passed 00:07:58.456 Test: allow_ipv6_denied ...passed 00:07:58.456 Test: allow_ipv6_invalid ...passed 00:07:58.456 Test: allow_ipv4_allowed ...passed 00:07:58.456 Test: allow_ipv4_denied ...passed 00:07:58.456 Test: allow_ipv4_invalid ...passed 00:07:58.456 Test: node_access_allowed ...[2024-11-26 11:18:16.608098] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:58.456 [2024-11-26 11:18:16.608313] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:58.456 [2024-11-26 11:18:16.608361] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:58.456 [2024-11-26 11:18:16.608393] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:58.456 [2024-11-26 11:18:16.608428] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:58.456 passed 00:07:58.456 Test: node_access_denied_by_empty_netmask ...passed 00:07:58.456 Test: node_access_multi_initiator_groups_cases ...passed 00:07:58.456 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:58.456 Test: chap_param_test_cases ...[2024-11-26 11:18:16.609104] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:58.456 [2024-11-26 11:18:16.609154] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:58.456 passed 00:07:58.456 00:07:58.456 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.456 suites 1 1 n/a 0 0 00:07:58.456 tests 13 13 13 0 0 00:07:58.456 asserts 50 50 50 0 n/a 00:07:58.456 00:07:58.456 Elapsed time = 0.001 seconds 00:07:58.456 [2024-11-26 11:18:16.609185] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:58.456 [2024-11-26 11:18:16.609219] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:58.456 [2024-11-26 11:18:16.609248] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:58.456 11:18:16 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:58.456 00:07:58.456 00:07:58.456 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.456 http://cunit.sourceforge.net/ 00:07:58.456 00:07:58.456 00:07:58.456 Suite: iscsi_suite 00:07:58.456 Test: op_login_check_target_test ...passed 00:07:58.456 Test: op_login_session_normal_test ...[2024-11-26 11:18:16.642698] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:58.456 [2024-11-26 11:18:16.643007] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:58.456 [2024-11-26 11:18:16.643059] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:58.456 [2024-11-26 11:18:16.643092] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:58.456 [2024-11-26 11:18:16.643152] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:58.456 [2024-11-26 11:18:16.643198] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:58.456 passed 00:07:58.456 Test: maxburstlength_test ...passed 00:07:58.456 Test: underflow_for_read_transfer_test ...[2024-11-26 11:18:16.643269] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:58.456 [2024-11-26 11:18:16.643309] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:58.456 [2024-11-26 11:18:16.643605] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:58.456 [2024-11-26 11:18:16.643668] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:58.456 passed 00:07:58.456 Test: underflow_for_zero_read_transfer_test ...passed 00:07:58.456 Test: underflow_for_request_sense_test ...passed 00:07:58.456 Test: underflow_for_check_condition_test ...passed 00:07:58.456 Test: 
add_transfer_task_test ...passed 00:07:58.456 Test: get_transfer_task_test ...passed 00:07:58.456 Test: del_transfer_task_test ...passed 00:07:58.456 Test: clear_all_transfer_tasks_test ...passed 00:07:58.456 Test: build_iovs_test ...passed 00:07:58.456 Test: build_iovs_with_md_test ...passed 00:07:58.456 Test: pdu_hdr_op_login_test ...[2024-11-26 11:18:16.645285] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:58.456 passed 00:07:58.456 Test: pdu_hdr_op_text_test ...passed 00:07:58.456 Test: pdu_hdr_op_logout_test ...[2024-11-26 11:18:16.645404] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:58.456 [2024-11-26 11:18:16.645459] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:58.456 [2024-11-26 11:18:16.645557] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:58.456 [2024-11-26 11:18:16.645625] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:58.457 [2024-11-26 11:18:16.645661] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:58.457 passed 00:07:58.457 Test: pdu_hdr_op_scsi_test ...[2024-11-26 11:18:16.645740] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:07:58.457 [2024-11-26 11:18:16.645864] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:58.457 [2024-11-26 11:18:16.645927] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:58.457 [2024-11-26 11:18:16.645962] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:58.457 [2024-11-26 11:18:16.646048] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:58.457 [2024-11-26 11:18:16.646124] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:58.457 passed 00:07:58.457 Test: pdu_hdr_op_task_mgmt_test ...[2024-11-26 11:18:16.646296] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:58.457 [2024-11-26 11:18:16.646393] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:58.457 [2024-11-26 11:18:16.646471] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:58.457 passed 00:07:58.457 Test: pdu_hdr_op_nopout_test ...passed 00:07:58.457 Test: pdu_hdr_op_data_test ...[2024-11-26 11:18:16.646689] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:58.457 [2024-11-26 11:18:16.646762] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer 
tag 0x4d3 00:07:58.457 [2024-11-26 11:18:16.646803] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:58.457 [2024-11-26 11:18:16.646833] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:58.457 [2024-11-26 11:18:16.646907] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:58.457 [2024-11-26 11:18:16.646962] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:58.457 [2024-11-26 11:18:16.647021] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:58.457 [2024-11-26 11:18:16.647054] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:58.457 [2024-11-26 11:18:16.647121] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:58.457 [2024-11-26 11:18:16.647174] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:58.457 [2024-11-26 11:18:16.647211] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:58.457 passed 00:07:58.457 Test: empty_text_with_cbit_test ...passed 00:07:58.457 Test: pdu_payload_read_test ...[2024-11-26 11:18:16.649388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:58.457 passed 00:07:58.457 Test: data_out_pdu_sequence_test ...passed 00:07:58.457 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:58.457 00:07:58.457 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.457 suites 1 1 n/a 0 0 00:07:58.457 tests 24 24 24 0 0 00:07:58.457 asserts 150253 150253 150253 0 n/a 00:07:58.457 00:07:58.457 Elapsed time = 0.017 seconds 00:07:58.457 11:18:16 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:58.716 00:07:58.716 00:07:58.716 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.716 http://cunit.sourceforge.net/ 00:07:58.716 00:07:58.716 00:07:58.716 Suite: init_grp_suite 00:07:58.716 Test: create_initiator_group_success_case ...passed 00:07:58.716 Test: find_initiator_group_success_case ...passed 00:07:58.716 Test: register_initiator_group_twice_case ...passed 00:07:58.716 Test: add_initiator_name_success_case ...passed 00:07:58.716 Test: add_initiator_name_fail_case ...passed 00:07:58.716 Test: delete_all_initiator_names_success_case ...passed 00:07:58.716 Test: add_netmask_success_case ...passed 00:07:58.716 Test: add_netmask_fail_case ...[2024-11-26 11:18:16.693536] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:58.716 passed 00:07:58.716 Test: delete_all_netmasks_success_case ...passed 00:07:58.716 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:58.716 Test: netmask_overwrite_all_to_any_case ...passed 00:07:58.716 Test: add_delete_initiator_names_case ...passed 00:07:58.716 Test: add_duplicated_initiator_names_case ...passed 00:07:58.716 Test: delete_nonexisting_initiator_names_case ...passed 00:07:58.716 Test: 
add_delete_netmasks_case ...passed 00:07:58.716 Test: add_duplicated_netmasks_case ...[2024-11-26 11:18:16.693924] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:58.716 passed 00:07:58.716 Test: delete_nonexisting_netmasks_case ...passed 00:07:58.716 00:07:58.716 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.716 suites 1 1 n/a 0 0 00:07:58.716 tests 17 17 17 0 0 00:07:58.716 asserts 108 108 108 0 n/a 00:07:58.716 00:07:58.716 Elapsed time = 0.001 seconds 00:07:58.716 11:18:16 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:58.716 00:07:58.716 00:07:58.716 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.716 http://cunit.sourceforge.net/ 00:07:58.716 00:07:58.716 00:07:58.716 Suite: portal_grp_suite 00:07:58.716 Test: portal_create_ipv4_normal_case ...passed 00:07:58.716 Test: portal_create_ipv6_normal_case ...passed 00:07:58.716 Test: portal_create_ipv4_wildcard_case ...passed 00:07:58.716 Test: portal_create_ipv6_wildcard_case ...passed 00:07:58.716 Test: portal_create_twice_case ...passed 00:07:58.716 Test: portal_grp_register_unregister_case ...passed 00:07:58.716 Test: portal_grp_register_twice_case ...passed 00:07:58.716 Test: portal_grp_add_delete_case ...[2024-11-26 11:18:16.722159] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:58.716 passed 00:07:58.716 Test: portal_grp_add_delete_twice_case ...passed 00:07:58.716 00:07:58.716 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.716 suites 1 1 n/a 0 0 00:07:58.716 tests 9 9 9 0 0 00:07:58.716 asserts 44 44 44 0 n/a 00:07:58.716 00:07:58.716 Elapsed time = 0.004 seconds 00:07:58.716 00:07:58.716 real 0m0.224s 00:07:58.716 user 0m0.117s 00:07:58.716 sys 0m0.110s 00:07:58.716 11:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.716 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.716 ************************************ 00:07:58.716 END TEST unittest_iscsi 00:07:58.716 ************************************ 00:07:58.716 11:18:16 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:07:58.716 11:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.716 11:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.716 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.716 ************************************ 00:07:58.716 START TEST unittest_json 00:07:58.716 ************************************ 00:07:58.716 11:18:16 -- common/autotest_common.sh@1114 -- # unittest_json 00:07:58.716 11:18:16 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:58.716 00:07:58.716 00:07:58.716 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.716 http://cunit.sourceforge.net/ 00:07:58.716 00:07:58.716 00:07:58.716 Suite: json 00:07:58.716 Test: test_parse_literal ...passed 00:07:58.716 Test: test_parse_string_simple ...passed 00:07:58.716 Test: test_parse_string_control_chars ...passed 00:07:58.716 Test: test_parse_string_utf8 ...passed 00:07:58.716 Test: test_parse_string_escapes_twochar ...passed 00:07:58.716 Test: test_parse_string_escapes_unicode ...passed 00:07:58.716 Test: test_parse_number ...passed 00:07:58.716 Test: test_parse_array ...passed 00:07:58.716 Test: test_parse_object ...passed 00:07:58.716 Test: test_parse_nesting 
...passed 00:07:58.716 Test: test_parse_comment ...passed 00:07:58.716 00:07:58.716 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.716 suites 1 1 n/a 0 0 00:07:58.716 tests 11 11 11 0 0 00:07:58.716 asserts 1516 1516 1516 0 n/a 00:07:58.716 00:07:58.716 Elapsed time = 0.002 seconds 00:07:58.716 11:18:16 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:58.716 00:07:58.716 00:07:58.716 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.716 http://cunit.sourceforge.net/ 00:07:58.716 00:07:58.716 00:07:58.716 Suite: json 00:07:58.716 Test: test_strequal ...passed 00:07:58.716 Test: test_num_to_uint16 ...passed 00:07:58.716 Test: test_num_to_int32 ...passed 00:07:58.717 Test: test_num_to_uint64 ...passed 00:07:58.717 Test: test_decode_object ...passed 00:07:58.717 Test: test_decode_array ...passed 00:07:58.717 Test: test_decode_bool ...passed 00:07:58.717 Test: test_decode_uint16 ...passed 00:07:58.717 Test: test_decode_int32 ...passed 00:07:58.717 Test: test_decode_uint32 ...passed 00:07:58.717 Test: test_decode_uint64 ...passed 00:07:58.717 Test: test_decode_string ...passed 00:07:58.717 Test: test_decode_uuid ...passed 00:07:58.717 Test: test_find ...passed 00:07:58.717 Test: test_find_array ...passed 00:07:58.717 Test: test_iterating ...passed 00:07:58.717 Test: test_free_object ...passed 00:07:58.717 00:07:58.717 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.717 suites 1 1 n/a 0 0 00:07:58.717 tests 17 17 17 0 0 00:07:58.717 asserts 236 236 236 0 n/a 00:07:58.717 00:07:58.717 Elapsed time = 0.001 seconds 00:07:58.717 11:18:16 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:58.717 00:07:58.717 00:07:58.717 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.717 http://cunit.sourceforge.net/ 00:07:58.717 00:07:58.717 00:07:58.717 Suite: json 00:07:58.717 Test: test_write_literal ...passed 00:07:58.717 Test: test_write_string_simple ...passed 00:07:58.717 Test: test_write_string_escapes ...passed 00:07:58.717 Test: test_write_string_utf16le ...passed 00:07:58.717 Test: test_write_number_int32 ...passed 00:07:58.717 Test: test_write_number_uint32 ...passed 00:07:58.717 Test: test_write_number_uint128 ...passed 00:07:58.717 Test: test_write_string_number_uint128 ...passed 00:07:58.717 Test: test_write_number_int64 ...passed 00:07:58.717 Test: test_write_number_uint64 ...passed 00:07:58.717 Test: test_write_number_double ...passed 00:07:58.717 Test: test_write_uuid ...passed 00:07:58.717 Test: test_write_array ...passed 00:07:58.717 Test: test_write_object ...passed 00:07:58.717 Test: test_write_nesting ...passed 00:07:58.717 Test: test_write_val ...passed 00:07:58.717 00:07:58.717 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.717 suites 1 1 n/a 0 0 00:07:58.717 tests 16 16 16 0 0 00:07:58.717 asserts 918 918 918 0 n/a 00:07:58.717 00:07:58.717 Elapsed time = 0.005 seconds 00:07:58.717 11:18:16 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:58.717 00:07:58.717 00:07:58.717 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.717 http://cunit.sourceforge.net/ 00:07:58.717 00:07:58.717 00:07:58.717 Suite: jsonrpc 00:07:58.717 Test: test_parse_request ...passed 00:07:58.717 Test: test_parse_request_streaming ...passed 00:07:58.717 00:07:58.717 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.717 suites 1 1 n/a 0 0 
00:07:58.717 tests 2 2 2 0 0 00:07:58.717 asserts 289 289 289 0 n/a 00:07:58.717 00:07:58.717 Elapsed time = 0.004 seconds 00:07:58.717 00:07:58.717 real 0m0.132s 00:07:58.717 user 0m0.062s 00:07:58.717 sys 0m0.072s 00:07:58.717 ************************************ 00:07:58.717 END TEST unittest_json 00:07:58.717 ************************************ 00:07:58.717 11:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.717 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.976 11:18:16 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:07:58.976 11:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.976 11:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.976 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.976 ************************************ 00:07:58.976 START TEST unittest_rpc 00:07:58.976 ************************************ 00:07:58.977 11:18:16 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:07:58.977 11:18:16 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:58.977 00:07:58.977 00:07:58.977 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.977 http://cunit.sourceforge.net/ 00:07:58.977 00:07:58.977 00:07:58.977 Suite: rpc 00:07:58.977 Test: test_jsonrpc_handler ...passed 00:07:58.977 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:58.977 Test: test_rpc_get_methods ...passed 00:07:58.977 Test: test_rpc_spdk_get_version ...passed 00:07:58.977 Test: test_spdk_rpc_listen_close ...passed 00:07:58.977 00:07:58.977 [2024-11-26 11:18:17.001335] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:58.977 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.977 suites 1 1 n/a 0 0 00:07:58.977 tests 5 5 5 0 0 00:07:58.977 asserts 20 20 20 0 n/a 00:07:58.977 00:07:58.977 Elapsed time = 0.000 seconds 00:07:58.977 00:07:58.977 real 0m0.029s 00:07:58.977 user 0m0.018s 00:07:58.977 sys 0m0.012s 00:07:58.977 11:18:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.977 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:58.977 ************************************ 00:07:58.977 END TEST unittest_rpc 00:07:58.977 ************************************ 00:07:58.977 11:18:17 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:58.977 11:18:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.977 11:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.977 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:58.977 ************************************ 00:07:58.977 START TEST unittest_notify 00:07:58.977 ************************************ 00:07:58.977 11:18:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:58.977 00:07:58.977 00:07:58.977 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.977 http://cunit.sourceforge.net/ 00:07:58.977 00:07:58.977 00:07:58.977 Suite: app_suite 00:07:58.977 Test: notify ...passed 00:07:58.977 00:07:58.977 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.977 suites 1 1 n/a 0 0 00:07:58.977 tests 1 1 1 0 0 00:07:58.977 asserts 13 13 13 0 n/a 00:07:58.977 00:07:58.977 Elapsed time = 0.000 seconds 00:07:58.977 00:07:58.977 real 0m0.032s 00:07:58.977 user 0m0.021s 00:07:58.977 sys 0m0.011s 00:07:58.977 11:18:17 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.977 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:58.977 ************************************ 00:07:58.977 END TEST unittest_notify 00:07:58.977 ************************************ 00:07:58.977 11:18:17 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:07:58.977 11:18:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.977 11:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.977 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:58.977 ************************************ 00:07:58.977 START TEST unittest_nvme 00:07:58.977 ************************************ 00:07:58.977 11:18:17 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:07:58.977 11:18:17 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:58.977 00:07:58.977 00:07:58.977 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.977 http://cunit.sourceforge.net/ 00:07:58.977 00:07:58.977 00:07:58.977 Suite: nvme 00:07:58.977 Test: test_opc_data_transfer ...passed 00:07:58.977 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:58.977 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:58.977 Test: test_trid_parse_and_compare ...passed 00:07:58.977 Test: test_trid_trtype_str ...passed 00:07:58.977 Test: test_trid_adrfam_str ...passed 00:07:58.977 Test: test_nvme_ctrlr_probe ...passed 00:07:58.977 Test: test_spdk_nvme_probe ...passed 00:07:58.977 Test: test_spdk_nvme_connect ...[2024-11-26 11:18:17.183756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:58.977 [2024-11-26 11:18:17.184034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:58.977 [2024-11-26 11:18:17.184084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:58.977 [2024-11-26 11:18:17.184133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:58.977 [2024-11-26 11:18:17.184172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:58.977 [2024-11-26 11:18:17.184214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:58.977 [2024-11-26 11:18:17.184494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:58.977 [2024-11-26 11:18:17.184590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:58.977 [2024-11-26 11:18:17.184639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:58.977 [2024-11-26 11:18:17.184767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:58.977 [2024-11-26 11:18:17.184809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:58.977 [2024-11-26 11:18:17.184895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:58.977 passed 00:07:58.977 Test: test_nvme_ctrlr_probe_internal ...[2024-11-26 11:18:17.185400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 
601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:58.977 [2024-11-26 11:18:17.185440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:58.977 [2024-11-26 11:18:17.185651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:58.977 passed 00:07:58.977 Test: test_nvme_init_controllers ...[2024-11-26 11:18:17.185712] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:58.977 [2024-11-26 11:18:17.185866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:58.977 passed 00:07:58.977 Test: test_nvme_driver_init ...[2024-11-26 11:18:17.185991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:58.977 [2024-11-26 11:18:17.186052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:59.237 [2024-11-26 11:18:17.300049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:59.237 [2024-11-26 11:18:17.300204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:59.237 passed 00:07:59.237 Test: test_spdk_nvme_detach ...passed 00:07:59.237 Test: test_nvme_completion_poll_cb ...passed 00:07:59.237 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:59.237 Test: test_nvme_allocate_request_null ...passed 00:07:59.237 Test: test_nvme_allocate_request ...passed 00:07:59.237 Test: test_nvme_free_request ...passed 00:07:59.237 Test: test_nvme_allocate_request_user_copy ...passed 00:07:59.237 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:59.237 Test: test_nvme_request_check_timeout ...passed 00:07:59.237 Test: test_nvme_wait_for_completion ...passed 00:07:59.237 Test: test_spdk_nvme_parse_func ...passed 00:07:59.237 Test: test_spdk_nvme_detach_async ...passed 00:07:59.237 Test: test_nvme_parse_addr ...passed 00:07:59.237 00:07:59.237 [2024-11-26 11:18:17.301344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:59.237 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.237 suites 1 1 n/a 0 0 00:07:59.237 tests 25 25 25 0 0 00:07:59.237 asserts 326 326 326 0 n/a 00:07:59.237 00:07:59.237 Elapsed time = 0.007 seconds 00:07:59.237 11:18:17 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:59.237 00:07:59.237 00:07:59.237 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.237 http://cunit.sourceforge.net/ 00:07:59.237 00:07:59.237 00:07:59.237 Suite: nvme_ctrlr 00:07:59.237 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-26 11:18:17.335999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 passed 00:07:59.237 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-26 11:18:17.337982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 passed 00:07:59.237 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-26 11:18:17.339428] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 passed 00:07:59.237 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-26 11:18:17.340900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 passed 00:07:59.237 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-26 11:18:17.342272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 [2024-11-26 11:18:17.343597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-26 11:18:17.345040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-26 11:18:17.346352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:59.237 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-26 11:18:17.349051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 [2024-11-26 11:18:17.351540] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-26 11:18:17.352818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:59.237 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-26 11:18:17.355587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 [2024-11-26 11:18:17.356989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-26 11:18:17.359537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:59.237 Test: test_nvme_ctrlr_init_delay ...[2024-11-26 11:18:17.362454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 passed 00:07:59.237 Test: test_alloc_io_qpair_rr_1 ...[2024-11-26 11:18:17.364036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 [2024-11-26 11:18:17.364392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:59.237 [2024-11-26 11:18:17.364537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:59.237 [2024-11-26 11:18:17.364619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:59.237 [2024-11-26 11:18:17.364705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 
385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:59.237 passed 00:07:59.237 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:59.237 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:59.237 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-26 11:18:17.364947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 passed 00:07:59.237 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-26 11:18:17.365260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.237 [2024-11-26 11:18:17.365503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:59.237 passed 00:07:59.237 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-26 11:18:17.365846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:59.237 [2024-11-26 11:18:17.365994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:59.237 [2024-11-26 11:18:17.366142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:07:59.237 [2024-11-26 11:18:17.366249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:59.237 passed 00:07:59.237 Test: test_nvme_ctrlr_fail ...passed 00:07:59.237 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:59.237 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:59.237 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:59.237 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-26 11:18:17.366353] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
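Most of the nvme_ctrlr cases above pass precisely because an error path fires: the *ERROR* lines are expected output, and the assertion is on the reported failure, not on success. A minimal sketch of that negative-testing pattern; validate_queue_size() is a hypothetical stand-in suggested by the repeated "admin_queue_size 0 is less than minimum" message, not SPDK's code:

/* Hedged sketch of negative testing: the error message is expected,
 * the observable contract under test is the clamp. Names are invented. */
#include <CUnit/Basic.h>
#include <stdio.h>

#define MIN_ADMIN_QUEUE_SIZE 2u

/* Hypothetical helper: clamp a too-small admin queue size and report it,
 * the way the log's *ERROR* line implies the code under test does. */
static unsigned int validate_queue_size(unsigned int requested)
{
        if (requested < MIN_ADMIN_QUEUE_SIZE) {
                fprintf(stderr, "*ERROR*: admin_queue_size %u is less than "
                        "minimum, using %u\n", requested, MIN_ADMIN_QUEUE_SIZE);
                return MIN_ADMIN_QUEUE_SIZE;
        }
        return requested;
}

static void test_queue_size_clamped(void)
{
        /* The *ERROR* line is expected here; the test passes on the clamp. */
        CU_ASSERT_EQUAL(validate_queue_size(0), MIN_ADMIN_QUEUE_SIZE);
        CU_ASSERT_EQUAL(validate_queue_size(64), 64u);
}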
00:07:59.237 [2024-11-26 11:18:17.366761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.496 passed 00:07:59.496 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:59.496 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:59.496 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:59.496 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-26 11:18:17.711464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.496 passed 00:07:59.496 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-26 11:18:17.719020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.496 passed 00:07:59.496 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-26 11:18:17.720326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.496 [2024-11-26 11:18:17.720400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:59.496 passed 00:07:59.496 Test: test_alloc_io_qpair_fail ...[2024-11-26 11:18:17.721680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.496 passed 00:07:59.496 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:59.496 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:59.496 Test: test_nvme_ctrlr_set_state ...passed 00:07:59.496 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-26 11:18:17.721843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:59.496 [2024-11-26 11:18:17.722077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
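The "Specified timeout would cause integer overflow. Defaulting to no timeout." warning above corresponds to a guard on a deadline computation. A hedged sketch of one way such a guard can be written; the names and the tick conversion are illustrative, not SPDK's implementation:

/* Sketch of an overflow guard for now + timeout_ms * ticks_per_ms.
 * Assumes ticks_per_ms > 0; all names here are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

static uint64_t deadline_ticks(uint64_t now, uint64_t timeout_ms,
                               uint64_t ticks_per_ms)
{
        /* timeout_ms * ticks_per_ms must not wrap uint64_t once added to now:
         * timeout_ms > (UINT64_MAX - now) / ticks_per_ms detects exactly that. */
        if (timeout_ms > (UINT64_MAX - now) / ticks_per_ms) {
                fprintf(stderr, "*ERROR*: Specified timeout would cause "
                        "integer overflow. Defaulting to no timeout.\n");
                return UINT64_MAX;      /* sentinel meaning "no timeout" */
        }
        return now + timeout_ms * ticks_per_ms;
}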
00:07:59.496 [2024-11-26 11:18:17.722140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-26 11:18:17.747357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-26 11:18:17.794227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_reset ...[2024-11-26 11:18:17.795857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_aer_callback ...[2024-11-26 11:18:17.796271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-26 11:18:17.797860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:59.755 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:59.755 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-26 11:18:17.799822] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:59.755 Test: test_nvme_ctrlr_ana_resize ...[2024-11-26 11:18:17.801383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:59.755 Test: test_nvme_transport_ctrlr_ready ...[2024-11-26 11:18:17.803145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:59.755 passed 00:07:59.755 Test: test_nvme_ctrlr_disable ...[2024-11-26 11:18:17.803256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:59.755 [2024-11-26 11:18:17.803350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:59.755 passed 00:07:59.755 00:07:59.755 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.755 suites 1 1 n/a 0 0 00:07:59.755 tests 43 43 43 0 0 00:07:59.755 asserts 10418 10418 10418 0 n/a 00:07:59.755 00:07:59.755 Elapsed time = 0.426 seconds 00:07:59.755 11:18:17 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:59.755 00:07:59.755 00:07:59.755 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:59.755 http://cunit.sourceforge.net/ 00:07:59.755 00:07:59.755 00:07:59.755 Suite: nvme_ctrlr_cmd 00:07:59.755 Test: test_get_log_pages ...passed 00:07:59.755 Test: test_set_feature_cmd ...passed 00:07:59.755 Test: test_set_feature_ns_cmd ...passed 00:07:59.755 Test: test_get_feature_cmd ...passed 00:07:59.755 Test: test_get_feature_ns_cmd ...passed 00:07:59.755 Test: test_abort_cmd ...passed 00:07:59.755 Test: test_set_host_id_cmds ...passed 00:07:59.755 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:59.755 Test: test_io_raw_cmd ...passed 00:07:59.755 Test: test_io_raw_cmd_with_md ...passed 00:07:59.755 Test: test_namespace_attach ...passed 00:07:59.755 Test: test_namespace_detach ...[2024-11-26 11:18:17.852935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:59.755 passed 00:07:59.755 Test: test_namespace_create ...passed 00:07:59.755 Test: test_namespace_delete ...passed 00:07:59.755 Test: test_doorbell_buffer_config ...passed 00:07:59.755 Test: test_format_nvme ...passed 00:07:59.755 Test: test_fw_commit ...passed 00:07:59.755 Test: test_fw_image_download ...passed 00:07:59.755 Test: test_sanitize ...passed 00:07:59.755 Test: test_directive ...passed 00:07:59.755 Test: test_nvme_request_add_abort ...passed 00:07:59.755 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:59.755 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:59.755 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:59.755 00:07:59.755 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.755 suites 1 1 n/a 0 0 00:07:59.755 tests 24 24 24 0 0 00:07:59.755 asserts 198 198 198 0 n/a 00:07:59.755 00:07:59.755 Elapsed time = 0.001 seconds 00:07:59.755 11:18:17 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:59.755 00:07:59.755 00:07:59.755 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.755 http://cunit.sourceforge.net/ 00:07:59.755 00:07:59.755 00:07:59.755 Suite: nvme_ctrlr_cmd 00:07:59.755 Test: test_geometry_cmd ...passed 00:07:59.755 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:59.755 00:07:59.755 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.755 suites 1 1 n/a 0 0 00:07:59.755 tests 2 2 2 0 0 00:07:59.755 asserts 7 7 7 0 n/a 00:07:59.755 00:07:59.755 Elapsed time = 0.000 seconds 00:07:59.755 11:18:17 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:59.756 00:07:59.756 00:07:59.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.756 http://cunit.sourceforge.net/ 00:07:59.756 00:07:59.756 00:07:59.756 Suite: nvme 00:07:59.756 Test: test_nvme_ns_construct ...passed 00:07:59.756 Test: test_nvme_ns_uuid ...passed 00:07:59.756 Test: test_nvme_ns_csi ...passed 00:07:59.756 Test: test_nvme_ns_data ...passed 00:07:59.756 Test: test_nvme_ns_set_identify_data ...passed 00:07:59.756 Test: test_spdk_nvme_ns_get_values ...passed 00:07:59.756 Test: test_spdk_nvme_ns_is_active ...passed 00:07:59.756 Test: spdk_nvme_ns_supports ...passed 00:07:59.756 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:59.756 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:59.756 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:59.756 Test: test_nvme_ns_find_id_desc ...passed 00:07:59.756 00:07:59.756 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.756 suites 1 1 n/a 0 0 00:07:59.756 tests 
12 12 12 0 0 00:07:59.756 asserts 83 83 83 0 n/a 00:07:59.756 00:07:59.756 Elapsed time = 0.001 seconds 00:07:59.756 11:18:17 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:59.756 00:07:59.756 00:07:59.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.756 http://cunit.sourceforge.net/ 00:07:59.756 00:07:59.756 00:07:59.756 Suite: nvme_ns_cmd 00:07:59.756 Test: split_test ...passed 00:07:59.756 Test: split_test2 ...passed 00:07:59.756 Test: split_test3 ...passed 00:07:59.756 Test: split_test4 ...passed 00:07:59.756 Test: test_nvme_ns_cmd_flush ...passed 00:07:59.756 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:59.756 Test: test_nvme_ns_cmd_copy ...passed 00:07:59.756 Test: test_io_flags ...passed 00:07:59.756 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:59.756 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:59.756 Test: test_nvme_ns_cmd_reservation_register ...[2024-11-26 11:18:17.945197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:59.756 passed 00:07:59.756 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:59.756 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:59.756 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:59.756 Test: test_cmd_child_request ...passed 00:07:59.756 Test: test_nvme_ns_cmd_readv ...passed 00:07:59.756 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:59.756 Test: test_nvme_ns_cmd_writev ...passed 00:07:59.756 Test: test_nvme_ns_cmd_write_with_md ...[2024-11-26 11:18:17.946568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:59.756 passed 00:07:59.756 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:59.756 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:59.756 Test: test_nvme_ns_cmd_comparev ...passed 00:07:59.756 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:59.756 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:59.756 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:59.756 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:59.756 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:59.756 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:07:59.756 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:07:59.756 Test: test_nvme_ns_cmd_verify ...passed 00:07:59.756 Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-11-26 11:18:17.948351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:59.756 [2024-11-26 11:18:17.948468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:59.756 passed 00:07:59.756 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:59.756 00:07:59.756 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.756 suites 1 1 n/a 0 0 00:07:59.756 tests 32 32 32 0 0 00:07:59.756 asserts 550 550 550 0 n/a 00:07:59.756 00:07:59.756 Elapsed time = 0.005 seconds 00:07:59.756 11:18:17 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:59.756 00:07:59.756 00:07:59.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.756 http://cunit.sourceforge.net/ 00:07:59.756 00:07:59.756 00:07:59.756 Suite: nvme_ns_cmd 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:59.756 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:59.756 00:07:59.756 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.756 suites 1 1 n/a 0 0 00:07:59.756 tests 12 12 12 0 0 00:07:59.756 asserts 123 123 123 0 n/a 00:07:59.756 00:07:59.756 Elapsed time = 0.001 seconds 00:08:00.016 11:18:18 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:00.016 00:08:00.016 00:08:00.016 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.016 http://cunit.sourceforge.net/ 00:08:00.016 00:08:00.016 00:08:00.016 Suite: nvme_qpair 00:08:00.016 Test: test3 ...passed 00:08:00.016 Test: test_ctrlr_failed ...passed 00:08:00.016 Test: struct_packing ...passed 00:08:00.016 Test: test_nvme_qpair_process_completions ...[2024-11-26 11:18:18.018852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:00.016 [2024-11-26 11:18:18.019147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:00.016 passed 00:08:00.016 Test: test_nvme_completion_is_retry ...passed 00:08:00.016 Test: test_get_status_string ...passed 00:08:00.016 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:00.016 Test: test_nvme_qpair_submit_request ...[2024-11-26 11:18:18.019230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:00.016 [2024-11-26 11:18:18.019286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:00.016 passed 00:08:00.016 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:00.016 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:00.016 Test: test_nvme_qpair_init_deinit ...passed 00:08:00.016 Test: test_nvme_get_sgl_print_info ...passed 00:08:00.016 00:08:00.016 [2024-11-26 11:18:18.019902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:00.016 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.016 suites 1 1 n/a 0 0 00:08:00.016 tests 12 12 12 0 0 00:08:00.016 asserts 154 154 154 0 n/a 00:08:00.016 00:08:00.016 Elapsed time = 0.002 seconds 00:08:00.016 11:18:18 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:00.016 00:08:00.016 00:08:00.016 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.016 http://cunit.sourceforge.net/ 00:08:00.016 00:08:00.016 00:08:00.016 Suite: nvme_pcie 00:08:00.016 Test: test_prp_list_append 
...[2024-11-26 11:18:18.051903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:00.016 [2024-11-26 11:18:18.052165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:00.016 [2024-11-26 11:18:18.052224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:00.016 passed 00:08:00.016 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:00.016 Test: test_shadow_doorbell_update ...passed 00:08:00.016 Test: test_build_contig_hw_sgl_request ...passed 00:08:00.016 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:00.016 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:00.016 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...[2024-11-26 11:18:18.052453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:00.016 [2024-11-26 11:18:18.052561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:00.016 passed 00:08:00.016 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:08:00.016 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:00.016 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:00.017 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:00.017 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:08:00.017 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:08:00.017 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:08:00.017 00:08:00.017 [2024-11-26 11:18:18.052981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:00.017 [2024-11-26 11:18:18.053142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
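The prp_list_append failures above line up with the NVMe PRP rules: every PRP pointer must be dword (4-byte) aligned, and every entry after the first must additionally be page aligned. A small sketch of those two checks, assuming 4 KiB pages; this mirrors the rules being exercised, not SPDK's implementation:

/* Sketch of the PRP validity rules behind the logged failures:
 * 0x100001 trips the dword check, 0x900800 trips the page check. */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static bool prp_entry_ok(uint64_t virt_addr, bool first_entry)
{
        /* Every PRP pointer must be dword (4-byte) aligned. */
        if (virt_addr & 0x3) {
                return false;   /* e.g. "virt_addr 0x100001 not dword aligned" */
        }
        /* Entries after the first must start on a page boundary. */
        if (!first_entry && (virt_addr & (PAGE_SIZE - 1))) {
                return false;   /* e.g. "PRP 2 not page aligned (0x900800)" */
        }
        return true;
}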
00:08:00.017 [2024-11-26 11:18:18.053256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:00.017 [2024-11-26 11:18:18.053314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:00.017 [2024-11-26 11:18:18.053374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:00.017 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.017 suites 1 1 n/a 0 0 00:08:00.017 tests 14 14 14 0 0 00:08:00.017 asserts 235 235 235 0 n/a 00:08:00.017 00:08:00.017 Elapsed time = 0.002 seconds 00:08:00.017 11:18:18 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:00.017 00:08:00.017 00:08:00.017 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.017 http://cunit.sourceforge.net/ 00:08:00.017 00:08:00.017 00:08:00.017 Suite: nvme_ns_cmd 00:08:00.017 Test: nvme_poll_group_create_test ...passed 00:08:00.017 Test: nvme_poll_group_add_remove_test ...passed 00:08:00.017 Test: nvme_poll_group_process_completions ...passed 00:08:00.017 Test: nvme_poll_group_destroy_test ...passed 00:08:00.017 Test: nvme_poll_group_get_free_stats ...passed 00:08:00.017 00:08:00.017 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.017 suites 1 1 n/a 0 0 00:08:00.017 tests 5 5 5 0 0 00:08:00.017 asserts 75 75 75 0 n/a 00:08:00.017 00:08:00.017 Elapsed time = 0.000 seconds 00:08:00.017 11:18:18 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:00.017 00:08:00.017 00:08:00.017 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.017 http://cunit.sourceforge.net/ 00:08:00.017 00:08:00.017 00:08:00.017 Suite: nvme_quirks 00:08:00.017 Test: test_nvme_quirks_striping ...passed 00:08:00.017 00:08:00.017 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.017 suites 1 1 n/a 0 0 00:08:00.017 tests 1 1 1 0 0 00:08:00.017 asserts 5 5 5 0 n/a 00:08:00.017 00:08:00.017 Elapsed time = 0.000 seconds 00:08:00.017 11:18:18 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:00.017 00:08:00.017 00:08:00.017 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.017 http://cunit.sourceforge.net/ 00:08:00.017 00:08:00.017 00:08:00.017 Suite: nvme_tcp 00:08:00.017 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:00.017 Test: test_nvme_tcp_build_iovs ...passed 00:08:00.017 Test: test_nvme_tcp_build_sgl_request ...passed 00:08:00.017 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:00.017 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:00.017 Test: test_nvme_tcp_req_complete_safe ...[2024-11-26 11:18:18.134899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7bcafbe0d2e0, and the iovcnt=16, remaining_size=28672 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_req_get ...passed 00:08:00.017 Test: test_nvme_tcp_req_init ...passed 00:08:00.017 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:00.017 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:00.017 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:08:00.017 Test: test_nvme_tcp_alloc_reqs ...[2024-11-26 11:18:18.135573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7bcafb909030 is same with the state(6) to be set 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:08:00.017 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-26 11:18:18.136095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbd09070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7bcafbc0a6e0 00:08:00.017 [2024-11-26 11:18:18.136246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:00.017 [2024-11-26 11:18:18.136297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:00.017 [2024-11-26 11:18:18.136388] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:00.017 [2024-11-26 11:18:18.136497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.136689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-26 11:18:18.136762] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbc0a070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.137047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:00.017 [2024-11-26 11:18:18.137093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:00.017 Test: test_nvme_tcp_c2h_payload_handle ...passed[2024-11-26 11:18:18.137488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:00.017 [2024-11-26 11:18:18.137631] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7bcafbc0b540): PDU Sequence Error 00:08:00.017 00:08:00.017 Test: test_nvme_tcp_icresp_handle ...[2024-11-26 11:18:18.137717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:00.017 [2024-11-26 11:18:18.137761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:00.017 [2024-11-26 11:18:18.137816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbd0d070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.137854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:00.017 [2024-11-26 11:18:18.137920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbd0d070 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.137972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbd0d070 is same with the state(0) to be set 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:08:00.017 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:08:00.017 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:00.017 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-26 11:18:18.138044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7bcafbc0c540): PDU Sequence Error 00:08:00.017 [2024-11-26 11:18:18.138150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7bcafbd0f200 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-26 11:18:18.138404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7bcafbe25480, errno=0, rc=0 00:08:00.017 [2024-11-26 11:18:18.138464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbe25480 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.138523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bcafbe25480 is same with the state(5) to be set 00:08:00.017 [2024-11-26 11:18:18.138600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7bcafbe25480 (0): Success 00:08:00.017 [2024-11-26 11:18:18.138652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7bcafbe25480 (0): Success 00:08:00.017 passed 00:08:00.017 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-11-26 11:18:18.249318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:00.017 [2024-11-26 11:18:18.249420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:00.277 passed 00:08:00.277 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:08:00.277 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-26 11:18:18.249671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:00.277 [2024-11-26 11:18:18.249716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:00.277 [2024-11-26 11:18:18.250017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:00.277 [2024-11-26 11:18:18.250079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:00.277 passed 00:08:00.277 Test: test_nvme_tcp_qpair_submit_request ...[2024-11-26 11:18:18.250182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:00.277 [2024-11-26 11:18:18.250246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:00.277 [2024-11-26 11:18:18.250404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513000001540 with addr=192.168.1.78, port=23 00:08:00.277 [2024-11-26 11:18:18.250485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:00.277 [2024-11-26 11:18:18.250648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x513000001a80, and the iovcnt=1, remaining_size=1024 00:08:00.277 [2024-11-26 11:18:18.250709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:00.277 passed 00:08:00.277 00:08:00.277 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.277 suites 1 1 n/a 0 0 00:08:00.277 tests 27 27 27 0 0 00:08:00.277 asserts 624 624 624 0 n/a 00:08:00.277 00:08:00.277 Elapsed time = 0.116 seconds 00:08:00.277 11:18:18 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:00.277 00:08:00.277 00:08:00.277 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.277 http://cunit.sourceforge.net/ 00:08:00.277 00:08:00.277 00:08:00.277 Suite: nvme_transport 00:08:00.277 Test: test_nvme_get_transport ...passed 00:08:00.277 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:00.277 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:00.277 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:00.277 Test: test_ctrlr_get_memory_domains ...passed 00:08:00.277 00:08:00.277 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.277 suites 1 1 n/a 0 0 00:08:00.277 tests 5 5 5 0 0 00:08:00.277 asserts 28 28 28 0 n/a 00:08:00.277 00:08:00.277 Elapsed time = 0.000 seconds 00:08:00.277 11:18:18 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:00.277 00:08:00.277 00:08:00.277 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.277 http://cunit.sourceforge.net/ 00:08:00.277 00:08:00.277 00:08:00.277 Suite: nvme_io_msg 00:08:00.277 Test: test_nvme_io_msg_send ...passed 00:08:00.277 Test: test_nvme_io_msg_process ...passed 00:08:00.277 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:00.277 00:08:00.277 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.277 suites 1 1 n/a 0 0 00:08:00.277 tests 3 3 3 0 0 00:08:00.277 asserts 56 56 56 0 n/a 00:08:00.277 00:08:00.277 Elapsed time = 0.000 seconds 00:08:00.277 11:18:18 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:00.277 00:08:00.277 00:08:00.278 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.278 http://cunit.sourceforge.net/ 00:08:00.278 00:08:00.278 00:08:00.278 Suite: nvme_pcie_common 00:08:00.278 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:08:00.278 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-11-26 11:18:18.355843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:00.278 passed 00:08:00.278 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:00.278 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:08:00.278 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-11-26 11:18:18.356690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:00.278 [2024-11-26 11:18:18.356756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:00.278 [2024-11-26 11:18:18.356804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:00.278 passed 00:08:00.278 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:08:00.278 00:08:00.278 [2024-11-26 11:18:18.357298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:00.278 [2024-11-26 11:18:18.357334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:00.278 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.278 suites 1 1 n/a 0 0 00:08:00.278 tests 6 6 6 0 0 00:08:00.278 asserts 148 148 148 0 n/a 00:08:00.278 00:08:00.278 Elapsed time = 0.002 seconds 00:08:00.278 11:18:18 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:00.278 00:08:00.278 00:08:00.278 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.278 http://cunit.sourceforge.net/ 00:08:00.278 00:08:00.278 00:08:00.278 Suite: nvme_fabric 00:08:00.278 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:00.278 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:00.278 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:00.278 Test: test_nvme_fabric_discover_probe ...passed 00:08:00.278 Test: test_nvme_fabric_qpair_connect ...[2024-11-26 11:18:18.390166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:00.278 passed 00:08:00.278 00:08:00.278 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.278 suites 1 1 n/a 0 0 00:08:00.278 tests 5 5 5 0 0 00:08:00.278 asserts 60 60 60 0 n/a 00:08:00.278 00:08:00.278 Elapsed time = 0.001 seconds 00:08:00.278 11:18:18 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:00.278 00:08:00.278 00:08:00.278 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.278 http://cunit.sourceforge.net/ 00:08:00.278 00:08:00.278 00:08:00.278 Suite: nvme_opal 00:08:00.278 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:00.278 Test: test_opal_add_short_atom_header ...passed 00:08:00.278 00:08:00.278 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.278 suites 1 1 n/a 0 0 00:08:00.278 tests 2 2 2 0 0 00:08:00.278 asserts 22 22 22 0 n/a 00:08:00.278 00:08:00.278 Elapsed time = 0.000 seconds 00:08:00.278 [2024-11-26 11:18:18.420811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:00.278 00:08:00.278 real 0m1.269s 00:08:00.278 user 0m0.642s 00:08:00.278 sys 0m0.480s 00:08:00.278 11:18:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.278 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:08:00.278 ************************************ 00:08:00.278 END TEST unittest_nvme 00:08:00.278 ************************************ 00:08:00.278 11:18:18 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:00.278 11:18:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.278 11:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.278 11:18:18 -- common/autotest_common.sh@10 -- # set +x 00:08:00.278 ************************************ 00:08:00.278 START TEST unittest_log 00:08:00.278 ************************************ 00:08:00.278 11:18:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:00.278 00:08:00.278 00:08:00.278 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.278 http://cunit.sourceforge.net/ 00:08:00.278 00:08:00.278 00:08:00.278 Suite: log 00:08:00.278 Test: log_test ...passed 00:08:00.278 Test: deprecation ...[2024-11-26 11:18:18.503361] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:08:00.278 [2024-11-26 11:18:18.503577] log_ut.c: 55:log_test: *DEBUG*: log test 00:08:00.278 log dump test: 00:08:00.278 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:00.278 spdk dump test: 00:08:00.278 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:00.278 spdk dump test: 00:08:00.278 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:00.278 00000010 65 20 63 68 61 72 73 e chars 00:08:01.654 passed 00:08:01.654 00:08:01.654 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.654 suites 1 1 n/a 0 0 00:08:01.654 tests 2 2 2 0 0 00:08:01.654 asserts 73 73 73 0 n/a 00:08:01.654 00:08:01.654 Elapsed time = 0.001 seconds 00:08:01.654 00:08:01.654 real 0m1.034s 00:08:01.654 user 0m0.015s 00:08:01.654 sys 0m0.020s 00:08:01.654 ************************************ 00:08:01.654 END TEST unittest_log 00:08:01.654 ************************************ 00:08:01.654 11:18:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.654 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.654 11:18:19 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:01.654 11:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.654 11:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.655 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.655 
************************************ 00:08:01.655 START TEST unittest_lvol 00:08:01.655 ************************************ 00:08:01.655 11:18:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:01.655 00:08:01.655 00:08:01.655 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.655 http://cunit.sourceforge.net/ 00:08:01.655 00:08:01.655 00:08:01.655 Suite: lvol 00:08:01.655 Test: lvs_init_unload_success ...[2024-11-26 11:18:19.604181] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:01.655 passed 00:08:01.655 Test: lvs_init_destroy_success ...[2024-11-26 11:18:19.604749] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:01.655 passed 00:08:01.655 Test: lvs_init_opts_success ...passed 00:08:01.655 Test: lvs_unload_lvs_is_null_fail ...passed 00:08:01.655 Test: lvs_names ...[2024-11-26 11:18:19.605061] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:01.655 [2024-11-26 11:18:19.605127] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:01.655 [2024-11-26 11:18:19.605168] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:01.655 [2024-11-26 11:18:19.605322] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:01.655 passed 00:08:01.655 Test: lvol_create_destroy_success ...passed 00:08:01.655 Test: lvol_create_fail ...[2024-11-26 11:18:19.605841] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:01.655 passed 00:08:01.655 Test: lvol_destroy_fail ...[2024-11-26 11:18:19.605972] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:01.655 [2024-11-26 11:18:19.606272] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:01.655 passed 00:08:01.655 Test: lvol_close ...[2024-11-26 11:18:19.606460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:01.655 [2024-11-26 11:18:19.606515] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:01.655 passed 00:08:01.655 Test: lvol_resize ...passed 00:08:01.655 Test: lvol_set_read_only ...passed 00:08:01.655 Test: test_lvs_load ...[2024-11-26 11:18:19.607286] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:01.655 passed 00:08:01.655 Test: lvols_load ...[2024-11-26 11:18:19.607345] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:01.655 [2024-11-26 11:18:19.607532] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:01.655 passed 00:08:01.655 Test: lvol_open ...[2024-11-26 11:18:19.607666] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:01.655 passed 00:08:01.655 Test: lvol_snapshot ...passed 00:08:01.655 Test: lvol_snapshot_fail ...passed 00:08:01.655 Test: lvol_clone ...[2024-11-26 11:18:19.608370] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already 
exists 00:08:01.655 passed 00:08:01.655 Test: lvol_clone_fail ...passed 00:08:01.655 Test: lvol_iter_clones ...[2024-11-26 11:18:19.608823] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:01.655 passed 00:08:01.655 Test: lvol_refcnt ...[2024-11-26 11:18:19.609255] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 2e8b53bb-9c7c-40f2-9658-f09ab371ab8e because it is still open 00:08:01.655 passed 00:08:01.655 Test: lvol_names ...[2024-11-26 11:18:19.609405] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:01.655 [2024-11-26 11:18:19.609473] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:01.655 [2024-11-26 11:18:19.609647] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:01.655 passed 00:08:01.655 Test: lvol_create_thin_provisioned ...passed 00:08:01.655 Test: lvol_rename ...passed 00:08:01.655 Test: lvs_rename ...[2024-11-26 11:18:19.610047] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:01.655 [2024-11-26 11:18:19.610130] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:01.655 passed 00:08:01.655 Test: lvol_inflate ...[2024-11-26 11:18:19.610339] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:01.655 [2024-11-26 11:18:19.610476] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:01.655 passed 00:08:01.655 Test: lvol_decouple_parent ...passed 00:08:01.655 Test: lvol_get_xattr ...[2024-11-26 11:18:19.610625] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:01.655 passed 00:08:01.655 Test: lvol_esnap_reload ...passed 00:08:01.655 Test: lvol_esnap_create_bad_args ...[2024-11-26 11:18:19.610989] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:01.655 [2024-11-26 11:18:19.611022] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:01.655 [2024-11-26 11:18:19.611064] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:01.655 [2024-11-26 11:18:19.611108] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:01.655 [2024-11-26 11:18:19.611195] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:01.655 passed 00:08:01.655 Test: lvol_esnap_create_delete ...passed 00:08:01.655 Test: lvol_esnap_load_esnaps ...passed 00:08:01.655 Test: lvol_esnap_missing ...[2024-11-26 11:18:19.611412] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:01.655 [2024-11-26 11:18:19.611579] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:01.655 [2024-11-26 11:18:19.611616] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:01.655 passed 00:08:01.655 Test: lvol_esnap_hotplug ... 00:08:01.655 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:01.655 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:01.655 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:01.655 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:01.655 [2024-11-26 11:18:19.612326] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 64e0260f-bd78-4f78-ab35-ffa4539f88e5: failed to create esnap bs_dev: error -12 00:08:01.655 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:01.655 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:01.655 [2024-11-26 11:18:19.612556] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f24f944d-e148-4dd0-be58-28935a4270e9: failed to create esnap bs_dev: error -12 00:08:01.655 [2024-11-26 11:18:19.612685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 99f42f5c-d5ce-412a-be6a-7a082cfe62e0: failed to create esnap bs_dev: error -12 00:08:01.655 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:01.655 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:01.655 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:01.655 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:01.655 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:01.655 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:01.655 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:01.655 passed 00:08:01.655 Test: lvol_get_by ...passed 00:08:01.655 00:08:01.655 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.655 suites 1 1 n/a 0 0 00:08:01.655 tests 34 34 34 0 0 00:08:01.655 asserts 1439 1439 1439 0 n/a 00:08:01.655 00:08:01.655 Elapsed time = 0.010 seconds 00:08:01.655 00:08:01.655 real 0m0.051s 00:08:01.655 user 0m0.029s 00:08:01.655 sys 0m0.022s 00:08:01.655 11:18:19 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.655 ************************************ 00:08:01.655 END TEST unittest_lvol 00:08:01.655 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.655 ************************************ 00:08:01.655 11:18:19 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:01.655 11:18:19 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:01.655 11:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.655 11:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.655 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.655 ************************************ 00:08:01.655 START TEST unittest_nvme_rdma 00:08:01.655 ************************************ 00:08:01.655 11:18:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:01.655 00:08:01.655 00:08:01.655 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.655 http://cunit.sourceforge.net/ 00:08:01.655 00:08:01.655 00:08:01.655 Suite: nvme_rdma 00:08:01.655 Test: test_nvme_rdma_build_sgl_request ...passed 00:08:01.655 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:01.655 Test: test_nvme_rdma_build_contig_request ...passed 00:08:01.655 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:01.655 Test: test_nvme_rdma_create_reqs ...[2024-11-26 11:18:19.710674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:01.655 [2024-11-26 11:18:19.710914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:01.655 [2024-11-26 11:18:19.710985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:01.655 [2024-11-26 11:18:19.711079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:01.655 passed 00:08:01.656 Test: test_nvme_rdma_create_rsps ...[2024-11-26 11:18:19.711172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:01.656 [2024-11-26 11:18:19.711533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:01.656 passed 00:08:01.656 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:08:01.656 Test: test_nvme_rdma_poller_create ...[2024-11-26 11:18:19.711747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:01.656 [2024-11-26 11:18:19.711781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:01.656 passed 00:08:01.656 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:01.656 Test: test_nvme_rdma_ctrlr_construct ...[2024-11-26 11:18:19.711961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:01.656 passed 00:08:01.656 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:01.656 Test: test_nvme_rdma_req_init ...passed 00:08:01.656 Test: test_nvme_rdma_validate_cm_event ...passed 00:08:01.656 Test: test_nvme_rdma_qpair_init ...[2024-11-26 11:18:19.712269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:01.656 [2024-11-26 11:18:19.712308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:01.656 passed 00:08:01.656 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:01.656 Test: test_nvme_rdma_memory_domain ...passed 00:08:01.656 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:01.656 Test: test_rdma_get_memory_translation ...passed 00:08:01.656 Test: test_get_rdma_qpair_from_wc ...passed 00:08:01.656 Test: test_nvme_rdma_ctrlr_get_max_sges ...[2024-11-26 11:18:19.712498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:01.656 [2024-11-26 11:18:19.712590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:01.656 [2024-11-26 11:18:19.712622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:01.656 passed 00:08:01.656 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:08:01.656 Test: test_nvme_rdma_qpair_set_poller ...[2024-11-26 11:18:19.712736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.656 [2024-11-26 11:18:19.712799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.656 [2024-11-26 11:18:19.712981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:01.656 [2024-11-26 11:18:19.713027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:01.656 [2024-11-26 11:18:19.713057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7d1dc4e0a030 on poll group 0x50b000000040 00:08:01.656 [2024-11-26 11:18:19.713113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:01.656 [2024-11-26 11:18:19.713150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:01.656 [2024-11-26 11:18:19.713182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7d1dc4e0a030 on poll group 0x50b000000040 00:08:01.656 [2024-11-26 11:18:19.713262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:01.656 passed 00:08:01.656 00:08:01.656 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.656 suites 1 1 n/a 0 0 00:08:01.656 tests 22 22 22 0 0 00:08:01.656 asserts 412 412 412 0 n/a 00:08:01.656 00:08:01.656 Elapsed time = 0.003 seconds 00:08:01.656 00:08:01.656 real 0m0.036s 00:08:01.656 user 0m0.014s 00:08:01.656 sys 0m0.022s 00:08:01.656 11:18:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.656 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.656 ************************************ 00:08:01.656 END TEST unittest_nvme_rdma 00:08:01.656 ************************************ 00:08:01.656 11:18:19 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:01.656 11:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.656 11:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.656 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.656 ************************************ 00:08:01.656 START TEST unittest_nvmf_transport 00:08:01.656 ************************************ 00:08:01.656 11:18:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:01.656 00:08:01.656 00:08:01.656 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.656 http://cunit.sourceforge.net/ 00:08:01.656 00:08:01.656 00:08:01.656 Suite: nvmf 00:08:01.656 Test: test_spdk_nvmf_transport_create ...[2024-11-26 11:18:19.809791] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:01.656 [2024-11-26 11:18:19.810088] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:01.656 [2024-11-26 11:18:19.810168] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:01.656 [2024-11-26 11:18:19.810267] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:01.656 passed 00:08:01.656 Test: test_nvmf_transport_poll_group_create ...passed 00:08:01.656 Test: test_spdk_nvmf_transport_opts_init ...passed 00:08:01.656 Test: test_spdk_nvmf_transport_listen_ext ...[2024-11-26 11:18:19.810672] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:01.656 [2024-11-26 11:18:19.810736] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:01.656 [2024-11-26 11:18:19.810792] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:01.656 passed 00:08:01.656 00:08:01.656 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.656 suites 1 1 n/a 0 0 00:08:01.656 tests 4 4 4 0 0 00:08:01.656 asserts 49 49 49 0 n/a 00:08:01.656 00:08:01.656 Elapsed time = 0.001 seconds 00:08:01.656 00:08:01.656 real 0m0.040s 00:08:01.656 user 0m0.024s 00:08:01.656 sys 0m0.016s 00:08:01.656 ************************************ 00:08:01.656 END TEST unittest_nvmf_transport 00:08:01.656 ************************************ 00:08:01.656 11:18:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.656 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.656 11:18:19 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:01.656 11:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.656 11:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.656 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.656 ************************************ 00:08:01.656 START TEST unittest_rdma 00:08:01.656 ************************************ 00:08:01.656 11:18:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:01.915 00:08:01.915 00:08:01.915 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.915 http://cunit.sourceforge.net/ 00:08:01.915 00:08:01.915 00:08:01.915 Suite: rdma_common 00:08:01.916 Test: test_spdk_rdma_pd ...[2024-11-26 11:18:19.892354] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:01.916 [2024-11-26 11:18:19.892709] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:01.916 passed 00:08:01.916 00:08:01.916 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.916 suites 1 1 n/a 0 0 00:08:01.916 tests 1 1 1 0 0 00:08:01.916 asserts 31 31 31 0 n/a 00:08:01.916 00:08:01.916 Elapsed time = 0.001 seconds 00:08:01.916 00:08:01.916 real 0m0.028s 00:08:01.916 user 0m0.012s 00:08:01.916 sys 0m0.016s 00:08:01.916 11:18:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.916 ************************************ 00:08:01.916 END TEST unittest_rdma 00:08:01.916 ************************************ 00:08:01.916 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.916 11:18:19 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:01.916 11:18:19 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:01.916 11:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.916 11:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.916 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.916 ************************************ 00:08:01.916 START TEST unittest_nvme_cuse 00:08:01.916 ************************************ 00:08:01.916 11:18:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:01.916 00:08:01.916 00:08:01.916 
CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.916 http://cunit.sourceforge.net/ 00:08:01.916 00:08:01.916 00:08:01.916 Suite: nvme_cuse 00:08:01.916 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:01.916 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:01.916 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:01.916 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:01.916 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:01.916 Test: test_cuse_nvme_submit_io ...passed 00:08:01.916 Test: test_cuse_nvme_reset ...[2024-11-26 11:18:19.981100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:01.916 [2024-11-26 11:18:19.981414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:01.916 passed 00:08:01.916 Test: test_nvme_cuse_stop ...passed 00:08:01.916 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:01.916 00:08:01.916 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.916 suites 1 1 n/a 0 0 00:08:01.916 tests 9 9 9 0 0 00:08:01.916 asserts 121 121 121 0 n/a 00:08:01.916 00:08:01.916 Elapsed time = 0.002 seconds 00:08:01.916 00:08:01.916 real 0m0.037s 00:08:01.916 user 0m0.018s 00:08:01.916 sys 0m0.020s 00:08:01.916 11:18:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.916 ************************************ 00:08:01.916 END TEST unittest_nvme_cuse 00:08:01.916 ************************************ 00:08:01.916 11:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:01.916 11:18:20 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:08:01.916 11:18:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.916 11:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.916 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:01.916 ************************************ 00:08:01.916 START TEST unittest_nvmf 00:08:01.916 ************************************ 00:08:01.916 11:18:20 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:08:01.916 11:18:20 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:01.916 00:08:01.916 00:08:01.916 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.916 http://cunit.sourceforge.net/ 00:08:01.916 00:08:01.916 00:08:01.916 Suite: nvmf 00:08:01.916 Test: test_get_log_page ...passed 00:08:01.916 Test: test_process_fabrics_cmd ...passed 00:08:01.916 Test: test_connect ...[2024-11-26 11:18:20.075767] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:01.916 [2024-11-26 11:18:20.077188] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:01.916 [2024-11-26 11:18:20.077307] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:01.916 [2024-11-26 11:18:20.077379] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:01.916 [2024-11-26 11:18:20.077453] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:01.916 [2024-11-26 11:18:20.077522] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 
779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:01.916 [2024-11-26 11:18:20.077612] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:01.916 [2024-11-26 11:18:20.077722] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:01.916 [2024-11-26 11:18:20.077779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:01.916 [2024-11-26 11:18:20.077984] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:01.916 [2024-11-26 11:18:20.078103] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:01.916 [2024-11-26 11:18:20.078567] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:01.916 [2024-11-26 11:18:20.078701] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:01.916 [2024-11-26 11:18:20.078851] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:01.916 [2024-11-26 11:18:20.079001] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:01.916 [2024-11-26 11:18:20.079180] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:01.916 [2024-11-26 11:18:20.079410] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:01.916 passed 00:08:01.916 Test: test_get_ns_id_desc_list ...passed 00:08:01.916 Test: test_identify_ns ...[2024-11-26 11:18:20.079895] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:01.916 [2024-11-26 11:18:20.080173] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:01.916 [2024-11-26 11:18:20.080358] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:01.916 passed 00:08:01.916 Test: test_identify_ns_iocs_specific ...[2024-11-26 11:18:20.080573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:01.916 [2024-11-26 11:18:20.080956] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:01.916 passed 00:08:01.916 Test: test_reservation_write_exclusive ...passed 00:08:01.916 Test: test_reservation_exclusive_access ...passed 00:08:01.916 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:01.916 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:01.916 Test: test_reservation_notification_log_page ...passed 00:08:01.916 Test: test_get_dif_ctx ...passed 00:08:01.916 Test: test_set_get_features ...[2024-11-26 11:18:20.081549] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:01.916 [2024-11-26 
11:18:20.081616] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:01.916 [2024-11-26 11:18:20.081672] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:01.916 passed 00:08:01.916 Test: test_identify_ctrlr ...passed 00:08:01.916 Test: test_identify_ctrlr_iocs_specific ...[2024-11-26 11:18:20.081730] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:01.916 passed 00:08:01.916 Test: test_custom_admin_cmd ...passed 00:08:01.916 Test: test_fused_compare_and_write ...passed 00:08:01.916 Test: test_multi_async_event_reqs ...[2024-11-26 11:18:20.082446] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:01.916 [2024-11-26 11:18:20.082537] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:01.916 [2024-11-26 11:18:20.082599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:01.916 passed 00:08:01.916 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:01.916 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:01.916 Test: test_multi_async_events ...passed 00:08:01.916 Test: test_rae ...passed 00:08:01.916 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:01.916 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:01.916 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:08:01.916 Test: test_zcopy_read ...passed 00:08:01.916 Test: test_zcopy_write ...passed 00:08:01.916 Test: test_nvmf_property_set ...[2024-11-26 11:18:20.083386] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:01.916 passed 00:08:01.916 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:08:01.916 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-26 11:18:20.083721] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:01.916 [2024-11-26 11:18:20.083816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:01.916 [2024-11-26 11:18:20.083944] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:01.916 passed 00:08:01.916 00:08:01.917 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.917 suites 1 1 n/a 0 0 00:08:01.917 tests 30 30 30 0 0 00:08:01.917 asserts 885 885 885 0 n/a 00:08:01.917 00:08:01.917 Elapsed time = 0.009 seconds 00:08:01.917 [2024-11-26 11:18:20.084042] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:01.917 [2024-11-26 11:18:20.084106] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:01.917 11:18:20 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:01.917 00:08:01.917 00:08:01.917 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.917 
http://cunit.sourceforge.net/ 00:08:01.917 00:08:01.917 00:08:01.917 Suite: nvmf 00:08:01.917 Test: test_get_rw_params ...passed 00:08:01.917 Test: test_lba_in_range ...passed 00:08:01.917 Test: test_get_dif_ctx ...passed 00:08:01.917 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:01.917 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:08:01.917 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:08:01.917 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-11-26 11:18:20.124284] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:01.917 [2024-11-26 11:18:20.124562] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:01.917 [2024-11-26 11:18:20.124615] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:01.917 [2024-11-26 11:18:20.124674] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:01.917 [2024-11-26 11:18:20.124719] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:01.917 [2024-11-26 11:18:20.124774] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:01.917 passed 00:08:01.917 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:01.917 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:01.917 00:08:01.917 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.917 suites 1 1 n/a 0 0 00:08:01.917 tests 9 9 9 0 0 00:08:01.917 asserts 157 157 157 0 n/a 00:08:01.917 00:08:01.917 Elapsed time = 0.001 seconds 00:08:01.917 [2024-11-26 11:18:20.124810] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:01.917 [2024-11-26 11:18:20.124844] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:01.917 [2024-11-26 11:18:20.124929] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:01.917 11:18:20 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:02.177 00:08:02.177 00:08:02.177 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.177 http://cunit.sourceforge.net/ 00:08:02.177 00:08:02.177 00:08:02.177 Suite: nvmf 00:08:02.177 Test: test_discovery_log ...passed 00:08:02.177 Test: test_discovery_log_with_filters ...passed 00:08:02.177 00:08:02.177 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.177 suites 1 1 n/a 0 0 00:08:02.177 tests 2 2 2 0 0 00:08:02.177 asserts 238 238 238 0 n/a 00:08:02.177 00:08:02.177 Elapsed time = 0.003 seconds 00:08:02.177 11:18:20 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:02.177 00:08:02.177 00:08:02.177 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.177 http://cunit.sourceforge.net/ 00:08:02.177 00:08:02.177 00:08:02.177 Suite: nvmf 00:08:02.177 Test: nvmf_test_create_subsystem ...[2024-11-26 11:18:20.205155] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN 
"nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:02.177 [2024-11-26 11:18:20.205565] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:02.177 [2024-11-26 11:18:20.205632] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:02.177 [2024-11-26 11:18:20.205663] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:02.177 [2024-11-26 11:18:20.205722] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:02.177 [2024-11-26 11:18:20.205769] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:02.177 [2024-11-26 11:18:20.205975] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:02.177 [2024-11-26 11:18:20.206110] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:02.177 [2024-11-26 11:18:20.206260] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:02.177 [2024-11-26 11:18:20.206304] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:02.177 passed 00:08:02.177 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-26 11:18:20.206360] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:02.177 passed 00:08:02.177 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:02.177 Test: test_reservation_register ...[2024-11-26 11:18:20.206709] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:02.177 [2024-11-26 11:18:20.206773] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:02.177 [2024-11-26 11:18:20.207195] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_reservation_register_with_ptpl ...[2024-11-26 11:18:20.207417] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:02.177 passed 00:08:02.177 Test: test_reservation_acquire_preempt_1 ...passed 00:08:02.177 Test: test_reservation_acquire_release_with_ptpl ...[2024-11-26 11:18:20.208804] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_reservation_release ...passed 00:08:02.177 Test: test_reservation_unregister_notification ...[2024-11-26 11:18:20.210970] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_reservation_release_notification ...[2024-11-26 11:18:20.211240] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_reservation_release_notification_write_exclusive ...[2024-11-26 11:18:20.211534] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_reservation_clear_notification ...[2024-11-26 11:18:20.211932] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_reservation_preempt_notification ...[2024-11-26 11:18:20.212300] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 [2024-11-26 11:18:20.212544] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.177 passed 00:08:02.177 Test: test_spdk_nvmf_ns_event ...passed 00:08:02.177 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:02.177 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:02.177 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:08:02.177 Test: test_nvmf_ns_reservation_report ...[2024-11-26 11:18:20.213489] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:02.177 [2024-11-26 11:18:20.213636] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:02.177 passed 00:08:02.177 Test: test_nvmf_nqn_is_valid ...[2024-11-26 11:18:20.213856] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:02.177 [2024-11-26 11:18:20.214004] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:02.177 [2024-11-26 11:18:20.214072] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:089588a5-6837-4c26-963b-d4cf4d2f372": uuid is not the correct length 00:08:02.177 passed 00:08:02.177 Test: test_nvmf_ns_reservation_restore ...passed 00:08:02.177 Test: test_nvmf_subsystem_state_change ...[2024-11-26 11:18:20.214123] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:02.177 [2024-11-26 11:18:20.214268] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:02.177 passed 00:08:02.177 Test: test_nvmf_reservation_custom_ops ...passed 00:08:02.177 00:08:02.177 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.177 suites 1 1 n/a 0 0 00:08:02.177 tests 22 22 22 0 0 00:08:02.177 asserts 407 407 407 0 n/a 00:08:02.177 00:08:02.177 Elapsed time = 0.010 seconds 00:08:02.177 11:18:20 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:02.177 00:08:02.177 00:08:02.177 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.177 http://cunit.sourceforge.net/ 00:08:02.177 00:08:02.177 00:08:02.177 Suite: nvmf 00:08:02.177 Test: test_nvmf_tcp_create ...[2024-11-26 11:18:20.282807] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:02.177 passed 00:08:02.177 Test: test_nvmf_tcp_destroy ...passed 00:08:02.178 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:02.178 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:02.178 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:02.178 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:02.437 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:02.437 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-26 11:18:20.410764] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 passed 00:08:02.437 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-11-26 11:18:20.411017] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130b020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.411089] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130b020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.411149] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.411193] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130b020 is same with the state(5) to be set 00:08:02.437 passed 00:08:02.437 Test: test_nvmf_tcp_icreq_handle ...[2024-11-26 11:18:20.411316] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:02.437 [2024-11-26 11:18:20.411376] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.411429] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130d180 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.411479] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:02.437 [2024-11-26 11:18:20.411550] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130d180 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.411595] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.411643] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130d180 is same with the state(5) to be set 00:08:02.437 passed 00:08:02.437 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:02.437 Test: test_nvmf_tcp_invalid_sgl ...[2024-11-26 11:18:20.411726] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.411799] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307130d180 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.411998] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:02.437 [2024-11-26 11:18:20.412077] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 passed 00:08:02.437 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-26 11:18:20.412136] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e30713116a0 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.412228] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7e307120c8c0 00:08:02.437 [2024-11-26 11:18:20.412284] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.412350] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.412404] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7e307120c020 00:08:02.437 [2024-11-26 11:18:20.412445] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.412510] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.412597] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:02.437 [2024-11-26 11:18:20.412645] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.412704] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.412751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:02.437 [2024-11-26 11:18:20.412818] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.412862] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.412982] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.437 [2024-11-26 11:18:20.413028] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.437 [2024-11-26 11:18:20.413099] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.438 [2024-11-26 11:18:20.413152] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.438 [2024-11-26 11:18:20.413251] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.438 [2024-11-26 11:18:20.413304] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.438 [2024-11-26 11:18:20.413368] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.438 [2024-11-26 11:18:20.413408] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.438 [2024-11-26 11:18:20.413444] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.438 [2024-11-26 11:18:20.413468] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.438 [2024-11-26 11:18:20.413529] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.438 passed 00:08:02.438 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-26 11:18:20.413566] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e307120c020 is same with the state(5) to be set 00:08:02.438 passed 00:08:02.438 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-26 11:18:20.459688] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:02.438 passed 00:08:02.438 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-26 11:18:20.459782] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:02.438 [2024-11-26 11:18:20.461031] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:02.438 [2024-11-26 11:18:20.461098] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:02.438 passed 00:08:02.438 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:08:02.438 00:08:02.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.438 suites 1 1 n/a 0 0 00:08:02.438 tests 17 17 17 0 0 00:08:02.438 asserts 222 222 222 0 n/a 00:08:02.438 00:08:02.438 Elapsed time = 0.202 seconds 00:08:02.438 [2024-11-26 11:18:20.461867] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:02.438 [2024-11-26 11:18:20.461923] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
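The subsystem and tcp suites above exercise SPDK's NQN validation (nvmf_nqn_is_valid in lib/nvmf/subsystem.c), and the rules can be read straight off the error strings in the log: a minimum NQN length of 11, dot-separated domain labels that must each start with a letter, and, for the "nqn.2014-08.org.nvmexpress:uuid:" form, a UUID of exactly 36 characters laid out as 8-4-4-4-12 hex groups. The sketch below reconstructs only those checks from the messages above; it is not SPDK's implementation, and the names nqn_is_valid_sketch and nqn_uuid_ok are invented for illustration (the date prefix itself is assumed well-formed and not validated here).

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define NVMF_NQN_MIN_LEN 11               /* cf. "length 4 < min 11" */
    #define NVMF_UUID_PREFIX "nqn.2014-08.org.nvmexpress:uuid:"
    #define NVMF_UUID_LEN    36               /* 8-4-4-4-12 hex groups */

    /* Hypothetical helper: checks the 8-4-4-4-12 layout implied by the
     * "uuid is not the correct length" / "not formatted correctly" errors. */
    static bool nqn_uuid_ok(const char *uuid)
    {
        if (strlen(uuid) != NVMF_UUID_LEN) {
            return false;                     /* wrong length */
        }
        for (int i = 0; i < NVMF_UUID_LEN; i++) {
            if (i == 8 || i == 13 || i == 18 || i == 23) {
                if (uuid[i] != '-') {
                    return false;             /* dash in the wrong place */
                }
            } else if (!isxdigit((unsigned char)uuid[i])) {
                return false;                 /* non-hex digit, e.g. "ff9hg406" */
            }
        }
        return true;
    }

    static bool nqn_is_valid_sketch(const char *nqn)
    {
        size_t len = strlen(nqn);

        if (len < NVMF_NQN_MIN_LEN) {
            return false;                     /* "nqn.": too short */
        }
        if (strncmp(nqn, NVMF_UUID_PREFIX, strlen(NVMF_UUID_PREFIX)) == 0) {
            return nqn_uuid_ok(nqn + strlen(NVMF_UUID_PREFIX));
        }
        /* Walk the dot-separated domain labels after "nqn.yyyy-mm.";
         * each must start with a letter (cf. "Invalid domain name in NQN
         * "nqn.2016-06.io...spdk:cnode1". Label names must start with a
         * letter."). An empty label ("..") fails the isalpha() check. */
        if (len < 13 || nqn[11] != '.') {
            return false;
        }
        const char *p = nqn + 12;
        while (*p != '\0' && *p != ':') {
            if (!isalpha((unsigned char)*p)) {
                return false;
            }
            while (*p != '\0' && *p != '.' && *p != ':') {
                p++;
            }
            if (*p == '.') {
                p++;
            }
        }
        return true;
    }

Against the strings exercised in the log, this sketch rejects "nqn." (too short), the 40-character and mis-grouped uuid variants, and "nqn.2016-06.io...spdk:cnode1" (empty label), while accepting a well-formed "nqn.2016-06.io.spdk:cnode1".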
00:08:02.438 11:18:20 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:02.438 00:08:02.438 00:08:02.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.438 http://cunit.sourceforge.net/ 00:08:02.438 00:08:02.438 00:08:02.438 Suite: nvmf 00:08:02.438 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:02.438 00:08:02.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.438 suites 1 1 n/a 0 0 00:08:02.438 tests 1 1 1 0 0 00:08:02.438 asserts 17 17 17 0 n/a 00:08:02.438 00:08:02.438 Elapsed time = 0.025 seconds 00:08:02.438 00:08:02.438 real 0m0.575s 00:08:02.438 user 0m0.245s 00:08:02.438 sys 0m0.327s 00:08:02.438 11:18:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.438 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:02.438 ************************************ 00:08:02.438 END TEST unittest_nvmf 00:08:02.438 ************************************ 00:08:02.697 11:18:20 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:02.698 11:18:20 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:02.698 11:18:20 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:02.698 11:18:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.698 11:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.698 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:02.698 ************************************ 00:08:02.698 START TEST unittest_nvmf_rdma 00:08:02.698 ************************************ 00:08:02.698 11:18:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:02.698 00:08:02.698 00:08:02.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.698 http://cunit.sourceforge.net/ 00:08:02.698 00:08:02.698 00:08:02.698 Suite: nvmf 00:08:02.698 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-26 11:18:20.713446] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:02.698 [2024-11-26 11:18:20.713715] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:02.698 [2024-11-26 11:18:20.713783] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:02.698 passed 00:08:02.698 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:02.698 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:02.698 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:02.698 Test: test_nvmf_rdma_opts_init ...passed 00:08:02.698 Test: test_nvmf_rdma_request_free_data ...passed 00:08:02.698 Test: test_nvmf_rdma_update_ibv_state ...passed 00:08:02.698 Test: test_nvmf_rdma_resources_create ...[2024-11-26 11:18:20.715414] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:08:02.698 [2024-11-26 11:18:20.715479] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:08:02.698 passed 00:08:02.698 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:02.698 Test: test_nvmf_rdma_resize_cq ...passed 00:08:02.698 00:08:02.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.698 suites 1 1 n/a 0 0 00:08:02.698 tests 10 10 10 0 0 00:08:02.698 asserts 584 584 584 0 n/a 00:08:02.698 00:08:02.698 Elapsed time = 0.004 seconds 00:08:02.698 [2024-11-26 11:18:20.716988] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:02.698 Using CQ of insufficient size may lead to CQ overrun 00:08:02.698 [2024-11-26 11:18:20.717045] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:02.698 [2024-11-26 11:18:20.717114] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:02.698 00:08:02.698 real 0m0.045s 00:08:02.698 user 0m0.020s 00:08:02.698 sys 0m0.025s 00:08:02.698 ************************************ 00:08:02.698 END TEST unittest_nvmf_rdma 00:08:02.698 11:18:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.698 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:02.698 ************************************ 00:08:02.698 11:18:20 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:02.698 11:18:20 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:08:02.698 11:18:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.698 11:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.698 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:02.698 ************************************ 00:08:02.698 START TEST unittest_scsi 00:08:02.698 ************************************ 00:08:02.698 11:18:20 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:08:02.698 11:18:20 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:02.698 00:08:02.698 00:08:02.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.698 http://cunit.sourceforge.net/ 00:08:02.698 00:08:02.698 00:08:02.698 Suite: dev_suite 00:08:02.698 Test: dev_destruct_null_dev ...passed 00:08:02.698 Test: dev_destruct_zero_luns ...passed 00:08:02.698 Test: dev_destruct_null_lun ...passed 00:08:02.698 Test: dev_destruct_success ...passed 00:08:02.698 Test: dev_construct_num_luns_zero ...passed 00:08:02.698 Test: dev_construct_no_lun_zero ...passed 00:08:02.698 Test: dev_construct_null_lun ...passed 00:08:02.698 Test: dev_construct_name_too_long ...[2024-11-26 11:18:20.811026] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:02.698 [2024-11-26 11:18:20.811257] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:02.698 [2024-11-26 11:18:20.811336] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:02.698 passed 00:08:02.698 Test: dev_construct_success ...[2024-11-26 11:18:20.811411] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:02.698 passed 00:08:02.698 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:02.698 Test: dev_queue_mgmt_task_success ...passed 00:08:02.698 Test: dev_queue_task_success ...passed 00:08:02.698 Test: dev_stop_success ...passed 00:08:02.698 Test: dev_add_port_max_ports ...passed 00:08:02.698 Test: dev_add_port_construct_failure1 ...[2024-11-26 11:18:20.811754] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:02.698 passed 00:08:02.698 Test: dev_add_port_construct_failure2 ...passed 00:08:02.698 Test: dev_add_port_success1 ...passed 00:08:02.698 Test: dev_add_port_success2 ...passed 00:08:02.698 Test: dev_add_port_success3 ...passed 00:08:02.698 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:02.698 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:02.698 Test: dev_find_port_by_id_success ...passed 00:08:02.698 Test: dev_add_lun_bdev_not_found ...passed 00:08:02.698 Test: dev_add_lun_no_free_lun_id ...[2024-11-26 11:18:20.811806] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:02.698 [2024-11-26 11:18:20.811870] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:02.698 passed 00:08:02.698 Test: dev_add_lun_success1 ...passed 00:08:02.698 Test: dev_add_lun_success2 ...passed 00:08:02.698 Test: dev_check_pending_tasks ...passed 00:08:02.698 Test: dev_iterate_luns ...passed 00:08:02.698 Test: dev_find_free_lun ...[2024-11-26 11:18:20.812394] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:02.698 passed 00:08:02.698 00:08:02.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.698 suites 1 1 n/a 0 0 00:08:02.698 tests 29 29 29 0 0 00:08:02.698 asserts 97 97 97 0 n/a 00:08:02.698 00:08:02.698 Elapsed time = 0.002 seconds 00:08:02.698 11:18:20 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:02.698 00:08:02.698 00:08:02.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.698 http://cunit.sourceforge.net/ 00:08:02.698 00:08:02.698 00:08:02.698 Suite: lun_suite 00:08:02.698 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:08:02.698 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-26 11:18:20.843920] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:02.698 passed 00:08:02.698 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:02.698 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:02.698 Test: lun_task_mgmt_execute_invalid_case ...[2024-11-26 11:18:20.844298] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:02.698 passed 00:08:02.698 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:02.698 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:02.698 Test: lun_append_task_null_lun_not_supported ...passed[2024-11-26 11:18:20.844510] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 
169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:02.698 00:08:02.698 Test: lun_execute_scsi_task_pending ...passed 00:08:02.698 Test: lun_execute_scsi_task_complete ...passed 00:08:02.698 Test: lun_execute_scsi_task_resize ...passed 00:08:02.698 Test: lun_destruct_success ...passed 00:08:02.698 Test: lun_construct_null_ctx ...passed 00:08:02.698 Test: lun_construct_success ...passed 00:08:02.698 Test: lun_reset_task_wait_scsi_task_complete ...[2024-11-26 11:18:20.844903] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:02.698 passed 00:08:02.698 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:02.698 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:02.698 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:02.698 00:08:02.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.698 suites 1 1 n/a 0 0 00:08:02.698 tests 18 18 18 0 0 00:08:02.698 asserts 153 153 153 0 n/a 00:08:02.698 00:08:02.698 Elapsed time = 0.002 seconds 00:08:02.698 11:18:20 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:02.698 00:08:02.698 00:08:02.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.698 http://cunit.sourceforge.net/ 00:08:02.698 00:08:02.698 00:08:02.699 Suite: scsi_suite 00:08:02.699 Test: scsi_init ...passed 00:08:02.699 00:08:02.699 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.699 suites 1 1 n/a 0 0 00:08:02.699 tests 1 1 1 0 0 00:08:02.699 asserts 1 1 1 0 n/a 00:08:02.699 00:08:02.699 Elapsed time = 0.000 seconds 00:08:02.699 11:18:20 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:02.699 00:08:02.699 00:08:02.699 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.699 http://cunit.sourceforge.net/ 00:08:02.699 00:08:02.699 00:08:02.699 Suite: translation_suite 00:08:02.699 Test: mode_select_6_test ...passed 00:08:02.699 Test: mode_select_6_test2 ...passed 00:08:02.699 Test: mode_sense_6_test ...passed 00:08:02.699 Test: mode_sense_10_test ...passed 00:08:02.699 Test: inquiry_evpd_test ...passed 00:08:02.699 Test: inquiry_standard_test ...passed 00:08:02.699 Test: inquiry_overflow_test ...passed 00:08:02.699 Test: task_complete_test ...passed 00:08:02.699 Test: lba_range_test ...passed 00:08:02.699 Test: xfer_len_test ...passed 00:08:02.699 Test: xfer_test ...[2024-11-26 11:18:20.918991] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:02.699 passed 00:08:02.699 Test: scsi_name_padding_test ...passed 00:08:02.699 Test: get_dif_ctx_test ...passed 00:08:02.699 Test: unmap_split_test ...passed 00:08:02.699 00:08:02.699 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.699 suites 1 1 n/a 0 0 00:08:02.699 tests 14 14 14 0 0 00:08:02.699 asserts 1200 1200 1200 0 n/a 00:08:02.699 00:08:02.699 Elapsed time = 0.006 seconds 00:08:02.958 11:18:20 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:02.958 00:08:02.958 00:08:02.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.959 http://cunit.sourceforge.net/ 00:08:02.959 00:08:02.959 00:08:02.959 Suite: reservation_suite 00:08:02.959 Test: test_reservation_register ...passed 00:08:02.959 Test: test_reservation_reserve ...passed 00:08:02.959 Test: test_reservation_preempt_non_all_regs ...[2024-11-26 
11:18:20.956487] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:02.959 [2024-11-26 11:18:20.956793] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:02.959 [2024-11-26 11:18:20.956955] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:02.959 [2024-11-26 11:18:20.957035] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:02.959 [2024-11-26 11:18:20.957155] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:02.959 passed 00:08:02.959 Test: test_reservation_preempt_all_regs ...passed 00:08:02.959 Test: test_reservation_cmds_conflict ...[2024-11-26 11:18:20.957244] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:02.959 [2024-11-26 11:18:20.957418] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:02.959 [2024-11-26 11:18:20.957588] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:02.959 [2024-11-26 11:18:20.957694] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:02.959 [2024-11-26 11:18:20.957740] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:02.959 passed 00:08:02.959 Test: test_scsi2_reserve_release ...passed 00:08:02.959 Test: test_pr_with_scsi2_reserve_release ...[2024-11-26 11:18:20.957808] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:02.959 [2024-11-26 11:18:20.957859] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:02.959 [2024-11-26 11:18:20.957949] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:02.959 passed 00:08:02.959 00:08:02.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.959 suites 1 1 n/a 0 0 00:08:02.959 tests 7 7 7 0 0 00:08:02.959 asserts 257 257 257 0 n/a 00:08:02.959 00:08:02.959 Elapsed time = 0.002 seconds 00:08:02.959 [2024-11-26 11:18:20.958056] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:02.959 00:08:02.959 real 0m0.176s 00:08:02.959 user 0m0.085s 00:08:02.959 sys 0m0.091s 00:08:02.959 11:18:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.959 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 END TEST unittest_scsi 00:08:02.959 ************************************ 00:08:02.959 11:18:21 -- unit/unittest.sh@252 -- # uname -s 00:08:02.959 11:18:21 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:08:02.959 11:18:21 -- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock 00:08:02.959 11:18:21 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.959 11:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.959 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 START TEST unittest_sock 00:08:02.959 ************************************ 00:08:02.959 11:18:21 -- common/autotest_common.sh@1114 -- # unittest_sock 00:08:02.959 11:18:21 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:02.959 00:08:02.959 00:08:02.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.959 http://cunit.sourceforge.net/ 00:08:02.959 00:08:02.959 00:08:02.959 Suite: sock 00:08:02.959 Test: posix_sock ...passed 00:08:02.959 Test: ut_sock ...passed 00:08:02.959 Test: posix_sock_group ...passed 00:08:02.959 Test: ut_sock_group ...passed 00:08:02.959 Test: posix_sock_group_fairness ...passed 00:08:02.959 Test: _posix_sock_close ...passed 00:08:02.959 Test: sock_get_default_opts ...passed 00:08:02.959 Test: ut_sock_impl_get_set_opts ...passed 00:08:02.959 Test: posix_sock_impl_get_set_opts ...passed 00:08:02.959 Test: ut_sock_map ...passed 00:08:02.959 Test: override_impl_opts ...passed 00:08:02.959 Test: ut_sock_group_get_ctx ...passed 00:08:02.959 00:08:02.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.959 suites 1 1 n/a 0 0 00:08:02.959 tests 12 12 12 0 0 00:08:02.959 asserts 349 349 349 0 n/a 00:08:02.959 00:08:02.959 Elapsed time = 0.009 seconds 00:08:02.959 11:18:21 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:02.959 00:08:02.959 00:08:02.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.959 http://cunit.sourceforge.net/ 00:08:02.959 00:08:02.959 00:08:02.959 Suite: posix 00:08:02.959 Test: flush ...passed 00:08:02.959 00:08:02.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.959 suites 1 1 n/a 0 0 00:08:02.959 tests 1 1 1 0 0 00:08:02.959 asserts 28 28 28 0 n/a 00:08:02.959 00:08:02.959 Elapsed time = 0.000 seconds 00:08:02.959 11:18:21 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:02.959 00:08:02.959 real 0m0.101s 00:08:02.959 user 0m0.035s 00:08:02.959 sys 0m0.043s 00:08:02.959 11:18:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.959 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 END TEST unittest_sock 00:08:02.959 ************************************ 00:08:02.959 11:18:21 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:02.959 11:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.959 11:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.959 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 START TEST unittest_thread 00:08:02.959 ************************************ 00:08:02.959 11:18:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:03.218 00:08:03.218 00:08:03.218 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.218 http://cunit.sourceforge.net/ 00:08:03.218 00:08:03.218 00:08:03.218 Suite: io_channel 00:08:03.219 Test: thread_alloc ...passed 00:08:03.219 Test: thread_send_msg ...passed 00:08:03.219 Test: thread_poller ...passed 
00:08:03.219 Test: poller_pause ...passed 00:08:03.219 Test: thread_for_each ...passed 00:08:03.219 Test: for_each_channel_remove ...passed 00:08:03.219 Test: for_each_channel_unreg ...[2024-11-26 11:18:21.221726] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x722e12d09640 already registered (old:0x513000000200 new:0x5130000003c0) 00:08:03.219 passed 00:08:03.219 Test: thread_name ...passed 00:08:03.219 Test: channel ...[2024-11-26 11:18:21.226730] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x6283d3b04120 00:08:03.219 passed 00:08:03.219 Test: channel_destroy_races ...passed 00:08:03.219 Test: thread_exit_test ...[2024-11-26 11:18:21.233314] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x518000005c80 got timeout, and move it to the exited state forcefully 00:08:03.219 passed 00:08:03.219 Test: thread_update_stats_test ...passed 00:08:03.219 Test: nested_channel ...passed 00:08:03.219 Test: device_unregister_and_thread_exit_race ...passed 00:08:03.219 Test: cache_closest_timed_poller ...passed 00:08:03.219 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:03.219 Test: io_device_lookup ...passed 00:08:03.219 Test: spdk_spin ...[2024-11-26 11:18:21.247146] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:03.219 [2024-11-26 11:18:21.247205] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x722e12d0a020 00:08:03.219 [2024-11-26 11:18:21.247229] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:03.219 [2024-11-26 11:18:21.249591] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:03.219 [2024-11-26 11:18:21.249653] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x722e12d0a020 00:08:03.219 [2024-11-26 11:18:21.249673] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:03.219 [2024-11-26 11:18:21.249691] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x722e12d0a020 00:08:03.219 [2024-11-26 11:18:21.249717] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:03.219 [2024-11-26 11:18:21.249757] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x722e12d0a020 00:08:03.219 [2024-11-26 11:18:21.249784] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:03.219 [2024-11-26 11:18:21.249826] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x722e12d0a020 00:08:03.219 passed 00:08:03.219 Test: for_each_channel_and_thread_exit_race ...passed 00:08:03.219 Test: for_each_thread_and_thread_exit_race ...passed 00:08:03.219 00:08:03.219 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.219 suites 1 1 n/a 0 0 00:08:03.219 tests 20 20 
20 0 0 00:08:03.219 asserts 409 409 409 0 n/a 00:08:03.219 00:08:03.219 Elapsed time = 0.063 seconds 00:08:03.219 00:08:03.219 real 0m0.101s 00:08:03.219 user 0m0.069s 00:08:03.219 sys 0m0.032s 00:08:03.219 11:18:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.219 ************************************ 00:08:03.219 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:03.219 END TEST unittest_thread 00:08:03.219 ************************************ 00:08:03.219 11:18:21 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:03.219 11:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.219 11:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.219 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:03.219 ************************************ 00:08:03.219 START TEST unittest_iobuf 00:08:03.219 ************************************ 00:08:03.219 11:18:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:03.219 00:08:03.219 00:08:03.219 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.219 http://cunit.sourceforge.net/ 00:08:03.219 00:08:03.219 00:08:03.219 Suite: io_channel 00:08:03.219 Test: iobuf ...passed 00:08:03.219 Test: iobuf_cache ...[2024-11-26 11:18:21.358538] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:03.219 [2024-11-26 11:18:21.358792] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:03.219 [2024-11-26 11:18:21.358927] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:03.219 [2024-11-26 11:18:21.358968] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:03.219 [2024-11-26 11:18:21.359072] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:03.219 [2024-11-26 11:18:21.359112] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:08:03.219 passed 00:08:03.219 00:08:03.219 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.219 suites 1 1 n/a 0 0 00:08:03.219 tests 2 2 2 0 0 00:08:03.219 asserts 107 107 107 0 n/a 00:08:03.219 00:08:03.219 Elapsed time = 0.007 seconds 00:08:03.219 00:08:03.219 real 0m0.044s 00:08:03.219 user 0m0.029s 00:08:03.219 sys 0m0.015s 00:08:03.219 11:18:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.219 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:03.219 ************************************ 00:08:03.219 END TEST unittest_iobuf 00:08:03.219 ************************************ 00:08:03.219 11:18:21 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:08:03.219 11:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.219 11:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.219 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:08:03.219 ************************************ 00:08:03.219 START TEST unittest_util 00:08:03.219 ************************************ 00:08:03.219 11:18:21 -- common/autotest_common.sh@1114 -- # unittest_util 00:08:03.219 11:18:21 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:03.219 00:08:03.219 00:08:03.219 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.219 http://cunit.sourceforge.net/ 00:08:03.219 00:08:03.219 00:08:03.219 Suite: base64 00:08:03.219 Test: test_base64_get_encoded_strlen ...passed 00:08:03.219 Test: test_base64_get_decoded_len ...passed 00:08:03.219 Test: test_base64_encode ...passed 00:08:03.219 Test: test_base64_decode ...passed 00:08:03.219 Test: test_base64_urlsafe_encode ...passed 00:08:03.219 Test: test_base64_urlsafe_decode ...passed 00:08:03.219 00:08:03.219 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.219 suites 1 1 n/a 0 0 00:08:03.219 tests 6 6 6 0 0 00:08:03.219 asserts 112 112 112 0 n/a 00:08:03.219 00:08:03.219 Elapsed time = 0.000 seconds 00:08:03.479 11:18:21 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:03.479 00:08:03.479 00:08:03.479 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.479 http://cunit.sourceforge.net/ 00:08:03.479 00:08:03.479 00:08:03.479 Suite: bit_array 00:08:03.479 Test: test_1bit ...passed 00:08:03.479 Test: test_64bit ...passed 00:08:03.479 Test: test_find ...passed 00:08:03.479 Test: test_resize ...passed 00:08:03.479 Test: test_errors ...passed 00:08:03.479 Test: test_count ...passed 00:08:03.479 Test: test_mask_store_load ...passed 00:08:03.479 Test: test_mask_clear ...passed 00:08:03.479 00:08:03.479 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.479 suites 1 1 n/a 0 0 00:08:03.479 tests 8 8 8 0 0 00:08:03.479 asserts 5075 5075 5075 0 n/a 00:08:03.479 00:08:03.479 Elapsed time = 0.002 seconds 00:08:03.479 11:18:21 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:03.479 00:08:03.479 00:08:03.479 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.479 http://cunit.sourceforge.net/ 00:08:03.479 00:08:03.479 00:08:03.479 Suite: cpuset 00:08:03.479 Test: test_cpuset ...passed 00:08:03.479 Test: test_cpuset_parse ...[2024-11-26 11:18:21.512149] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:03.479 [2024-11-26 11:18:21.512468] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:03.479 [2024-11-26 11:18:21.512511] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:03.479 [2024-11-26 11:18:21.512549] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:03.479 [2024-11-26 11:18:21.512578] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:03.479 [2024-11-26 11:18:21.512612] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:03.479 [2024-11-26 11:18:21.512651] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:03.479 [2024-11-26 11:18:21.512712] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:03.479 passed 00:08:03.479 Test: test_cpuset_fmt ...passed 00:08:03.479 00:08:03.479 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.479 suites 1 1 n/a 0 0 00:08:03.479 tests 3 3 3 0 0 00:08:03.479 asserts 65 65 65 0 n/a 00:08:03.479 00:08:03.479 Elapsed time = 0.002 seconds 00:08:03.479 11:18:21 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:03.479 00:08:03.479 00:08:03.479 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.479 http://cunit.sourceforge.net/ 00:08:03.479 00:08:03.479 00:08:03.480 Suite: crc16 00:08:03.480 Test: test_crc16_t10dif ...passed 00:08:03.480 Test: test_crc16_t10dif_seed ...passed 00:08:03.480 Test: test_crc16_t10dif_copy ...passed 00:08:03.480 00:08:03.480 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.480 suites 1 1 n/a 0 0 00:08:03.480 tests 3 3 3 0 0 00:08:03.480 asserts 5 5 5 0 n/a 00:08:03.480 00:08:03.480 Elapsed time = 0.000 seconds 00:08:03.480 11:18:21 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:03.480 00:08:03.480 00:08:03.480 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.480 http://cunit.sourceforge.net/ 00:08:03.480 00:08:03.480 00:08:03.480 Suite: crc32_ieee 00:08:03.480 Test: test_crc32_ieee ...passed 00:08:03.480 00:08:03.480 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.480 suites 1 1 n/a 0 0 00:08:03.480 tests 1 1 1 0 0 00:08:03.480 asserts 1 1 1 0 n/a 00:08:03.480 00:08:03.480 Elapsed time = 0.000 seconds 00:08:03.480 11:18:21 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:03.480 00:08:03.480 00:08:03.480 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.480 http://cunit.sourceforge.net/ 00:08:03.480 00:08:03.480 00:08:03.480 Suite: crc32c 00:08:03.480 Test: test_crc32c ...passed 00:08:03.480 Test: test_crc32c_nvme ...passed 00:08:03.480 00:08:03.480 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.480 suites 1 1 n/a 0 0 00:08:03.480 tests 2 2 2 0 0 00:08:03.480 asserts 16 16 16 0 n/a 00:08:03.480 00:08:03.480 Elapsed time = 0.001 seconds 00:08:03.480 11:18:21 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:03.480 00:08:03.480 00:08:03.480 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.480 http://cunit.sourceforge.net/ 00:08:03.480 00:08:03.480 00:08:03.480 Suite: crc64 00:08:03.480 Test: test_crc64_nvme 
...passed 00:08:03.480 00:08:03.480 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.480 suites 1 1 n/a 0 0 00:08:03.480 tests 1 1 1 0 0 00:08:03.480 asserts 4 4 4 0 n/a 00:08:03.480 00:08:03.480 Elapsed time = 0.000 seconds 00:08:03.480 11:18:21 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:03.480 00:08:03.480 00:08:03.480 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.480 http://cunit.sourceforge.net/ 00:08:03.480 00:08:03.480 00:08:03.480 Suite: string 00:08:03.480 Test: test_parse_ip_addr ...passed 00:08:03.480 Test: test_str_chomp ...passed 00:08:03.480 Test: test_parse_capacity ...passed 00:08:03.480 Test: test_sprintf_append_realloc ...passed 00:08:03.480 Test: test_strtol ...passed 00:08:03.480 Test: test_strtoll ...passed 00:08:03.480 Test: test_strarray ...passed 00:08:03.480 Test: test_strcpy_replace ...passed 00:08:03.480 00:08:03.480 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.480 suites 1 1 n/a 0 0 00:08:03.480 tests 8 8 8 0 0 00:08:03.480 asserts 161 161 161 0 n/a 00:08:03.480 00:08:03.480 Elapsed time = 0.001 seconds 00:08:03.480 11:18:21 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:03.480 00:08:03.480 00:08:03.480 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.480 http://cunit.sourceforge.net/ 00:08:03.480 00:08:03.480 00:08:03.480 Suite: dif 00:08:03.480 Test: dif_generate_and_verify_test ...[2024-11-26 11:18:21.684290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:03.480 [2024-11-26 11:18:21.684710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:03.480 [2024-11-26 11:18:21.685058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:03.480 [2024-11-26 11:18:21.685369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:03.480 [2024-11-26 11:18:21.685694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:03.480 [2024-11-26 11:18:21.686101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:03.480 passed 00:08:03.480 Test: dif_disable_check_test ...[2024-11-26 11:18:21.687195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:03.480 [2024-11-26 11:18:21.687513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:03.480 [2024-11-26 11:18:21.687841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:03.480 passed 00:08:03.480 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-26 11:18:21.688948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:03.480 [2024-11-26 11:18:21.689295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:03.480 [2024-11-26 
11:18:21.689626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:03.480 [2024-11-26 11:18:21.689990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:03.480 [2024-11-26 11:18:21.690303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:03.480 [2024-11-26 11:18:21.690654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:03.480 [2024-11-26 11:18:21.691045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:03.480 [2024-11-26 11:18:21.691374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:03.480 [2024-11-26 11:18:21.691749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:03.480 [2024-11-26 11:18:21.692152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:03.480 [2024-11-26 11:18:21.692494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:03.480 passed 00:08:03.480 Test: dif_apptag_mask_test ...[2024-11-26 11:18:21.692862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:03.480 [2024-11-26 11:18:21.693197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:03.480 passed 00:08:03.480 Test: dif_sec_512_md_0_error_test ...passed 00:08:03.480 Test: dif_sec_4096_md_0_error_test ...[2024-11-26 11:18:21.693429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:03.480 [2024-11-26 11:18:21.693502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:03.480 passed 00:08:03.480 Test: dif_sec_4100_md_128_error_test ...[2024-11-26 11:18:21.693538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:03.480 [2024-11-26 11:18:21.693575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:03.480 passed 00:08:03.480 Test: dif_guard_seed_test ...[2024-11-26 11:18:21.693611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:03.480 passed 00:08:03.480 Test: dif_guard_value_test ...passed 00:08:03.480 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:03.480 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:03.480 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:03.480 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:03.742 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:03.742 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:03.742 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:03.742 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:03.742 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:03.742 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:03.742 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-26 11:18:21.738653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=f94c, Actual=fd4c 00:08:03.742 [2024-11-26 11:18:21.741196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fa21, Actual=fe21 00:08:03.742 [2024-11-26 11:18:21.743773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.742 [2024-11-26 11:18:21.746291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.742 [2024-11-26 11:18:21.748760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.742 [2024-11-26 11:18:21.751245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.742 [2024-11-26 11:18:21.753721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=dbc4 00:08:03.742 [2024-11-26 11:18:21.755923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe21, Actual=13de 00:08:03.742 [2024-11-26 11:18:21.758093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab757ed, 
Actual=1ab753ed 00:08:03.742 [2024-11-26 11:18:21.760589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38574260, Actual=38574660 00:08:03.742 [2024-11-26 11:18:21.763076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.742 [2024-11-26 11:18:21.765571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.742 [2024-11-26 11:18:21.768055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.742 [2024-11-26 11:18:21.770543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.742 [2024-11-26 11:18:21.773033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.742 [2024-11-26 11:18:21.775225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38574660, Actual=3dd831eb 00:08:03.742 [2024-11-26 11:18:21.777399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.742 [2024-11-26 11:18:21.779908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:08:03.742 [2024-11-26 11:18:21.782379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.784887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.787355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000005f 00:08:03.743 [2024-11-26 11:18:21.789855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000005f 00:08:03.743 [2024-11-26 11:18:21.792321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.743 [2024-11-26 11:18:21.794487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4837a266, Actual=7c83b46d6f25bbcc 00:08:03.743 passed 00:08:03.743 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-26 11:18:21.795779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:03.743 [2024-11-26 11:18:21.796120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:03.743 [2024-11-26 11:18:21.796455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.796768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 
11:18:21.797109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.797454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.797786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dbc4 00:08:03.743 [2024-11-26 11:18:21.798109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=13de 00:08:03.743 [2024-11-26 11:18:21.798394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:08:03.743 [2024-11-26 11:18:21.798724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:08:03.743 [2024-11-26 11:18:21.799081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.799426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.799759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.800113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.800448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.743 [2024-11-26 11:18:21.800742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=3dd831eb 00:08:03.743 [2024-11-26 11:18:21.801061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.743 [2024-11-26 11:18:21.801391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:08:03.743 [2024-11-26 11:18:21.801719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.802057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.802380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.743 [2024-11-26 11:18:21.802764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.743 [2024-11-26 11:18:21.803111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.743 [2024-11-26 11:18:21.803401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=7c83b46d6f25bbcc 00:08:03.743 passed 00:08:03.743 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-26 11:18:21.803747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:03.743 [2024-11-26 11:18:21.804119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:03.743 [2024-11-26 11:18:21.804466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.804798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.805134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.805478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.805793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dbc4 00:08:03.743 [2024-11-26 11:18:21.806079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=13de 00:08:03.743 [2024-11-26 11:18:21.806378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:08:03.743 [2024-11-26 11:18:21.806708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:08:03.743 [2024-11-26 11:18:21.807024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.807362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.807687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.808056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.808369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.743 [2024-11-26 11:18:21.808666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=3dd831eb 00:08:03.743 [2024-11-26 11:18:21.808957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.743 [2024-11-26 11:18:21.809289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:08:03.743 [2024-11-26 11:18:21.809604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.809935] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.810249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.743 [2024-11-26 11:18:21.810568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.743 [2024-11-26 11:18:21.810869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.743 [2024-11-26 11:18:21.811181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7c83b46d6f25bbcc 00:08:03.743 passed 00:08:03.743 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-26 11:18:21.811533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:03.743 [2024-11-26 11:18:21.811894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:03.743 [2024-11-26 11:18:21.812216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.812557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.812914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.813268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.813593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dbc4 00:08:03.743 [2024-11-26 11:18:21.813893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=13de 00:08:03.743 [2024-11-26 11:18:21.814183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:08:03.743 [2024-11-26 11:18:21.814495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:08:03.743 [2024-11-26 11:18:21.814823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.815154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.815476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.815792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.743 [2024-11-26 11:18:21.816149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1ab753ed, Actual=cc5e0a07 00:08:03.743 [2024-11-26 11:18:21.816442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=3dd831eb 00:08:03.743 [2024-11-26 11:18:21.816731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.743 [2024-11-26 11:18:21.817078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:08:03.743 [2024-11-26 11:18:21.817405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.817738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.743 [2024-11-26 11:18:21.818078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.743 [2024-11-26 11:18:21.818434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.743 [2024-11-26 11:18:21.818759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.744 [2024-11-26 11:18:21.819077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7c83b46d6f25bbcc 00:08:03.744 passed 00:08:03.744 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-26 11:18:21.819409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:03.744 [2024-11-26 11:18:21.819751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:03.744 [2024-11-26 11:18:21.820103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.820440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.820770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.821115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.821431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dbc4 00:08:03.744 [2024-11-26 11:18:21.821704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=13de 00:08:03.744 passed 00:08:03.744 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-26 11:18:21.822070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:08:03.744 [2024-11-26 11:18:21.822408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:08:03.744 [2024-11-26 11:18:21.822727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.823077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.823391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.823747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.824083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.744 [2024-11-26 11:18:21.824354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=3dd831eb 00:08:03.744 [2024-11-26 11:18:21.824672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.744 [2024-11-26 11:18:21.825021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:08:03.744 [2024-11-26 11:18:21.825350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.825672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.826009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.744 [2024-11-26 11:18:21.826351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.744 [2024-11-26 11:18:21.826669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.744 [2024-11-26 11:18:21.826972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7c83b46d6f25bbcc 00:08:03.744 passed 00:08:03.744 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-26 11:18:21.827297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:03.744 [2024-11-26 11:18:21.827640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:03.744 [2024-11-26 11:18:21.827978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.828312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.828626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to 
compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.828983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.829285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dbc4 00:08:03.744 [2024-11-26 11:18:21.829574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=13de 00:08:03.744 passed 00:08:03.744 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-26 11:18:21.829908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:08:03.744 [2024-11-26 11:18:21.830245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:08:03.744 [2024-11-26 11:18:21.830576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.830913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.831224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.831544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:08:03.744 [2024-11-26 11:18:21.831895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.744 [2024-11-26 11:18:21.832195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=3dd831eb 00:08:03.744 [2024-11-26 11:18:21.832530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.744 [2024-11-26 11:18:21.832848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:08:03.744 [2024-11-26 11:18:21.833190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.833517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.833845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.744 [2024-11-26 11:18:21.834170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:08:03.744 [2024-11-26 11:18:21.834496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.744 [2024-11-26 11:18:21.834792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=7c83b46d6f25bbcc 00:08:03.744 passed 00:08:03.744 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:03.744 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:03.744 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:03.744 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:03.744 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:03.744 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:03.744 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:03.744 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:03.744 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:03.744 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-26 11:18:21.879201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=f94c, Actual=fd4c 00:08:03.744 [2024-11-26 11:18:21.880355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e549, Actual=e149 00:08:03.744 [2024-11-26 11:18:21.881499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.882637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.883793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.744 [2024-11-26 11:18:21.884953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.744 [2024-11-26 11:18:21.886099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=dbc4 00:08:03.744 [2024-11-26 11:18:21.887232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=e8a5 00:08:03.744 [2024-11-26 11:18:21.888376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab757ed, Actual=1ab753ed 00:08:03.744 [2024-11-26 11:18:21.889516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e0f1342f, Actual=e0f1302f 00:08:03.744 [2024-11-26 11:18:21.890661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.891812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.892957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.744 [2024-11-26 11:18:21.894095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.744 [2024-11-26 11:18:21.895247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.744 [2024-11-26 11:18:21.896395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=bd843391 00:08:03.744 [2024-11-26 11:18:21.897525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.744 [2024-11-26 11:18:21.898662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=8bff3e5d80ee1629, Actual=8bff3e5d80ee1229 00:08:03.744 [2024-11-26 11:18:21.899825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.900991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.744 [2024-11-26 11:18:21.902139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000005f 00:08:03.744 [2024-11-26 11:18:21.903274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000005f 00:08:03.745 [2024-11-26 11:18:21.904435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.745 passed 00:08:03.745 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-26 11:18:21.905572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=a9d5c5587785a300 00:08:03.745 [2024-11-26 11:18:21.905941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f94c, Actual=fd4c 00:08:03.745 [2024-11-26 11:18:21.906263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=72d3, Actual=76d3 00:08:03.745 [2024-11-26 11:18:21.906554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.906864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.907194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:03.745 [2024-11-26 11:18:21.907515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:03.745 [2024-11-26 11:18:21.907836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=dbc4 00:08:03.745 [2024-11-26 11:18:21.908166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7f3f 00:08:03.745 [2024-11-26 11:18:21.908476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab757ed, Actual=1ab753ed 00:08:03.745 [2024-11-26 11:18:21.908781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c3472058, Actual=c3472458 00:08:03.745 [2024-11-26 11:18:21.909118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, 
Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.909414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.909725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:03.745 [2024-11-26 11:18:21.910044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:03.745 [2024-11-26 11:18:21.910338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=cc5e0a07 00:08:03.745 [2024-11-26 11:18:21.910633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=9e3227e6 00:08:03.745 [2024-11-26 11:18:21.910967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:03.745 [2024-11-26 11:18:21.911262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9e80a52e993d99ec, Actual=9e80a52e993d9dec 00:08:03.745 [2024-11-26 11:18:21.911548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.911857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.912178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:03.745 [2024-11-26 11:18:21.912459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:03.745 [2024-11-26 11:18:21.912751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:03.745 [2024-11-26 11:18:21.913056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=bcaa5e2b6e562cc5 00:08:03.745 passed 00:08:03.745 Test: dix_sec_512_md_0_error ...passed 00:08:03.745 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-11-26 11:18:21.913119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:03.745 passed 00:08:03.745 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:03.745 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:03.745 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:03.745 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:03.745 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:03.745 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:03.745 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:03.745 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:03.745 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-26 11:18:21.957027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=f94c, Actual=fd4c 00:08:03.745 [2024-11-26 11:18:21.958236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e549, Actual=e149 00:08:03.745 [2024-11-26 11:18:21.959362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.960528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.961666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.745 [2024-11-26 11:18:21.962819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.745 [2024-11-26 11:18:21.963962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=dbc4 00:08:03.745 [2024-11-26 11:18:21.965087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=e8a5 00:08:03.745 [2024-11-26 11:18:21.966241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab757ed, Actual=1ab753ed 00:08:03.745 [2024-11-26 11:18:21.967382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e0f1342f, Actual=e0f1302f 00:08:03.745 [2024-11-26 11:18:21.968529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.969679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:03.745 [2024-11-26 11:18:21.970814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.745 [2024-11-26 11:18:21.971976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=45f 00:08:03.745 [2024-11-26 11:18:21.973116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=cc5e0a07 00:08:04.005 [2024-11-26 11:18:21.974255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=bd843391 00:08:04.006 [2024-11-26 11:18:21.975399] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:04.006 [2024-11-26 11:18:21.976544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=8bff3e5d80ee1629, Actual=8bff3e5d80ee1229 00:08:04.006 [2024-11-26 11:18:21.977687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.978822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.979982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000005f 00:08:04.006 [2024-11-26 11:18:21.981119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000005f 00:08:04.006 [2024-11-26 11:18:21.982254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:04.006 passed 00:08:04.006 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-26 11:18:21.983376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=a9d5c5587785a300 00:08:04.006 [2024-11-26 11:18:21.983765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f94c, Actual=fd4c 00:08:04.006 [2024-11-26 11:18:21.984081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=72d3, Actual=76d3 00:08:04.006 [2024-11-26 11:18:21.984381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.984691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.985023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:04.006 [2024-11-26 11:18:21.985312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:04.006 [2024-11-26 11:18:21.985618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=dbc4 00:08:04.006 [2024-11-26 11:18:21.985937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7f3f 00:08:04.006 [2024-11-26 11:18:21.986239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab757ed, Actual=1ab753ed 00:08:04.006 [2024-11-26 11:18:21.986531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c3472058, Actual=c3472458 00:08:04.006 [2024-11-26 11:18:21.986816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.987110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.987437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:04.006 [2024-11-26 11:18:21.987750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=459 00:08:04.006 [2024-11-26 11:18:21.988062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=cc5e0a07 00:08:04.006 [2024-11-26 11:18:21.988346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=9e3227e6 00:08:04.006 [2024-11-26 11:18:21.988643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:08:04.006 [2024-11-26 11:18:21.988950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9e80a52e993d99ec, Actual=9e80a52e993d9dec 00:08:04.006 [2024-11-26 11:18:21.989248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.989532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=488 00:08:04.006 [2024-11-26 11:18:21.989825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:04.006 [2024-11-26 11:18:21.990141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000059 00:08:04.006 [2024-11-26 11:18:21.990438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=ce53a5adc6122861 00:08:04.006 [2024-11-26 11:18:21.990729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=bcaa5e2b6e562cc5 00:08:04.006 passed 00:08:04.006 Test: set_md_interleave_iovs_test ...passed 00:08:04.006 Test: set_md_interleave_iovs_split_test ...passed 00:08:04.006 Test: dif_generate_stream_pi_16_test ...passed 00:08:04.006 Test: dif_generate_stream_test ...passed 00:08:04.006 Test: set_md_interleave_iovs_alignment_test ...passed 00:08:04.006 Test: dif_generate_split_test ...[2024-11-26 11:18:21.998510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:08:04.006 passed 00:08:04.006 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:04.006 Test: dif_verify_split_test ...passed 00:08:04.006 Test: dif_verify_stream_multi_segments_test ...passed 00:08:04.006 Test: update_crc32c_pi_16_test ...passed 00:08:04.006 Test: update_crc32c_test ...passed 00:08:04.006 Test: dif_update_crc32c_split_test ...passed 00:08:04.006 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:04.006 Test: get_range_with_md_test ...passed 00:08:04.006 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:04.006 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:04.006 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:04.006 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:04.006 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:04.006 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:04.006 Test: dif_generate_and_verify_unmap_test ...passed 00:08:04.006 00:08:04.006 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.006 suites 1 1 n/a 0 0 00:08:04.006 tests 79 79 79 0 0 00:08:04.006 asserts 3584 3584 3584 0 n/a 00:08:04.006 00:08:04.006 Elapsed time = 0.361 seconds 00:08:04.006 11:18:22 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:04.006 00:08:04.006 00:08:04.006 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.006 http://cunit.sourceforge.net/ 00:08:04.006 00:08:04.006 00:08:04.006 Suite: iov 00:08:04.006 Test: test_single_iov ...passed 00:08:04.006 Test: test_simple_iov ...passed 00:08:04.006 Test: test_complex_iov ...passed 00:08:04.006 Test: test_iovs_to_buf ...passed 00:08:04.006 Test: test_buf_to_iovs ...passed 00:08:04.006 Test: test_memset ...passed 00:08:04.006 Test: test_iov_one ...passed 00:08:04.006 Test: test_iov_xfer ...passed 00:08:04.006 00:08:04.006 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.006 suites 1 1 n/a 0 0 00:08:04.006 tests 8 8 8 0 0 00:08:04.006 asserts 156 156 156 0 n/a 00:08:04.006 00:08:04.006 Elapsed time = 0.000 seconds 00:08:04.006 11:18:22 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:04.006 00:08:04.006 00:08:04.006 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.006 http://cunit.sourceforge.net/ 00:08:04.006 00:08:04.006 00:08:04.006 Suite: math 00:08:04.006 Test: test_serial_number_arithmetic ...passed 00:08:04.006 Suite: erase 00:08:04.006 Test: test_memset_s ...passed 00:08:04.006 00:08:04.006 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.006 suites 2 2 n/a 0 0 00:08:04.006 tests 2 2 2 0 0 00:08:04.006 asserts 18 18 18 0 n/a 00:08:04.006 00:08:04.006 Elapsed time = 0.000 seconds 00:08:04.006 11:18:22 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:04.006 00:08:04.006 00:08:04.006 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.006 http://cunit.sourceforge.net/ 00:08:04.006 00:08:04.006 00:08:04.006 Suite: pipe 00:08:04.006 Test: test_create_destroy ...passed 00:08:04.006 Test: test_write_get_buffer ...passed 00:08:04.006 Test: test_write_advance ...passed 00:08:04.006 Test: test_read_get_buffer ...passed 00:08:04.006 Test: test_read_advance ...passed 00:08:04.006 Test: test_data ...passed 00:08:04.006 00:08:04.006 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.006 suites 1 1 n/a 0 
0 00:08:04.006 tests 6 6 6 0 0 00:08:04.006 asserts 250 250 250 0 n/a 00:08:04.006 00:08:04.006 Elapsed time = 0.000 seconds 00:08:04.006 11:18:22 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:04.006 00:08:04.006 00:08:04.006 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.006 http://cunit.sourceforge.net/ 00:08:04.006 00:08:04.006 00:08:04.006 Suite: xor 00:08:04.006 Test: test_xor_gen ...passed 00:08:04.006 00:08:04.006 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.006 suites 1 1 n/a 0 0 00:08:04.006 tests 1 1 1 0 0 00:08:04.006 asserts 17 17 17 0 n/a 00:08:04.006 00:08:04.006 Elapsed time = 0.007 seconds 00:08:04.006 00:08:04.006 real 0m0.754s 00:08:04.006 user 0m0.540s 00:08:04.006 sys 0m0.220s 00:08:04.006 11:18:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.006 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.006 ************************************ 00:08:04.006 END TEST unittest_util 00:08:04.006 ************************************ 00:08:04.006 11:18:22 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:04.007 11:18:22 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:04.007 11:18:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.007 11:18:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.007 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.266 ************************************ 00:08:04.266 START TEST unittest_vhost 00:08:04.266 ************************************ 00:08:04.266 11:18:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:04.266 00:08:04.266 00:08:04.266 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.266 http://cunit.sourceforge.net/ 00:08:04.266 00:08:04.266 00:08:04.266 Suite: vhost_suite 00:08:04.266 Test: desc_to_iov_test ...[2024-11-26 11:18:22.268613] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:04.266 passed 00:08:04.266 Test: create_controller_test ...[2024-11-26 11:18:22.274455] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:04.266 [2024-11-26 11:18:22.274617] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:04.266 [2024-11-26 11:18:22.274781] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:04.266 [2024-11-26 11:18:22.274961] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:04.266 [2024-11-26 11:18:22.275061] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:04.266 [2024-11-26 11:18:22.275195] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-11-26 11:18:22.276701] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:04.266 passed 00:08:04.266 Test: session_find_by_vid_test ...passed 00:08:04.266 Test: remove_controller_test ...[2024-11-26 11:18:22.279465] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:04.266 passed 00:08:04.266 Test: vq_avail_ring_get_test ...passed 00:08:04.266 Test: vq_packed_ring_test ...passed 00:08:04.266 Test: vhost_blk_construct_test ...passed 00:08:04.266 00:08:04.267 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.267 suites 1 1 n/a 0 0 00:08:04.267 tests 7 7 7 0 0 00:08:04.267 asserts 145 145 145 0 n/a 00:08:04.267 00:08:04.267 Elapsed time = 0.016 seconds 00:08:04.267 00:08:04.267 real 0m0.060s 00:08:04.267 user 0m0.035s 00:08:04.267 sys 0m0.025s 00:08:04.267 11:18:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.267 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 ************************************ 00:08:04.267 END TEST unittest_vhost 00:08:04.267 ************************************ 00:08:04.267 11:18:22 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:04.267 11:18:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.267 11:18:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.267 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 ************************************ 00:08:04.267 START TEST unittest_dma 00:08:04.267 ************************************ 00:08:04.267 11:18:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:04.267 00:08:04.267 00:08:04.267 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.267 http://cunit.sourceforge.net/ 00:08:04.267 00:08:04.267 00:08:04.267 Suite: dma_suite 00:08:04.267 Test: test_dma ...[2024-11-26 11:18:22.370376] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:04.267 passed 00:08:04.267 00:08:04.267 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.267 suites 1 1 n/a 0 0 00:08:04.267 tests 1 1 1 0 0 00:08:04.267 asserts 50 50 50 0 n/a 00:08:04.267 00:08:04.267 Elapsed time = 0.000 seconds 00:08:04.267 00:08:04.267 real 0m0.031s 00:08:04.267 user 0m0.016s 00:08:04.267 sys 0m0.016s 00:08:04.267 11:18:22 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.267 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 ************************************ 00:08:04.267 END TEST unittest_dma 00:08:04.267 ************************************ 00:08:04.267 11:18:22 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:08:04.267 11:18:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.267 11:18:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.267 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 ************************************ 00:08:04.267 START TEST unittest_init 00:08:04.267 ************************************ 00:08:04.267 11:18:22 -- common/autotest_common.sh@1114 -- # unittest_init 00:08:04.267 11:18:22 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:04.267 00:08:04.267 00:08:04.267 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.267 http://cunit.sourceforge.net/ 00:08:04.267 00:08:04.267 00:08:04.267 Suite: subsystem_suite 00:08:04.267 Test: subsystem_sort_test_depends_on_single ...passed 00:08:04.267 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:04.267 Test: subsystem_sort_test_missing_dependency ...[2024-11-26 11:18:22.453313] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:04.267 passed 00:08:04.267 00:08:04.267 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.267 suites 1 1 n/a 0 0 00:08:04.267 tests 3 3 3 0 0 00:08:04.267 asserts 20 20 20 0 n/a 00:08:04.267 00:08:04.267 Elapsed time = 0.000 seconds 00:08:04.267 [2024-11-26 11:18:22.453572] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:04.267 00:08:04.267 real 0m0.037s 00:08:04.267 user 0m0.020s 00:08:04.267 sys 0m0.018s 00:08:04.267 11:18:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.267 11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.267 ************************************ 00:08:04.267 END TEST unittest_init 00:08:04.267 ************************************ 00:08:04.528 11:18:22 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:08:04.528 11:18:22 -- unit/unittest.sh@266 -- # hostname 00:08:04.528 11:18:22 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:04.528 geninfo: WARNING: invalid characters removed from testname! 
00:08:36.625 11:18:53 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:39.915 11:18:57 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:43.206 11:19:00 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:45.741 11:19:03 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:48.281 11:19:06 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:50.852 11:19:08 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:53.386 11:19:11 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:53.386 11:19:11 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:53.645 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:53.645 Found 313 entries. 00:08:53.645 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:53.645 Writing .css and .png files. 00:08:53.645 Generating output. 
00:08:53.904 Processing file include/linux/virtio_ring.h 00:08:54.162 Processing file include/spdk/trace.h 00:08:54.162 Processing file include/spdk/thread.h 00:08:54.162 Processing file include/spdk/nvmf_transport.h 00:08:54.162 Processing file include/spdk/endian.h 00:08:54.162 Processing file include/spdk/bdev_module.h 00:08:54.162 Processing file include/spdk/util.h 00:08:54.162 Processing file include/spdk/base64.h 00:08:54.162 Processing file include/spdk/mmio.h 00:08:54.162 Processing file include/spdk/nvme_spec.h 00:08:54.162 Processing file include/spdk/histogram_data.h 00:08:54.162 Processing file include/spdk/nvme.h 00:08:54.162 Processing file include/spdk_internal/sgl.h 00:08:54.162 Processing file include/spdk_internal/nvme_tcp.h 00:08:54.162 Processing file include/spdk_internal/sock.h 00:08:54.162 Processing file include/spdk_internal/rdma.h 00:08:54.162 Processing file include/spdk_internal/virtio.h 00:08:54.162 Processing file include/spdk_internal/utf.h 00:08:54.420 Processing file lib/accel/accel.c 00:08:54.420 Processing file lib/accel/accel_sw.c 00:08:54.420 Processing file lib/accel/accel_rpc.c 00:08:54.679 Processing file lib/bdev/scsi_nvme.c 00:08:54.679 Processing file lib/bdev/bdev.c 00:08:54.679 Processing file lib/bdev/bdev_zone.c 00:08:54.679 Processing file lib/bdev/part.c 00:08:54.679 Processing file lib/bdev/bdev_rpc.c 00:08:54.937 Processing file lib/blob/blob_bs_dev.c 00:08:54.937 Processing file lib/blob/blobstore.h 00:08:54.937 Processing file lib/blob/zeroes.c 00:08:54.937 Processing file lib/blob/blobstore.c 00:08:54.937 Processing file lib/blob/request.c 00:08:54.937 Processing file lib/blobfs/blobfs.c 00:08:54.937 Processing file lib/blobfs/tree.c 00:08:54.937 Processing file lib/conf/conf.c 00:08:54.937 Processing file lib/dma/dma.c 00:08:55.238 Processing file lib/env_dpdk/init.c 00:08:55.238 Processing file lib/env_dpdk/pci_idxd.c 00:08:55.238 Processing file lib/env_dpdk/pci_vmd.c 00:08:55.238 Processing file lib/env_dpdk/pci_ioat.c 00:08:55.238 Processing file lib/env_dpdk/pci_dpdk.c 00:08:55.238 Processing file lib/env_dpdk/pci_virtio.c 00:08:55.238 Processing file lib/env_dpdk/sigbus_handler.c 00:08:55.238 Processing file lib/env_dpdk/memory.c 00:08:55.238 Processing file lib/env_dpdk/env.c 00:08:55.238 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:55.238 Processing file lib/env_dpdk/pci.c 00:08:55.238 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:55.238 Processing file lib/env_dpdk/pci_event.c 00:08:55.238 Processing file lib/env_dpdk/threads.c 00:08:55.497 Processing file lib/event/app_rpc.c 00:08:55.497 Processing file lib/event/scheduler_static.c 00:08:55.497 Processing file lib/event/log_rpc.c 00:08:55.497 Processing file lib/event/reactor.c 00:08:55.497 Processing file lib/event/app.c 00:08:55.755 Processing file lib/ftl/ftl_l2p_flat.c 00:08:55.755 Processing file lib/ftl/ftl_io.c 00:08:55.755 Processing file lib/ftl/ftl_nv_cache.c 00:08:55.755 Processing file lib/ftl/ftl_init.c 00:08:55.755 Processing file lib/ftl/ftl_core.h 00:08:55.755 Processing file lib/ftl/ftl_band_ops.c 00:08:55.755 Processing file lib/ftl/ftl_io.h 00:08:55.755 Processing file lib/ftl/ftl_layout.c 00:08:55.755 Processing file lib/ftl/ftl_trace.c 00:08:55.755 Processing file lib/ftl/ftl_l2p_cache.c 00:08:55.755 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:55.755 Processing file lib/ftl/ftl_writer.h 00:08:55.755 Processing file lib/ftl/ftl_band.h 00:08:55.755 Processing file lib/ftl/ftl_debug.c 00:08:55.755 Processing file lib/ftl/ftl_rq.c 00:08:55.755 
Processing file lib/ftl/ftl_core.c 00:08:55.755 Processing file lib/ftl/ftl_sb.c 00:08:55.755 Processing file lib/ftl/ftl_debug.h 00:08:55.755 Processing file lib/ftl/ftl_p2l.c 00:08:55.755 Processing file lib/ftl/ftl_band.c 00:08:55.755 Processing file lib/ftl/ftl_l2p.c 00:08:55.755 Processing file lib/ftl/ftl_reloc.c 00:08:55.755 Processing file lib/ftl/ftl_nv_cache.h 00:08:55.755 Processing file lib/ftl/ftl_writer.c 00:08:55.755 Processing file lib/ftl/base/ftl_base_dev.c 00:08:55.755 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:56.012 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:56.270 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:56.270 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:56.270 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:56.270 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:56.270 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:56.270 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:56.527 Processing file lib/ftl/utils/ftl_property.c 00:08:56.527 Processing file lib/ftl/utils/ftl_df.h 00:08:56.527 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:56.527 Processing file lib/ftl/utils/ftl_conf.c 00:08:56.527 Processing file lib/ftl/utils/ftl_mempool.c 00:08:56.528 Processing file lib/ftl/utils/ftl_property.h 00:08:56.528 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:56.528 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:56.528 Processing file lib/ftl/utils/ftl_md.c 00:08:56.528 Processing file lib/idxd/idxd_internal.h 00:08:56.528 Processing file lib/idxd/idxd.c 00:08:56.528 Processing file lib/idxd/idxd_kernel.c 00:08:56.528 Processing file lib/idxd/idxd_user.c 00:08:56.786 Processing file lib/init/rpc.c 00:08:56.786 Processing file lib/init/subsystem.c 00:08:56.786 Processing file lib/init/subsystem_rpc.c 00:08:56.786 Processing file lib/init/json_config.c 00:08:56.786 Processing file lib/ioat/ioat_internal.h 00:08:56.786 Processing file lib/ioat/ioat.c 00:08:57.045 Processing file lib/iscsi/iscsi_rpc.c 00:08:57.045 Processing file lib/iscsi/task.h 00:08:57.045 Processing file lib/iscsi/conn.c 00:08:57.045 Processing file lib/iscsi/iscsi.h 00:08:57.045 Processing file lib/iscsi/param.c 00:08:57.045 Processing file lib/iscsi/tgt_node.c 00:08:57.045 Processing file lib/iscsi/portal_grp.c 00:08:57.045 Processing file lib/iscsi/iscsi.c 00:08:57.045 Processing file lib/iscsi/md5.c 00:08:57.045 Processing file lib/iscsi/init_grp.c 00:08:57.045 Processing file lib/iscsi/task.c 00:08:57.045 Processing file lib/iscsi/iscsi_subsystem.c 00:08:57.304 Processing file lib/json/json_parse.c 00:08:57.304 Processing file lib/json/json_util.c 00:08:57.304 Processing file lib/json/json_write.c 00:08:57.304 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:57.304 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:57.304 Processing 
file lib/jsonrpc/jsonrpc_server.c 00:08:57.304 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:57.304 Processing file lib/log/log.c 00:08:57.304 Processing file lib/log/log_deprecated.c 00:08:57.304 Processing file lib/log/log_flags.c 00:08:57.564 Processing file lib/lvol/lvol.c 00:08:57.564 Processing file lib/nbd/nbd_rpc.c 00:08:57.564 Processing file lib/nbd/nbd.c 00:08:57.564 Processing file lib/notify/notify_rpc.c 00:08:57.564 Processing file lib/notify/notify.c 00:08:58.500 Processing file lib/nvme/nvme_tcp.c 00:08:58.500 Processing file lib/nvme/nvme_discovery.c 00:08:58.500 Processing file lib/nvme/nvme_cuse.c 00:08:58.500 Processing file lib/nvme/nvme_ctrlr.c 00:08:58.500 Processing file lib/nvme/nvme_ns.c 00:08:58.500 Processing file lib/nvme/nvme_rdma.c 00:08:58.500 Processing file lib/nvme/nvme_vfio_user.c 00:08:58.500 Processing file lib/nvme/nvme_fabric.c 00:08:58.500 Processing file lib/nvme/nvme_internal.h 00:08:58.500 Processing file lib/nvme/nvme_quirks.c 00:08:58.500 Processing file lib/nvme/nvme_poll_group.c 00:08:58.500 Processing file lib/nvme/nvme_qpair.c 00:08:58.500 Processing file lib/nvme/nvme_pcie.c 00:08:58.500 Processing file lib/nvme/nvme_pcie_internal.h 00:08:58.500 Processing file lib/nvme/nvme.c 00:08:58.500 Processing file lib/nvme/nvme_pcie_common.c 00:08:58.500 Processing file lib/nvme/nvme_opal.c 00:08:58.500 Processing file lib/nvme/nvme_zns.c 00:08:58.500 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:58.500 Processing file lib/nvme/nvme_transport.c 00:08:58.500 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:58.500 Processing file lib/nvme/nvme_io_msg.c 00:08:58.500 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:58.500 Processing file lib/nvme/nvme_ns_cmd.c 00:08:58.759 Processing file lib/nvmf/subsystem.c 00:08:58.759 Processing file lib/nvmf/ctrlr_bdev.c 00:08:58.759 Processing file lib/nvmf/nvmf_rpc.c 00:08:58.759 Processing file lib/nvmf/transport.c 00:08:58.759 Processing file lib/nvmf/tcp.c 00:08:58.759 Processing file lib/nvmf/ctrlr.c 00:08:58.759 Processing file lib/nvmf/ctrlr_discovery.c 00:08:58.759 Processing file lib/nvmf/rdma.c 00:08:58.759 Processing file lib/nvmf/nvmf_internal.h 00:08:58.759 Processing file lib/nvmf/nvmf.c 00:08:59.018 Processing file lib/rdma/common.c 00:08:59.018 Processing file lib/rdma/rdma_verbs.c 00:08:59.018 Processing file lib/rpc/rpc.c 00:08:59.277 Processing file lib/scsi/dev.c 00:08:59.277 Processing file lib/scsi/lun.c 00:08:59.277 Processing file lib/scsi/port.c 00:08:59.277 Processing file lib/scsi/scsi_bdev.c 00:08:59.277 Processing file lib/scsi/task.c 00:08:59.277 Processing file lib/scsi/scsi_rpc.c 00:08:59.277 Processing file lib/scsi/scsi_pr.c 00:08:59.277 Processing file lib/scsi/scsi.c 00:08:59.277 Processing file lib/sock/sock.c 00:08:59.277 Processing file lib/sock/sock_rpc.c 00:08:59.277 Processing file lib/thread/iobuf.c 00:08:59.277 Processing file lib/thread/thread.c 00:08:59.537 Processing file lib/trace/trace_rpc.c 00:08:59.537 Processing file lib/trace/trace_flags.c 00:08:59.537 Processing file lib/trace/trace.c 00:08:59.537 Processing file lib/trace_parser/trace.cpp 00:08:59.537 Processing file lib/ublk/ublk_rpc.c 00:08:59.537 Processing file lib/ublk/ublk.c 00:08:59.537 Processing file lib/ut/ut.c 00:08:59.537 Processing file lib/ut_mock/mock.c 00:09:00.105 Processing file lib/util/crc32c.c 00:09:00.105 Processing file lib/util/crc16.c 00:09:00.105 Processing file lib/util/crc32.c 00:09:00.105 Processing file lib/util/iov.c 00:09:00.105 Processing file lib/util/fd.c 
00:09:00.105 Processing file lib/util/crc64.c 00:09:00.105 Processing file lib/util/dif.c 00:09:00.105 Processing file lib/util/crc32_ieee.c 00:09:00.105 Processing file lib/util/strerror_tls.c 00:09:00.105 Processing file lib/util/xor.c 00:09:00.105 Processing file lib/util/fd_group.c 00:09:00.105 Processing file lib/util/bit_array.c 00:09:00.105 Processing file lib/util/hexlify.c 00:09:00.105 Processing file lib/util/math.c 00:09:00.105 Processing file lib/util/cpuset.c 00:09:00.105 Processing file lib/util/uuid.c 00:09:00.105 Processing file lib/util/zipf.c 00:09:00.105 Processing file lib/util/base64.c 00:09:00.105 Processing file lib/util/file.c 00:09:00.105 Processing file lib/util/string.c 00:09:00.105 Processing file lib/util/pipe.c 00:09:00.105 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:00.105 Processing file lib/vfio_user/host/vfio_user.c 00:09:00.105 Processing file lib/vhost/rte_vhost_user.c 00:09:00.105 Processing file lib/vhost/vhost.c 00:09:00.105 Processing file lib/vhost/vhost_internal.h 00:09:00.105 Processing file lib/vhost/vhost_scsi.c 00:09:00.105 Processing file lib/vhost/vhost_rpc.c 00:09:00.105 Processing file lib/vhost/vhost_blk.c 00:09:00.364 Processing file lib/virtio/virtio_pci.c 00:09:00.364 Processing file lib/virtio/virtio_vfio_user.c 00:09:00.364 Processing file lib/virtio/virtio.c 00:09:00.364 Processing file lib/virtio/virtio_vhost_user.c 00:09:00.364 Processing file lib/vmd/led.c 00:09:00.364 Processing file lib/vmd/vmd.c 00:09:00.364 Processing file module/accel/dsa/accel_dsa.c 00:09:00.364 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:00.364 Processing file module/accel/error/accel_error.c 00:09:00.364 Processing file module/accel/error/accel_error_rpc.c 00:09:00.623 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:00.623 Processing file module/accel/iaa/accel_iaa.c 00:09:00.623 Processing file module/accel/ioat/accel_ioat.c 00:09:00.623 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:00.623 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:00.623 Processing file module/bdev/aio/bdev_aio.c 00:09:00.623 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:00.623 Processing file module/bdev/delay/vbdev_delay.c 00:09:00.882 Processing file module/bdev/error/vbdev_error.c 00:09:00.882 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:00.882 Processing file module/bdev/ftl/bdev_ftl.c 00:09:00.882 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:00.882 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:00.882 Processing file module/bdev/gpt/gpt.c 00:09:00.882 Processing file module/bdev/gpt/gpt.h 00:09:01.140 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:01.140 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:01.140 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:01.140 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:01.140 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:01.140 Processing file module/bdev/malloc/bdev_malloc.c 00:09:01.399 Processing file module/bdev/null/bdev_null.c 00:09:01.399 Processing file module/bdev/null/bdev_null_rpc.c 00:09:01.658 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:01.658 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:01.658 Processing file module/bdev/nvme/bdev_nvme.c 00:09:01.658 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:01.658 Processing file module/bdev/nvme/vbdev_opal.c 00:09:01.658 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:01.658 Processing file module/bdev/nvme/nvme_rpc.c 
00:09:01.658 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:01.658 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:01.917 Processing file module/bdev/raid/raid5f.c 00:09:01.917 Processing file module/bdev/raid/concat.c 00:09:01.917 Processing file module/bdev/raid/bdev_raid.h 00:09:01.917 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:01.917 Processing file module/bdev/raid/raid0.c 00:09:01.917 Processing file module/bdev/raid/raid1.c 00:09:01.917 Processing file module/bdev/raid/bdev_raid.c 00:09:01.917 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:01.917 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:01.917 Processing file module/bdev/split/vbdev_split.c 00:09:01.917 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:01.917 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:01.917 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:02.176 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:02.176 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:02.176 Processing file module/blob/bdev/blob_bdev.c 00:09:02.176 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:02.176 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:02.176 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:02.434 Processing file module/event/subsystems/accel/accel.c 00:09:02.434 Processing file module/event/subsystems/bdev/bdev.c 00:09:02.434 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:02.434 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:02.434 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:02.434 Processing file module/event/subsystems/nbd/nbd.c 00:09:02.693 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:02.693 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:02.693 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:02.693 Processing file module/event/subsystems/scsi/scsi.c 00:09:02.693 Processing file module/event/subsystems/sock/sock.c 00:09:02.693 Processing file module/event/subsystems/ublk/ublk.c 00:09:02.693 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:02.952 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:02.952 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:02.952 Processing file module/event/subsystems/vmd/vmd.c 00:09:02.952 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:02.952 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:02.952 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:02.952 Processing file module/sock/sock_kernel.h 00:09:03.211 Processing file module/sock/posix/posix.c 00:09:03.211 Writing directory view page. 00:09:03.211 Overall coverage rate: 00:09:03.211 lines......: 38.6% (39266 of 101740 lines) 00:09:03.211 functions..: 42.2% (3587 of 8494 functions) 00:09:03.211 00:09:03.211 00:09:03.211 ===================== 00:09:03.211 All unit tests passed 00:09:03.211 ===================== 00:09:03.211 11:19:21 -- unit/unittest.sh@277 -- # set +x 00:09:03.211 WARN: lcov not installed or SPDK built without coverage! 
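Note: the "Processing file ..." entries above are genhtml rendering one page per source file for the HTML coverage report, and the two totals are plain ratios: 39266 of 101740 instrumented lines is ~38.6%, and 3587 of 8494 functions is ~42.2%. A minimal sketch of regenerating such a report by hand, assuming a tree configured with --enable-coverage and lcov/genhtml on PATH (the output paths here are illustrative, not taken from this job):

  # collect the .gcda counters left behind by the instrumented test run
  lcov --capture --directory . --output-file coverage.info
  # render the HTML report; this is the step that prints the "Processing file ..." lines
  genhtml coverage.info --output-directory ./coverage_html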
00:09:03.211 00:09:03.211 00:09:03.211 00:09:03.211 real 3m3.676s 00:09:03.211 user 2m40.064s 00:09:03.211 sys 0m14.675s 00:09:03.211 11:19:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.211 ************************************ 00:09:03.211 END TEST unittest 00:09:03.211 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.211 ************************************ 00:09:03.211 11:19:21 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:09:03.211 11:19:21 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:09:03.211 11:19:21 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:09:03.211 11:19:21 -- spdk/autotest.sh@160 -- # timing_enter lib 00:09:03.211 11:19:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.211 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.211 11:19:21 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:03.211 11:19:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.211 11:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.211 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.211 ************************************ 00:09:03.211 START TEST env 00:09:03.211 ************************************ 00:09:03.211 11:19:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:03.211 * Looking for test storage... 00:09:03.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:03.211 11:19:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:03.211 11:19:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:03.211 11:19:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.488 11:19:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.488 11:19:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.488 11:19:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.488 11:19:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.488 11:19:21 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.488 11:19:21 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.488 11:19:21 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.488 11:19:21 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.488 11:19:21 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.488 11:19:21 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.488 11:19:21 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.488 11:19:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.488 11:19:21 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.488 11:19:21 -- scripts/common.sh@344 -- # : 1 00:09:03.488 11:19:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.488 11:19:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.488 11:19:21 -- scripts/common.sh@364 -- # decimal 1 00:09:03.488 11:19:21 -- scripts/common.sh@352 -- # local d=1 00:09:03.488 11:19:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.488 11:19:21 -- scripts/common.sh@354 -- # echo 1 00:09:03.488 11:19:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.488 11:19:21 -- scripts/common.sh@365 -- # decimal 2 00:09:03.488 11:19:21 -- scripts/common.sh@352 -- # local d=2 00:09:03.488 11:19:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.488 11:19:21 -- scripts/common.sh@354 -- # echo 2 00:09:03.488 11:19:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.488 11:19:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.488 11:19:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.488 11:19:21 -- scripts/common.sh@367 -- # return 0 00:09:03.488 11:19:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.488 11:19:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.488 --rc genhtml_branch_coverage=1 00:09:03.488 --rc genhtml_function_coverage=1 00:09:03.488 --rc genhtml_legend=1 00:09:03.488 --rc geninfo_all_blocks=1 00:09:03.488 --rc geninfo_unexecuted_blocks=1 00:09:03.488 00:09:03.488 ' 00:09:03.488 11:19:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.488 --rc genhtml_branch_coverage=1 00:09:03.488 --rc genhtml_function_coverage=1 00:09:03.488 --rc genhtml_legend=1 00:09:03.488 --rc geninfo_all_blocks=1 00:09:03.488 --rc geninfo_unexecuted_blocks=1 00:09:03.488 00:09:03.488 ' 00:09:03.488 11:19:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.488 --rc genhtml_branch_coverage=1 00:09:03.488 --rc genhtml_function_coverage=1 00:09:03.488 --rc genhtml_legend=1 00:09:03.488 --rc geninfo_all_blocks=1 00:09:03.488 --rc geninfo_unexecuted_blocks=1 00:09:03.488 00:09:03.488 ' 00:09:03.488 11:19:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.488 --rc genhtml_branch_coverage=1 00:09:03.488 --rc genhtml_function_coverage=1 00:09:03.488 --rc genhtml_legend=1 00:09:03.488 --rc geninfo_all_blocks=1 00:09:03.488 --rc geninfo_unexecuted_blocks=1 00:09:03.488 00:09:03.488 ' 00:09:03.488 11:19:21 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:03.488 11:19:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.488 11:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.488 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.488 ************************************ 00:09:03.488 START TEST env_memory 00:09:03.488 ************************************ 00:09:03.488 11:19:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:03.488 00:09:03.488 00:09:03.488 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.488 http://cunit.sourceforge.net/ 00:09:03.489 00:09:03.489 00:09:03.489 Suite: memory 00:09:03.489 Test: alloc and free memory map ...[2024-11-26 11:19:21.632620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:03.489 passed 00:09:03.489 Test: mem 
map translation ...[2024-11-26 11:19:21.696067] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:03.489 [2024-11-26 11:19:21.696157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:03.489 [2024-11-26 11:19:21.696285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:03.489 [2024-11-26 11:19:21.696324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:03.756 passed 00:09:03.756 Test: mem map registration ...[2024-11-26 11:19:21.797092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:03.756 [2024-11-26 11:19:21.797165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:03.756 passed 00:09:03.756 Test: mem map adjacent registrations ...passed 00:09:03.756 00:09:03.756 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.756 suites 1 1 n/a 0 0 00:09:03.756 tests 4 4 4 0 0 00:09:03.756 asserts 152 152 152 0 n/a 00:09:03.756 00:09:03.756 Elapsed time = 0.344 seconds 00:09:03.756 00:09:03.756 real 0m0.380s 00:09:03.756 user 0m0.359s 00:09:03.756 sys 0m0.022s 00:09:03.756 11:19:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.756 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.756 ************************************ 00:09:03.756 END TEST env_memory 00:09:03.756 ************************************ 00:09:03.756 11:19:21 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:03.756 11:19:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.756 11:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.756 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.756 ************************************ 00:09:03.756 START TEST env_vtophys 00:09:03.756 ************************************ 00:09:03.756 11:19:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:04.015 EAL: lib.eal log level changed from notice to debug 00:09:04.015 EAL: Detected lcore 0 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 1 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 2 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 3 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 4 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 5 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 6 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 7 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 8 as core 0 on socket 0 00:09:04.015 EAL: Detected lcore 9 as core 0 on socket 0 00:09:04.015 EAL: Maximum logical cores by configuration: 128 00:09:04.015 EAL: Detected CPU lcores: 10 00:09:04.015 EAL: Detected NUMA nodes: 1 00:09:04.015 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:04.015 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:04.015 EAL: Checking presence of .so 'librte_eal.so' 00:09:04.015 EAL: Detected static linkage of DPDK 00:09:04.015 EAL: No shared files mode enabled, IPC will be 
disabled 00:09:04.015 EAL: Selected IOVA mode 'PA' 00:09:04.015 EAL: Probing VFIO support... 00:09:04.015 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:04.015 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:04.015 EAL: Ask a virtual area of 0x2e000 bytes 00:09:04.015 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:04.015 EAL: Setting up physically contiguous memory... 00:09:04.015 EAL: Setting maximum number of open files to 1048576 00:09:04.015 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:04.015 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:04.015 EAL: Ask a virtual area of 0x61000 bytes 00:09:04.015 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:04.015 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:04.015 EAL: Ask a virtual area of 0x400000000 bytes 00:09:04.015 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:04.015 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:04.015 EAL: Ask a virtual area of 0x61000 bytes 00:09:04.015 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:04.015 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:04.016 EAL: Ask a virtual area of 0x400000000 bytes 00:09:04.016 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:04.016 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:04.016 EAL: Ask a virtual area of 0x61000 bytes 00:09:04.016 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:04.016 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:04.016 EAL: Ask a virtual area of 0x400000000 bytes 00:09:04.016 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:04.016 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:04.016 EAL: Ask a virtual area of 0x61000 bytes 00:09:04.016 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:04.016 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:04.016 EAL: Ask a virtual area of 0x400000000 bytes 00:09:04.016 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:04.016 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:04.016 EAL: Hugepages will be freed exactly as allocated. 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: TSC frequency is ~2200000 KHz 00:09:04.016 EAL: Main lcore 0 is ready (tid=7654688dea80;cpuset=[0]) 00:09:04.016 EAL: Trying to obtain current memory policy. 00:09:04.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.016 EAL: Restoring previous memory policy: 0 00:09:04.016 EAL: request: mp_malloc_sync 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: Heap on socket 0 was expanded by 2MB 00:09:04.016 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:04.016 EAL: Mem event callback 'spdk:(nil)' registered 00:09:04.016 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:04.016 00:09:04.016 00:09:04.016 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.016 http://cunit.sourceforge.net/ 00:09:04.016 00:09:04.016 00:09:04.016 Suite: components_suite 00:09:04.016 Test: vtophys_malloc_test ...passed 00:09:04.016 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:09:04.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.016 EAL: Restoring previous memory policy: 4 00:09:04.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.016 EAL: request: mp_malloc_sync 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: Heap on socket 0 was expanded by 4MB 00:09:04.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.016 EAL: request: mp_malloc_sync 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: Heap on socket 0 was shrunk by 4MB 00:09:04.016 EAL: Trying to obtain current memory policy. 00:09:04.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.016 EAL: Restoring previous memory policy: 4 00:09:04.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.016 EAL: request: mp_malloc_sync 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: Heap on socket 0 was expanded by 6MB 00:09:04.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.016 EAL: request: mp_malloc_sync 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: Heap on socket 0 was shrunk by 6MB 00:09:04.016 EAL: Trying to obtain current memory policy. 00:09:04.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.016 EAL: Restoring previous memory policy: 4 00:09:04.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.016 EAL: request: mp_malloc_sync 00:09:04.016 EAL: No shared files mode enabled, IPC is disabled 00:09:04.016 EAL: Heap on socket 0 was expanded by 10MB 00:09:04.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.275 EAL: No shared files mode enabled, IPC is disabled 00:09:04.275 EAL: Heap on socket 0 was shrunk by 10MB 00:09:04.275 EAL: Trying to obtain current memory policy. 00:09:04.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.275 EAL: Restoring previous memory policy: 4 00:09:04.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.275 EAL: No shared files mode enabled, IPC is disabled 00:09:04.275 EAL: Heap on socket 0 was expanded by 18MB 00:09:04.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.275 EAL: No shared files mode enabled, IPC is disabled 00:09:04.275 EAL: Heap on socket 0 was shrunk by 18MB 00:09:04.275 EAL: Trying to obtain current memory policy. 00:09:04.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.275 EAL: Restoring previous memory policy: 4 00:09:04.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.275 EAL: No shared files mode enabled, IPC is disabled 00:09:04.275 EAL: Heap on socket 0 was expanded by 34MB 00:09:04.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.275 EAL: No shared files mode enabled, IPC is disabled 00:09:04.275 EAL: Heap on socket 0 was shrunk by 34MB 00:09:04.275 EAL: Trying to obtain current memory policy. 
00:09:04.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.275 EAL: Restoring previous memory policy: 4 00:09:04.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.275 EAL: No shared files mode enabled, IPC is disabled 00:09:04.275 EAL: Heap on socket 0 was expanded by 66MB 00:09:04.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.275 EAL: request: mp_malloc_sync 00:09:04.276 EAL: No shared files mode enabled, IPC is disabled 00:09:04.276 EAL: Heap on socket 0 was shrunk by 66MB 00:09:04.276 EAL: Trying to obtain current memory policy. 00:09:04.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.276 EAL: Restoring previous memory policy: 4 00:09:04.276 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.276 EAL: request: mp_malloc_sync 00:09:04.276 EAL: No shared files mode enabled, IPC is disabled 00:09:04.276 EAL: Heap on socket 0 was expanded by 130MB 00:09:04.276 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.276 EAL: request: mp_malloc_sync 00:09:04.276 EAL: No shared files mode enabled, IPC is disabled 00:09:04.276 EAL: Heap on socket 0 was shrunk by 130MB 00:09:04.276 EAL: Trying to obtain current memory policy. 00:09:04.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.276 EAL: Restoring previous memory policy: 4 00:09:04.276 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.276 EAL: request: mp_malloc_sync 00:09:04.276 EAL: No shared files mode enabled, IPC is disabled 00:09:04.276 EAL: Heap on socket 0 was expanded by 258MB 00:09:04.276 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.276 EAL: request: mp_malloc_sync 00:09:04.276 EAL: No shared files mode enabled, IPC is disabled 00:09:04.276 EAL: Heap on socket 0 was shrunk by 258MB 00:09:04.276 EAL: Trying to obtain current memory policy. 00:09:04.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.536 EAL: Restoring previous memory policy: 4 00:09:04.536 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.536 EAL: request: mp_malloc_sync 00:09:04.536 EAL: No shared files mode enabled, IPC is disabled 00:09:04.536 EAL: Heap on socket 0 was expanded by 514MB 00:09:04.536 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.536 EAL: request: mp_malloc_sync 00:09:04.536 EAL: No shared files mode enabled, IPC is disabled 00:09:04.536 EAL: Heap on socket 0 was shrunk by 514MB 00:09:04.536 EAL: Trying to obtain current memory policy. 
00:09:04.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.795 EAL: Restoring previous memory policy: 4 00:09:04.795 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.795 EAL: request: mp_malloc_sync 00:09:04.795 EAL: No shared files mode enabled, IPC is disabled 00:09:04.795 EAL: Heap on socket 0 was expanded by 1026MB 00:09:04.795 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.055 passed 00:09:05.055 00:09:05.055 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.055 suites 1 1 n/a 0 0 00:09:05.055 tests 2 2 2 0 0 00:09:05.055 asserts 5442 5442 5442 0 n/a 00:09:05.055 00:09:05.055 Elapsed time = 0.955 seconds 00:09:05.055 EAL: request: mp_malloc_sync 00:09:05.055 EAL: No shared files mode enabled, IPC is disabled 00:09:05.055 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:05.055 EAL: Calling mem event callback 'spdk:(nil)' 00:09:05.055 EAL: request: mp_malloc_sync 00:09:05.055 EAL: No shared files mode enabled, IPC is disabled 00:09:05.055 EAL: Heap on socket 0 was shrunk by 2MB 00:09:05.055 EAL: No shared files mode enabled, IPC is disabled 00:09:05.055 EAL: No shared files mode enabled, IPC is disabled 00:09:05.055 EAL: No shared files mode enabled, IPC is disabled 00:09:05.055 00:09:05.055 real 0m1.201s 00:09:05.055 user 0m0.597s 00:09:05.055 sys 0m0.476s 00:09:05.055 11:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.055 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.055 ************************************ 00:09:05.055 END TEST env_vtophys 00:09:05.055 ************************************ 00:09:05.055 11:19:23 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:05.055 11:19:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:05.055 11:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.055 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.055 ************************************ 00:09:05.055 START TEST env_pci 00:09:05.055 ************************************ 00:09:05.055 11:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:05.055 00:09:05.055 00:09:05.055 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.055 http://cunit.sourceforge.net/ 00:09:05.055 00:09:05.055 00:09:05.055 Suite: pci 00:09:05.055 Test: pci_hook ...[2024-11-26 11:19:23.272298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72085 has claimed it 00:09:05.314 passed 00:09:05.314 00:09:05.314 EAL: Cannot find device (10000:00:01.0) 00:09:05.314 EAL: Failed to attach device on primary process 00:09:05.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.314 suites 1 1 n/a 0 0 00:09:05.314 tests 1 1 1 0 0 00:09:05.314 asserts 25 25 25 0 n/a 00:09:05.314 00:09:05.314 Elapsed time = 0.006 seconds 00:09:05.314 00:09:05.314 real 0m0.073s 00:09:05.314 user 0m0.036s 00:09:05.314 sys 0m0.038s 00:09:05.314 11:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.314 ************************************ 00:09:05.314 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.314 END TEST env_pci 00:09:05.314 ************************************ 00:09:05.314 11:19:23 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:05.314 11:19:23 -- env/env.sh@15 -- # uname 00:09:05.314 11:19:23 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:05.314 11:19:23 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:05.314 11:19:23 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:05.314 11:19:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:05.314 11:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.314 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.314 ************************************ 00:09:05.314 START TEST env_dpdk_post_init 00:09:05.314 ************************************ 00:09:05.315 11:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:05.315 EAL: Detected CPU lcores: 10 00:09:05.315 EAL: Detected NUMA nodes: 1 00:09:05.315 EAL: Detected static linkage of DPDK 00:09:05.315 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:05.315 EAL: Selected IOVA mode 'PA' 00:09:05.574 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:05.574 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:09:05.574 Starting DPDK initialization... 00:09:05.574 Starting SPDK post initialization... 00:09:05.574 SPDK NVMe probe 00:09:05.574 Attaching to 0000:00:06.0 00:09:05.574 Attached to 0000:00:06.0 00:09:05.574 Cleaning up... 00:09:05.574 00:09:05.574 real 0m0.252s 00:09:05.574 user 0m0.076s 00:09:05.574 sys 0m0.077s 00:09:05.574 11:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.574 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.574 ************************************ 00:09:05.574 END TEST env_dpdk_post_init 00:09:05.574 ************************************ 00:09:05.574 11:19:23 -- env/env.sh@26 -- # uname 00:09:05.574 11:19:23 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:05.574 11:19:23 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:05.574 11:19:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:05.574 11:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.574 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.574 ************************************ 00:09:05.574 START TEST env_mem_callbacks 00:09:05.574 ************************************ 00:09:05.574 11:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:05.574 EAL: Detected CPU lcores: 10 00:09:05.574 EAL: Detected NUMA nodes: 1 00:09:05.574 EAL: Detected static linkage of DPDK 00:09:05.574 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:05.574 EAL: Selected IOVA mode 'PA' 00:09:05.834 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:05.834 00:09:05.834 00:09:05.834 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.834 http://cunit.sourceforge.net/ 00:09:05.834 00:09:05.834 00:09:05.834 Suite: memory 00:09:05.834 Test: test ... 
00:09:05.834 register 0x200000200000 2097152 00:09:05.834 malloc 3145728 00:09:05.834 register 0x200000400000 4194304 00:09:05.834 buf 0x200000500000 len 3145728 PASSED 00:09:05.834 malloc 64 00:09:05.834 buf 0x2000004fff40 len 64 PASSED 00:09:05.834 malloc 4194304 00:09:05.834 register 0x200000800000 6291456 00:09:05.834 buf 0x200000a00000 len 4194304 PASSED 00:09:05.834 free 0x200000500000 3145728 00:09:05.834 free 0x2000004fff40 64 00:09:05.834 unregister 0x200000400000 4194304 PASSED 00:09:05.834 free 0x200000a00000 4194304 00:09:05.834 unregister 0x200000800000 6291456 PASSED 00:09:05.834 malloc 8388608 00:09:05.834 register 0x200000400000 10485760 00:09:05.834 buf 0x200000600000 len 8388608 PASSED 00:09:05.834 free 0x200000600000 8388608 00:09:05.834 unregister 0x200000400000 10485760 PASSED 00:09:05.834 passed 00:09:05.834 00:09:05.834 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.834 suites 1 1 n/a 0 0 00:09:05.834 tests 1 1 1 0 0 00:09:05.834 asserts 15 15 15 0 n/a 00:09:05.834 00:09:05.834 Elapsed time = 0.012 seconds 00:09:05.834 00:09:05.834 real 0m0.202s 00:09:05.834 user 0m0.036s 00:09:05.834 sys 0m0.065s 00:09:05.834 11:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.834 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.834 ************************************ 00:09:05.834 END TEST env_mem_callbacks 00:09:05.834 ************************************ 00:09:05.834 00:09:05.834 real 0m2.569s 00:09:05.834 user 0m1.296s 00:09:05.834 sys 0m0.953s 00:09:05.834 11:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.834 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.834 ************************************ 00:09:05.834 END TEST env 00:09:05.834 ************************************ 00:09:05.834 11:19:23 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:05.834 11:19:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:05.834 11:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.834 11:19:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.834 ************************************ 00:09:05.834 START TEST rpc 00:09:05.834 ************************************ 00:09:05.834 11:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:05.834 * Looking for test storage... 
00:09:05.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:05.834 11:19:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:05.834 11:19:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:05.834 11:19:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:06.093 11:19:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:06.094 11:19:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:06.094 11:19:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:06.094 11:19:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:06.094 11:19:24 -- scripts/common.sh@335 -- # IFS=.-: 00:09:06.094 11:19:24 -- scripts/common.sh@335 -- # read -ra ver1 00:09:06.094 11:19:24 -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.094 11:19:24 -- scripts/common.sh@336 -- # read -ra ver2 00:09:06.094 11:19:24 -- scripts/common.sh@337 -- # local 'op=<' 00:09:06.094 11:19:24 -- scripts/common.sh@339 -- # ver1_l=2 00:09:06.094 11:19:24 -- scripts/common.sh@340 -- # ver2_l=1 00:09:06.094 11:19:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:06.094 11:19:24 -- scripts/common.sh@343 -- # case "$op" in 00:09:06.094 11:19:24 -- scripts/common.sh@344 -- # : 1 00:09:06.094 11:19:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:06.094 11:19:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.094 11:19:24 -- scripts/common.sh@364 -- # decimal 1 00:09:06.094 11:19:24 -- scripts/common.sh@352 -- # local d=1 00:09:06.094 11:19:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.094 11:19:24 -- scripts/common.sh@354 -- # echo 1 00:09:06.094 11:19:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:06.094 11:19:24 -- scripts/common.sh@365 -- # decimal 2 00:09:06.094 11:19:24 -- scripts/common.sh@352 -- # local d=2 00:09:06.094 11:19:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.094 11:19:24 -- scripts/common.sh@354 -- # echo 2 00:09:06.094 11:19:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:06.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
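Note: the env suite that finished above (END TEST env, just before this rpc setup) walks the DPDK-backed environment layer end to end: memory_ut exercises spdk_mem_map allocation, translation and registration, vtophys drives repeated heap expand/shrink cycles (the mp_malloc_sync pairs in the EAL log), pci_ut checks PCI device claiming, env_dpdk_post_init probes the NVMe device, and mem_callbacks verifies register/unregister notifications. A sketch of rerunning those binaries standalone, assuming the built workspace layout shown in this log, with hugepages already configured (root may be needed for the PCI and NVMe cases):

  cd /home/vagrant/spdk_repo/spdk
  test/env/memory/memory_ut
  test/env/vtophys/vtophys
  test/env/pci/pci_ut
  test/env/mem_callbacks/mem_callbacks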
00:09:06.094 11:19:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:06.094 11:19:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:06.094 11:19:24 -- scripts/common.sh@367 -- # return 0 00:09:06.094 11:19:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.094 11:19:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.094 --rc genhtml_branch_coverage=1 00:09:06.094 --rc genhtml_function_coverage=1 00:09:06.094 --rc genhtml_legend=1 00:09:06.094 --rc geninfo_all_blocks=1 00:09:06.094 --rc geninfo_unexecuted_blocks=1 00:09:06.094 00:09:06.094 ' 00:09:06.094 11:19:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.094 --rc genhtml_branch_coverage=1 00:09:06.094 --rc genhtml_function_coverage=1 00:09:06.094 --rc genhtml_legend=1 00:09:06.094 --rc geninfo_all_blocks=1 00:09:06.094 --rc geninfo_unexecuted_blocks=1 00:09:06.094 00:09:06.094 ' 00:09:06.094 11:19:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.094 --rc genhtml_branch_coverage=1 00:09:06.094 --rc genhtml_function_coverage=1 00:09:06.094 --rc genhtml_legend=1 00:09:06.094 --rc geninfo_all_blocks=1 00:09:06.094 --rc geninfo_unexecuted_blocks=1 00:09:06.094 00:09:06.094 ' 00:09:06.094 11:19:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.094 --rc genhtml_branch_coverage=1 00:09:06.094 --rc genhtml_function_coverage=1 00:09:06.094 --rc genhtml_legend=1 00:09:06.094 --rc geninfo_all_blocks=1 00:09:06.094 --rc geninfo_unexecuted_blocks=1 00:09:06.094 00:09:06.094 ' 00:09:06.094 11:19:24 -- rpc/rpc.sh@65 -- # spdk_pid=72211 00:09:06.094 11:19:24 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.094 11:19:24 -- rpc/rpc.sh@67 -- # waitforlisten 72211 00:09:06.094 11:19:24 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:06.094 11:19:24 -- common/autotest_common.sh@829 -- # '[' -z 72211 ']' 00:09:06.094 11:19:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.094 11:19:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.094 11:19:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.094 11:19:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.094 11:19:24 -- common/autotest_common.sh@10 -- # set +x 00:09:06.094 [2024-11-26 11:19:24.248061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:06.094 [2024-11-26 11:19:24.248216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72211 ] 00:09:06.353 [2024-11-26 11:19:24.420398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.353 [2024-11-26 11:19:24.465232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:06.353 [2024-11-26 11:19:24.465571] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
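Note: at this point spdk_tgt (pid 72211) is up with the bdev tracepoint group enabled (-e bdev) and waitforlisten has been polling /var/tmp/spdk.sock; everything the rpc tests below do goes over that socket as JSON-RPC. A sketch of issuing the same calls by hand with scripts/rpc.py, with method names and the 8 MiB / 512-byte-block geometry taken from the rpc_cmd trace below (the returned bdev names, e.g. Malloc0, depend on what already exists in the target):

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512    # prints the new bdev name, e.g. Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length  # 2 after both creates
  scripts/rpc.py -s /var/tmp/spdk.sock trace_get_info | jq -r .bdev.tpoint_mask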
00:09:06.353 [2024-11-26 11:19:24.465633] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72211' to capture a snapshot of events at runtime. 00:09:06.353 [2024-11-26 11:19:24.465651] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72211 for offline analysis/debug. 00:09:06.353 [2024-11-26 11:19:24.465716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.291 11:19:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.291 11:19:25 -- common/autotest_common.sh@862 -- # return 0 00:09:07.291 11:19:25 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:07.291 11:19:25 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:07.291 11:19:25 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:07.291 11:19:25 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:07.291 11:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.291 11:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.291 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 ************************************ 00:09:07.291 START TEST rpc_integrity 00:09:07.291 ************************************ 00:09:07.291 11:19:25 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:09:07.291 11:19:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:07.291 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.291 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.291 11:19:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:07.291 11:19:25 -- rpc/rpc.sh@13 -- # jq length 00:09:07.291 11:19:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:07.291 11:19:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:07.291 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.291 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.291 11:19:25 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:07.291 11:19:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:07.291 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.291 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.291 11:19:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:07.291 { 00:09:07.291 "name": "Malloc0", 00:09:07.291 "aliases": [ 00:09:07.291 "029890e1-bdb0-4ce8-a73f-181bbfc5dfa7" 00:09:07.291 ], 00:09:07.291 "product_name": "Malloc disk", 00:09:07.291 "block_size": 512, 00:09:07.291 "num_blocks": 16384, 00:09:07.291 "uuid": "029890e1-bdb0-4ce8-a73f-181bbfc5dfa7", 00:09:07.291 "assigned_rate_limits": { 00:09:07.291 "rw_ios_per_sec": 0, 00:09:07.291 "rw_mbytes_per_sec": 0, 00:09:07.292 "r_mbytes_per_sec": 0, 00:09:07.292 "w_mbytes_per_sec": 0 00:09:07.292 }, 00:09:07.292 "claimed": false, 00:09:07.292 "zoned": false, 00:09:07.292 "supported_io_types": { 00:09:07.292 "read": true, 00:09:07.292 "write": true, 00:09:07.292 "unmap": true, 00:09:07.292 "write_zeroes": true, 00:09:07.292 "flush": true, 00:09:07.292 
"reset": true, 00:09:07.292 "compare": false, 00:09:07.292 "compare_and_write": false, 00:09:07.292 "abort": true, 00:09:07.292 "nvme_admin": false, 00:09:07.292 "nvme_io": false 00:09:07.292 }, 00:09:07.292 "memory_domains": [ 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 } 00:09:07.292 ], 00:09:07.292 "driver_specific": {} 00:09:07.292 } 00:09:07.292 ]' 00:09:07.292 11:19:25 -- rpc/rpc.sh@17 -- # jq length 00:09:07.292 11:19:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:07.292 11:19:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 [2024-11-26 11:19:25.321838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:07.292 [2024-11-26 11:19:25.321960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.292 [2024-11-26 11:19:25.322009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:09:07.292 [2024-11-26 11:19:25.322036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.292 [2024-11-26 11:19:25.325217] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.292 [2024-11-26 11:19:25.325262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:07.292 Passthru0 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:07.292 { 00:09:07.292 "name": "Malloc0", 00:09:07.292 "aliases": [ 00:09:07.292 "029890e1-bdb0-4ce8-a73f-181bbfc5dfa7" 00:09:07.292 ], 00:09:07.292 "product_name": "Malloc disk", 00:09:07.292 "block_size": 512, 00:09:07.292 "num_blocks": 16384, 00:09:07.292 "uuid": "029890e1-bdb0-4ce8-a73f-181bbfc5dfa7", 00:09:07.292 "assigned_rate_limits": { 00:09:07.292 "rw_ios_per_sec": 0, 00:09:07.292 "rw_mbytes_per_sec": 0, 00:09:07.292 "r_mbytes_per_sec": 0, 00:09:07.292 "w_mbytes_per_sec": 0 00:09:07.292 }, 00:09:07.292 "claimed": true, 00:09:07.292 "claim_type": "exclusive_write", 00:09:07.292 "zoned": false, 00:09:07.292 "supported_io_types": { 00:09:07.292 "read": true, 00:09:07.292 "write": true, 00:09:07.292 "unmap": true, 00:09:07.292 "write_zeroes": true, 00:09:07.292 "flush": true, 00:09:07.292 "reset": true, 00:09:07.292 "compare": false, 00:09:07.292 "compare_and_write": false, 00:09:07.292 "abort": true, 00:09:07.292 "nvme_admin": false, 00:09:07.292 "nvme_io": false 00:09:07.292 }, 00:09:07.292 "memory_domains": [ 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 } 00:09:07.292 ], 00:09:07.292 "driver_specific": {} 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "name": "Passthru0", 00:09:07.292 "aliases": [ 00:09:07.292 "a9cfe00b-11d2-5a78-a77b-f5112629515f" 00:09:07.292 ], 00:09:07.292 "product_name": "passthru", 00:09:07.292 "block_size": 512, 00:09:07.292 "num_blocks": 16384, 00:09:07.292 "uuid": "a9cfe00b-11d2-5a78-a77b-f5112629515f", 00:09:07.292 "assigned_rate_limits": { 00:09:07.292 "rw_ios_per_sec": 0, 
00:09:07.292 "rw_mbytes_per_sec": 0, 00:09:07.292 "r_mbytes_per_sec": 0, 00:09:07.292 "w_mbytes_per_sec": 0 00:09:07.292 }, 00:09:07.292 "claimed": false, 00:09:07.292 "zoned": false, 00:09:07.292 "supported_io_types": { 00:09:07.292 "read": true, 00:09:07.292 "write": true, 00:09:07.292 "unmap": true, 00:09:07.292 "write_zeroes": true, 00:09:07.292 "flush": true, 00:09:07.292 "reset": true, 00:09:07.292 "compare": false, 00:09:07.292 "compare_and_write": false, 00:09:07.292 "abort": true, 00:09:07.292 "nvme_admin": false, 00:09:07.292 "nvme_io": false 00:09:07.292 }, 00:09:07.292 "memory_domains": [ 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 } 00:09:07.292 ], 00:09:07.292 "driver_specific": { 00:09:07.292 "passthru": { 00:09:07.292 "name": "Passthru0", 00:09:07.292 "base_bdev_name": "Malloc0" 00:09:07.292 } 00:09:07.292 } 00:09:07.292 } 00:09:07.292 ]' 00:09:07.292 11:19:25 -- rpc/rpc.sh@21 -- # jq length 00:09:07.292 11:19:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:07.292 11:19:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:07.292 11:19:25 -- rpc/rpc.sh@26 -- # jq length 00:09:07.292 ************************************ 00:09:07.292 END TEST rpc_integrity 00:09:07.292 ************************************ 00:09:07.292 11:19:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:07.292 00:09:07.292 real 0m0.153s 00:09:07.292 user 0m0.039s 00:09:07.292 sys 0m0.051s 00:09:07.292 11:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:07.292 11:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.292 11:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 ************************************ 00:09:07.292 START TEST rpc_plugins 00:09:07.292 ************************************ 00:09:07.292 11:19:25 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:09:07.292 11:19:25 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:07.292 11:19:25 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 
11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:07.292 { 00:09:07.292 "name": "Malloc1", 00:09:07.292 "aliases": [ 00:09:07.292 "02241bb2-ae30-4c44-808d-318af1a886cc" 00:09:07.292 ], 00:09:07.292 "product_name": "Malloc disk", 00:09:07.292 "block_size": 4096, 00:09:07.292 "num_blocks": 256, 00:09:07.292 "uuid": "02241bb2-ae30-4c44-808d-318af1a886cc", 00:09:07.292 "assigned_rate_limits": { 00:09:07.292 "rw_ios_per_sec": 0, 00:09:07.292 "rw_mbytes_per_sec": 0, 00:09:07.292 "r_mbytes_per_sec": 0, 00:09:07.292 "w_mbytes_per_sec": 0 00:09:07.292 }, 00:09:07.292 "claimed": false, 00:09:07.292 "zoned": false, 00:09:07.292 "supported_io_types": { 00:09:07.292 "read": true, 00:09:07.292 "write": true, 00:09:07.292 "unmap": true, 00:09:07.292 "write_zeroes": true, 00:09:07.292 "flush": true, 00:09:07.292 "reset": true, 00:09:07.292 "compare": false, 00:09:07.292 "compare_and_write": false, 00:09:07.292 "abort": true, 00:09:07.292 "nvme_admin": false, 00:09:07.292 "nvme_io": false 00:09:07.292 }, 00:09:07.292 "memory_domains": [ 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 } 00:09:07.292 ], 00:09:07.292 "driver_specific": {} 00:09:07.292 } 00:09:07.292 ]' 00:09:07.292 11:19:25 -- rpc/rpc.sh@32 -- # jq length 00:09:07.292 11:19:25 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:07.292 11:19:25 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:07.292 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.292 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.292 11:19:25 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:07.550 11:19:25 -- rpc/rpc.sh@36 -- # jq length 00:09:07.550 ************************************ 00:09:07.550 END TEST rpc_plugins 00:09:07.550 ************************************ 00:09:07.550 11:19:25 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:07.550 00:09:07.550 real 0m0.077s 00:09:07.550 user 0m0.027s 00:09:07.550 sys 0m0.017s 00:09:07.550 11:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 11:19:25 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:07.550 11:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.550 11:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 ************************************ 00:09:07.550 START TEST rpc_trace_cmd_test 00:09:07.550 ************************************ 00:09:07.550 11:19:25 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:09:07.550 11:19:25 -- rpc/rpc.sh@40 -- # local info 00:09:07.550 11:19:25 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:07.550 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.550 11:19:25 -- rpc/rpc.sh@42 -- # info='{ 00:09:07.550 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72211", 00:09:07.550 
"tpoint_group_mask": "0x8", 00:09:07.550 "iscsi_conn": { 00:09:07.550 "mask": "0x2", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "scsi": { 00:09:07.550 "mask": "0x4", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "bdev": { 00:09:07.550 "mask": "0x8", 00:09:07.550 "tpoint_mask": "0xffffffffffffffff" 00:09:07.550 }, 00:09:07.550 "nvmf_rdma": { 00:09:07.550 "mask": "0x10", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "nvmf_tcp": { 00:09:07.550 "mask": "0x20", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "ftl": { 00:09:07.550 "mask": "0x40", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "blobfs": { 00:09:07.550 "mask": "0x80", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "dsa": { 00:09:07.550 "mask": "0x200", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "thread": { 00:09:07.550 "mask": "0x400", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "nvme_pcie": { 00:09:07.550 "mask": "0x800", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "iaa": { 00:09:07.550 "mask": "0x1000", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "nvme_tcp": { 00:09:07.550 "mask": "0x2000", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 }, 00:09:07.550 "bdev_nvme": { 00:09:07.550 "mask": "0x4000", 00:09:07.550 "tpoint_mask": "0x0" 00:09:07.550 } 00:09:07.550 }' 00:09:07.550 11:19:25 -- rpc/rpc.sh@43 -- # jq length 00:09:07.550 11:19:25 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:07.550 11:19:25 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:07.550 11:19:25 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:07.550 11:19:25 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:07.550 11:19:25 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:07.550 11:19:25 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:07.550 11:19:25 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:07.550 11:19:25 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:07.550 ************************************ 00:09:07.550 END TEST rpc_trace_cmd_test 00:09:07.550 ************************************ 00:09:07.550 11:19:25 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:07.550 00:09:07.550 real 0m0.067s 00:09:07.550 user 0m0.032s 00:09:07.550 sys 0m0.028s 00:09:07.550 11:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 11:19:25 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:07.550 11:19:25 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:07.550 11:19:25 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:07.550 11:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.550 11:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 ************************************ 00:09:07.550 START TEST rpc_daemon_integrity 00:09:07.550 ************************************ 00:09:07.550 11:19:25 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:09:07.550 11:19:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:07.550 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.550 11:19:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:07.550 11:19:25 -- rpc/rpc.sh@13 -- # jq length 00:09:07.550 11:19:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:09:07.550 11:19:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:07.550 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.550 11:19:25 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:07.550 11:19:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:07.550 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.550 11:19:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:07.550 { 00:09:07.550 "name": "Malloc2", 00:09:07.550 "aliases": [ 00:09:07.550 "5ec444f6-04cf-475d-be12-c4550fa5d407" 00:09:07.550 ], 00:09:07.550 "product_name": "Malloc disk", 00:09:07.550 "block_size": 512, 00:09:07.550 "num_blocks": 16384, 00:09:07.550 "uuid": "5ec444f6-04cf-475d-be12-c4550fa5d407", 00:09:07.550 "assigned_rate_limits": { 00:09:07.550 "rw_ios_per_sec": 0, 00:09:07.550 "rw_mbytes_per_sec": 0, 00:09:07.550 "r_mbytes_per_sec": 0, 00:09:07.550 "w_mbytes_per_sec": 0 00:09:07.550 }, 00:09:07.550 "claimed": false, 00:09:07.550 "zoned": false, 00:09:07.550 "supported_io_types": { 00:09:07.550 "read": true, 00:09:07.550 "write": true, 00:09:07.550 "unmap": true, 00:09:07.550 "write_zeroes": true, 00:09:07.550 "flush": true, 00:09:07.550 "reset": true, 00:09:07.550 "compare": false, 00:09:07.550 "compare_and_write": false, 00:09:07.550 "abort": true, 00:09:07.550 "nvme_admin": false, 00:09:07.550 "nvme_io": false 00:09:07.550 }, 00:09:07.550 "memory_domains": [ 00:09:07.550 { 00:09:07.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.550 "dma_device_type": 2 00:09:07.550 } 00:09:07.550 ], 00:09:07.550 "driver_specific": {} 00:09:07.550 } 00:09:07.550 ]' 00:09:07.550 11:19:25 -- rpc/rpc.sh@17 -- # jq length 00:09:07.550 11:19:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:07.550 11:19:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:07.550 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.550 [2024-11-26 11:19:25.770600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:07.550 [2024-11-26 11:19:25.770664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.550 [2024-11-26 11:19:25.770692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:09:07.550 [2024-11-26 11:19:25.770709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.550 [2024-11-26 11:19:25.773621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.550 [2024-11-26 11:19:25.773679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:07.550 Passthru0 00:09:07.550 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.550 11:19:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:07.550 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.550 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.808 11:19:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:07.808 { 00:09:07.808 "name": "Malloc2", 00:09:07.808 "aliases": [ 00:09:07.808 "5ec444f6-04cf-475d-be12-c4550fa5d407" 00:09:07.808 ], 00:09:07.808 
"product_name": "Malloc disk", 00:09:07.808 "block_size": 512, 00:09:07.808 "num_blocks": 16384, 00:09:07.808 "uuid": "5ec444f6-04cf-475d-be12-c4550fa5d407", 00:09:07.808 "assigned_rate_limits": { 00:09:07.808 "rw_ios_per_sec": 0, 00:09:07.808 "rw_mbytes_per_sec": 0, 00:09:07.808 "r_mbytes_per_sec": 0, 00:09:07.808 "w_mbytes_per_sec": 0 00:09:07.808 }, 00:09:07.808 "claimed": true, 00:09:07.808 "claim_type": "exclusive_write", 00:09:07.808 "zoned": false, 00:09:07.808 "supported_io_types": { 00:09:07.808 "read": true, 00:09:07.808 "write": true, 00:09:07.808 "unmap": true, 00:09:07.808 "write_zeroes": true, 00:09:07.808 "flush": true, 00:09:07.808 "reset": true, 00:09:07.808 "compare": false, 00:09:07.808 "compare_and_write": false, 00:09:07.808 "abort": true, 00:09:07.808 "nvme_admin": false, 00:09:07.808 "nvme_io": false 00:09:07.808 }, 00:09:07.808 "memory_domains": [ 00:09:07.808 { 00:09:07.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.808 "dma_device_type": 2 00:09:07.808 } 00:09:07.808 ], 00:09:07.808 "driver_specific": {} 00:09:07.808 }, 00:09:07.808 { 00:09:07.808 "name": "Passthru0", 00:09:07.808 "aliases": [ 00:09:07.808 "e158f36c-b28a-50f6-a33f-778dccbb7e11" 00:09:07.808 ], 00:09:07.808 "product_name": "passthru", 00:09:07.808 "block_size": 512, 00:09:07.808 "num_blocks": 16384, 00:09:07.808 "uuid": "e158f36c-b28a-50f6-a33f-778dccbb7e11", 00:09:07.808 "assigned_rate_limits": { 00:09:07.808 "rw_ios_per_sec": 0, 00:09:07.808 "rw_mbytes_per_sec": 0, 00:09:07.808 "r_mbytes_per_sec": 0, 00:09:07.808 "w_mbytes_per_sec": 0 00:09:07.808 }, 00:09:07.808 "claimed": false, 00:09:07.808 "zoned": false, 00:09:07.808 "supported_io_types": { 00:09:07.808 "read": true, 00:09:07.808 "write": true, 00:09:07.808 "unmap": true, 00:09:07.808 "write_zeroes": true, 00:09:07.808 "flush": true, 00:09:07.808 "reset": true, 00:09:07.808 "compare": false, 00:09:07.808 "compare_and_write": false, 00:09:07.808 "abort": true, 00:09:07.808 "nvme_admin": false, 00:09:07.808 "nvme_io": false 00:09:07.808 }, 00:09:07.808 "memory_domains": [ 00:09:07.808 { 00:09:07.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.808 "dma_device_type": 2 00:09:07.808 } 00:09:07.808 ], 00:09:07.808 "driver_specific": { 00:09:07.808 "passthru": { 00:09:07.808 "name": "Passthru0", 00:09:07.808 "base_bdev_name": "Malloc2" 00:09:07.808 } 00:09:07.808 } 00:09:07.808 } 00:09:07.808 ]' 00:09:07.808 11:19:25 -- rpc/rpc.sh@21 -- # jq length 00:09:07.808 11:19:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:07.808 11:19:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:07.808 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.808 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.808 11:19:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:07.808 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.808 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.808 11:19:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:07.808 11:19:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.808 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 11:19:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.808 11:19:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:07.808 11:19:25 -- rpc/rpc.sh@26 -- # jq length 00:09:07.808 ************************************ 00:09:07.808 END 
TEST rpc_daemon_integrity 00:09:07.808 ************************************ 00:09:07.808 11:19:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:07.808 00:09:07.808 real 0m0.142s 00:09:07.808 user 0m0.053s 00:09:07.808 sys 0m0.035s 00:09:07.808 11:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.808 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.808 11:19:25 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:07.808 11:19:25 -- rpc/rpc.sh@84 -- # killprocess 72211 00:09:07.808 11:19:25 -- common/autotest_common.sh@936 -- # '[' -z 72211 ']' 00:09:07.808 11:19:25 -- common/autotest_common.sh@940 -- # kill -0 72211 00:09:07.808 11:19:25 -- common/autotest_common.sh@941 -- # uname 00:09:07.808 11:19:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.808 11:19:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72211 00:09:07.808 killing process with pid 72211 00:09:07.808 11:19:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.808 11:19:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.808 11:19:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72211' 00:09:07.808 11:19:25 -- common/autotest_common.sh@955 -- # kill 72211 00:09:07.808 11:19:25 -- common/autotest_common.sh@960 -- # wait 72211 00:09:08.066 00:09:08.066 real 0m2.266s 00:09:08.066 user 0m2.523s 00:09:08.066 sys 0m0.732s 00:09:08.066 11:19:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.066 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.066 ************************************ 00:09:08.066 END TEST rpc 00:09:08.066 ************************************ 00:09:08.066 11:19:26 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:08.066 11:19:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.066 11:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.066 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.066 ************************************ 00:09:08.066 START TEST rpc_client 00:09:08.066 ************************************ 00:09:08.066 11:19:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:08.325 * Looking for test storage... 
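The killprocess helper that closes out the rpc suite is a guard-then-kill pattern; a rough equivalent of what the log shows (the real helper in common/autotest_common.sh has more checks) is:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1   # still alive?
        # Guard against a recycled pid: never kill something now owned by sudo.
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # SIGTERM, then reap
    }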
00:09:08.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:08.325 11:19:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.325 11:19:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.325 11:19:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.325 11:19:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.325 11:19:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.325 11:19:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.325 11:19:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.325 11:19:26 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.325 11:19:26 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.325 11:19:26 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.325 11:19:26 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.325 11:19:26 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.325 11:19:26 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.325 11:19:26 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.325 11:19:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.325 11:19:26 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.325 11:19:26 -- scripts/common.sh@344 -- # : 1 00:09:08.325 11:19:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.325 11:19:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.325 11:19:26 -- scripts/common.sh@364 -- # decimal 1 00:09:08.325 11:19:26 -- scripts/common.sh@352 -- # local d=1 00:09:08.325 11:19:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.325 11:19:26 -- scripts/common.sh@354 -- # echo 1 00:09:08.325 11:19:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.325 11:19:26 -- scripts/common.sh@365 -- # decimal 2 00:09:08.325 11:19:26 -- scripts/common.sh@352 -- # local d=2 00:09:08.325 11:19:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.325 11:19:26 -- scripts/common.sh@354 -- # echo 2 00:09:08.325 11:19:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.325 11:19:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.325 11:19:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.325 11:19:26 -- scripts/common.sh@367 -- # return 0 00:09:08.325 11:19:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.325 11:19:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.325 --rc genhtml_branch_coverage=1 00:09:08.325 --rc genhtml_function_coverage=1 00:09:08.325 --rc genhtml_legend=1 00:09:08.325 --rc geninfo_all_blocks=1 00:09:08.325 --rc geninfo_unexecuted_blocks=1 00:09:08.325 00:09:08.325 ' 00:09:08.325 11:19:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.325 --rc genhtml_branch_coverage=1 00:09:08.325 --rc genhtml_function_coverage=1 00:09:08.325 --rc genhtml_legend=1 00:09:08.325 --rc geninfo_all_blocks=1 00:09:08.325 --rc geninfo_unexecuted_blocks=1 00:09:08.325 00:09:08.325 ' 00:09:08.325 11:19:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.325 --rc genhtml_branch_coverage=1 00:09:08.325 --rc genhtml_function_coverage=1 00:09:08.325 --rc genhtml_legend=1 00:09:08.325 --rc geninfo_all_blocks=1 00:09:08.325 --rc geninfo_unexecuted_blocks=1 00:09:08.325 00:09:08.325 ' 00:09:08.325 
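The lt/cmp_versions machinery from scripts/common.sh that gates these lcov flags is a field-by-field numeric compare over versions split on IFS=.-: as above. A simplified sketch (the real helper supports other comparison operators too):

    version_lt() {   # usage: version_lt 1.15 2  -> true if $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal, so not less-than
    }
    version_lt 1.15 2 && echo "installed lcov predates 2.x"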
11:19:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.325 --rc genhtml_branch_coverage=1 00:09:08.325 --rc genhtml_function_coverage=1 00:09:08.325 --rc genhtml_legend=1 00:09:08.325 --rc geninfo_all_blocks=1 00:09:08.325 --rc geninfo_unexecuted_blocks=1 00:09:08.325 00:09:08.325 ' 00:09:08.325 11:19:26 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:08.325 OK 00:09:08.325 11:19:26 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:08.325 00:09:08.325 real 0m0.232s 00:09:08.325 user 0m0.146s 00:09:08.325 sys 0m0.103s 00:09:08.325 11:19:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.325 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.325 ************************************ 00:09:08.325 END TEST rpc_client 00:09:08.325 ************************************ 00:09:08.584 11:19:26 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:08.584 11:19:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.584 11:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.584 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.584 ************************************ 00:09:08.584 START TEST json_config 00:09:08.584 ************************************ 00:09:08.584 11:19:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:08.584 11:19:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.584 11:19:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.584 11:19:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.584 11:19:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.585 11:19:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.585 11:19:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.585 11:19:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.585 11:19:26 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.585 11:19:26 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.585 11:19:26 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.585 11:19:26 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.585 11:19:26 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.585 11:19:26 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.585 11:19:26 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.585 11:19:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.585 11:19:26 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.585 11:19:26 -- scripts/common.sh@344 -- # : 1 00:09:08.585 11:19:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.585 11:19:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.585 11:19:26 -- scripts/common.sh@364 -- # decimal 1 00:09:08.585 11:19:26 -- scripts/common.sh@352 -- # local d=1 00:09:08.585 11:19:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.585 11:19:26 -- scripts/common.sh@354 -- # echo 1 00:09:08.585 11:19:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.585 11:19:26 -- scripts/common.sh@365 -- # decimal 2 00:09:08.585 11:19:26 -- scripts/common.sh@352 -- # local d=2 00:09:08.585 11:19:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.585 11:19:26 -- scripts/common.sh@354 -- # echo 2 00:09:08.585 11:19:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.585 11:19:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.585 11:19:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.585 11:19:26 -- scripts/common.sh@367 -- # return 0 00:09:08.585 11:19:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.585 11:19:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.585 --rc genhtml_branch_coverage=1 00:09:08.585 --rc genhtml_function_coverage=1 00:09:08.585 --rc genhtml_legend=1 00:09:08.585 --rc geninfo_all_blocks=1 00:09:08.585 --rc geninfo_unexecuted_blocks=1 00:09:08.585 00:09:08.585 ' 00:09:08.585 11:19:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.585 --rc genhtml_branch_coverage=1 00:09:08.585 --rc genhtml_function_coverage=1 00:09:08.585 --rc genhtml_legend=1 00:09:08.585 --rc geninfo_all_blocks=1 00:09:08.585 --rc geninfo_unexecuted_blocks=1 00:09:08.585 00:09:08.585 ' 00:09:08.585 11:19:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.585 --rc genhtml_branch_coverage=1 00:09:08.585 --rc genhtml_function_coverage=1 00:09:08.585 --rc genhtml_legend=1 00:09:08.585 --rc geninfo_all_blocks=1 00:09:08.585 --rc geninfo_unexecuted_blocks=1 00:09:08.585 00:09:08.585 ' 00:09:08.585 11:19:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.585 --rc genhtml_branch_coverage=1 00:09:08.585 --rc genhtml_function_coverage=1 00:09:08.585 --rc genhtml_legend=1 00:09:08.585 --rc geninfo_all_blocks=1 00:09:08.585 --rc geninfo_unexecuted_blocks=1 00:09:08.585 00:09:08.585 ' 00:09:08.585 11:19:26 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.585 11:19:26 -- nvmf/common.sh@7 -- # uname -s 00:09:08.585 11:19:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.585 11:19:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.585 11:19:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.585 11:19:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.585 11:19:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.585 11:19:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.585 11:19:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.585 11:19:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.585 11:19:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.585 11:19:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.585 11:19:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3e2997a5-5d7e-4ec9-92ac-75a699fb75c5 
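The NVME_HOSTID exported next is just the trailing UUID of the generated NQN; the relationship (the exact extraction inside nvmf/common.sh is an assumption here) is:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip everything through the last ':'
    echo "$NVME_HOSTID"                 # e.g. 3e2997a5-5d7e-4ec9-92ac-75a699fb75c5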
00:09:08.585 11:19:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=3e2997a5-5d7e-4ec9-92ac-75a699fb75c5 00:09:08.585 11:19:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.585 11:19:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.585 11:19:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:08.585 11:19:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.585 11:19:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.585 11:19:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.585 11:19:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.585 11:19:26 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.585 11:19:26 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.585 11:19:26 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.585 11:19:26 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.585 11:19:26 -- paths/export.sh@6 -- # export PATH 00:09:08.585 11:19:26 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:08.585 11:19:26 -- nvmf/common.sh@46 -- # : 0 00:09:08.585 
11:19:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:08.585 11:19:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:08.585 11:19:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:08.585 11:19:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.585 11:19:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.585 11:19:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:08.585 11:19:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:08.585 11:19:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:08.585 11:19:26 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:08.585 11:19:26 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:09:08.585 11:19:26 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:09:08.585 11:19:26 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:08.585 11:19:26 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:09:08.585 11:19:26 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:08.585 11:19:26 -- json_config/json_config.sh@32 -- # declare -A app_params 00:09:08.585 11:19:26 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:08.585 11:19:26 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:09:08.585 11:19:26 -- json_config/json_config.sh@43 -- # last_event_id=0 00:09:08.585 11:19:26 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:08.585 INFO: JSON configuration test init 00:09:08.585 11:19:26 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:09:08.585 11:19:26 -- json_config/json_config.sh@420 -- # json_config_test_init 00:09:08.585 11:19:26 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:09:08.585 11:19:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.585 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.585 11:19:26 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:09:08.585 11:19:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.585 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.585 11:19:26 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:09:08.585 11:19:26 -- json_config/json_config.sh@98 -- # local app=target 00:09:08.585 11:19:26 -- json_config/json_config.sh@99 -- # shift 00:09:08.585 11:19:26 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:08.585 11:19:26 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:08.585 11:19:26 -- json_config/json_config.sh@111 -- # app_pid[$app]=72455 00:09:08.585 Waiting for target to run... 
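"Waiting for target to run" means polling the RPC socket until spdk_tgt answers. A minimal stand-in for waitforlisten (the real helper is more careful about timeouts and stale sockets), using the exact launch command from this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds once the app is listening on the socket.
        $rpc rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done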
00:09:08.585 11:19:26 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:08.585 11:19:26 -- json_config/json_config.sh@114 -- # waitforlisten 72455 /var/tmp/spdk_tgt.sock 00:09:08.585 11:19:26 -- common/autotest_common.sh@829 -- # '[' -z 72455 ']' 00:09:08.585 11:19:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:08.586 11:19:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:08.586 11:19:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:08.586 11:19:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.586 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.586 11:19:26 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:08.845 [2024-11-26 11:19:26.851022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:08.845 [2024-11-26 11:19:26.851237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72455 ] 00:09:09.104 [2024-11-26 11:19:27.203224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.104 [2024-11-26 11:19:27.229315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.104 [2024-11-26 11:19:27.229606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.672 11:19:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.672 11:19:27 -- common/autotest_common.sh@862 -- # return 0 00:09:09.672 00:09:09.672 11:19:27 -- json_config/json_config.sh@115 -- # echo '' 00:09:09.672 11:19:27 -- json_config/json_config.sh@322 -- # create_accel_config 00:09:09.672 11:19:27 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:09:09.672 11:19:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.672 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 11:19:27 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:09:09.672 11:19:27 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:09:09.672 11:19:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.672 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 11:19:27 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:09.672 11:19:27 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:09:09.672 11:19:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:10.239 11:19:28 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:09:10.239 11:19:28 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:09:10.239 11:19:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.239 11:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:10.239 11:19:28 -- json_config/json_config.sh@48 -- # local ret=0 00:09:10.240 11:19:28 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:10.240 11:19:28 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:09:10.240 11:19:28 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:10.240 11:19:28 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:10.240 11:19:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:10.499 11:19:28 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:10.499 11:19:28 -- json_config/json_config.sh@51 -- # local get_types 00:09:10.499 11:19:28 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:10.499 11:19:28 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:09:10.499 11:19:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.499 11:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:10.499 11:19:28 -- json_config/json_config.sh@58 -- # return 0 00:09:10.499 11:19:28 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:09:10.499 11:19:28 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:09:10.499 11:19:28 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:09:10.499 11:19:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.499 11:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:10.499 11:19:28 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:09:10.499 11:19:28 -- json_config/json_config.sh@160 -- # local expected_notifications 00:09:10.499 11:19:28 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:09:10.499 11:19:28 -- json_config/json_config.sh@164 -- # get_notifications 00:09:10.499 11:19:28 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:10.499 11:19:28 -- json_config/json_config.sh@64 -- # IFS=: 00:09:10.499 11:19:28 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:10.499 11:19:28 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:10.499 11:19:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:10.499 11:19:28 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:10.757 11:19:28 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:10.757 11:19:28 -- json_config/json_config.sh@64 -- # IFS=: 00:09:10.757 11:19:28 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:10.757 11:19:28 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:09:10.757 11:19:28 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:09:10.757 11:19:28 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:10.757 11:19:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:11.016 Nvme0n1p0 Nvme0n1p1 00:09:11.016 11:19:29 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:11.016 11:19:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:11.273 [2024-11-26 11:19:29.396124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:11.273 [2024-11-26 11:19:29.396208] bdev.c:8019:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc0 00:09:11.273 00:09:11.273 11:19:29 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:11.273 11:19:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:11.532 Malloc3 00:09:11.532 11:19:29 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:11.532 11:19:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:11.790 [2024-11-26 11:19:29.856483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:11.790 [2024-11-26 11:19:29.856574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.790 [2024-11-26 11:19:29.856610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:09:11.790 [2024-11-26 11:19:29.856629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.790 [2024-11-26 11:19:29.859737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.790 [2024-11-26 11:19:29.859810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:11.790 PTBdevFromMalloc3 00:09:11.790 11:19:29 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:11.790 11:19:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:12.049 Null0 00:09:12.049 11:19:30 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:12.049 11:19:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:12.308 Malloc0 00:09:12.308 11:19:30 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:12.308 11:19:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:12.568 Malloc1 00:09:12.568 11:19:30 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:12.568 11:19:30 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:12.828 102400+0 records in 00:09:12.828 102400+0 records out 00:09:12.828 104857600 bytes (105 MB, 100 MiB) copied, 0.246989 s, 425 MB/s 00:09:12.828 11:19:30 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:12.828 11:19:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:13.088 aio_disk 00:09:13.088 11:19:31 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:13.088 11:19:31 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:13.088 11:19:31 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:13.346 ce74130c-2ef3-4e1b-b5ff-dec5602f8e15 00:09:13.346 11:19:31 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:13.346 11:19:31 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:13.346 11:19:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:13.346 11:19:31 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:13.346 11:19:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:13.605 11:19:31 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:13.605 11:19:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:13.864 11:19:32 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:13.864 11:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:14.124 11:19:32 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:09:14.124 11:19:32 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:09:14.124 11:19:32 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:6804db42-c8fb-4fde-85bd-b9bc9c44dc56 bdev_register:8ef73879-5f0b-4791-9d78-c595c9662e6f bdev_register:23f25ad0-52df-414a-938c-9f025b2b0f6e bdev_register:968369e2-1727-40f0-9823-28c55595b563 00:09:14.124 11:19:32 -- json_config/json_config.sh@70 -- # local events_to_check 00:09:14.124 11:19:32 -- json_config/json_config.sh@71 -- # local recorded_events 00:09:14.124 11:19:32 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:14.124 11:19:32 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:6804db42-c8fb-4fde-85bd-b9bc9c44dc56 bdev_register:8ef73879-5f0b-4791-9d78-c595c9662e6f bdev_register:23f25ad0-52df-414a-938c-9f025b2b0f6e bdev_register:968369e2-1727-40f0-9823-28c55595b563 00:09:14.124 11:19:32 -- json_config/json_config.sh@74 -- # sort 00:09:14.124 11:19:32 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:09:14.124 11:19:32 -- json_config/json_config.sh@75 -- # get_notifications 00:09:14.124 11:19:32 -- json_config/json_config.sh@75 -- # sort 00:09:14.124 11:19:32 -- 
json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:14.124 11:19:32 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:14.124 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.124 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.124 11:19:32 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:14.124 11:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 
11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:6804db42-c8fb-4fde-85bd-b9bc9c44dc56 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:8ef73879-5f0b-4791-9d78-c595c9662e6f 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:23f25ad0-52df-414a-938c-9f025b2b0f6e 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@65 -- # echo bdev_register:968369e2-1727-40f0-9823-28c55595b563 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # IFS=: 00:09:14.384 11:19:32 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:14.384 11:19:32 -- json_config/json_config.sh@77 -- # [[ bdev_register:23f25ad0-52df-414a-938c-9f025b2b0f6e bdev_register:6804db42-c8fb-4fde-85bd-b9bc9c44dc56 bdev_register:8ef73879-5f0b-4791-9d78-c595c9662e6f bdev_register:968369e2-1727-40f0-9823-28c55595b563 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\3\f\2\5\a\d\0\-\5\2\d\f\-\4\1\4\a\-\9\3\8\c\-\9\f\0\2\5\b\2\b\0\f\6\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\8\0\4\d\b\4\2\-\c\8\f\b\-\4\f\d\e\-\8\5\b\d\-\b\9\b\c\9\c\4\4\d\c\5\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\e\f\7\3\8\7\9\-\5\f\0\b\-\4\7\9\1\-\9\d\7\8\-\c\5\9\5\c\9\6\6\2\e\6\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\6\8\3\6\9\e\2\-\1\7\2\7\-\4\0\f\0\-\9\8\2\3\-\2\8\c\5\5\5\9\5\b\5\6\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:09:14.385 11:19:32 -- json_config/json_config.sh@89 -- # cat 00:09:14.385 11:19:32 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:23f25ad0-52df-414a-938c-9f025b2b0f6e bdev_register:6804db42-c8fb-4fde-85bd-b9bc9c44dc56 bdev_register:8ef73879-5f0b-4791-9d78-c595c9662e6f bdev_register:968369e2-1727-40f0-9823-28c55595b563 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:09:14.385 Expected events matched: 00:09:14.385 bdev_register:23f25ad0-52df-414a-938c-9f025b2b0f6e 00:09:14.385 bdev_register:6804db42-c8fb-4fde-85bd-b9bc9c44dc56 00:09:14.385 bdev_register:8ef73879-5f0b-4791-9d78-c595c9662e6f 00:09:14.385 bdev_register:968369e2-1727-40f0-9823-28c55595b563 00:09:14.385 
bdev_register:Malloc0 00:09:14.385 bdev_register:Malloc0p0 00:09:14.385 bdev_register:Malloc0p1 00:09:14.385 bdev_register:Malloc0p2 00:09:14.385 bdev_register:Malloc1 00:09:14.385 bdev_register:Malloc3 00:09:14.385 bdev_register:Null0 00:09:14.385 bdev_register:Nvme0n1 00:09:14.385 bdev_register:Nvme0n1p0 00:09:14.385 bdev_register:Nvme0n1p1 00:09:14.385 bdev_register:PTBdevFromMalloc3 00:09:14.385 bdev_register:aio_disk 00:09:14.385 11:19:32 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:09:14.385 11:19:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.385 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:14.385 11:19:32 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:09:14.385 11:19:32 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:09:14.385 11:19:32 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:09:14.385 11:19:32 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:09:14.385 11:19:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.385 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:14.385 11:19:32 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:09:14.385 11:19:32 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:14.385 11:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:14.643 MallocBdevForConfigChangeCheck 00:09:14.643 11:19:32 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:09:14.643 11:19:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.643 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:14.643 11:19:32 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:09:14.643 11:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:15.212 INFO: shutting down applications... 00:09:15.212 11:19:33 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
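The MallocBdevForConfigChangeCheck step just created works by snapshotting the JSON config around a state change; in outline (the harness normalizes with test/json_config/config_filter.py rather than raw diff):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc save_config > /tmp/before.json
    $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $rpc save_config > /tmp/after.json

    ! diff -q /tmp/before.json /tmp/after.json   # a config change must be visible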
00:09:15.212 11:19:33 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:09:15.212 11:19:33 -- json_config/json_config.sh@431 -- # json_config_clear target 00:09:15.212 11:19:33 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:09:15.212 11:19:33 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:15.212 [2024-11-26 11:19:33.380555] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:15.471 Calling clear_vhost_scsi_subsystem 00:09:15.471 Calling clear_iscsi_subsystem 00:09:15.471 Calling clear_vhost_blk_subsystem 00:09:15.471 Calling clear_ublk_subsystem 00:09:15.471 Calling clear_nbd_subsystem 00:09:15.471 Calling clear_nvmf_subsystem 00:09:15.471 Calling clear_bdev_subsystem 00:09:15.471 Calling clear_accel_subsystem 00:09:15.471 Calling clear_iobuf_subsystem 00:09:15.471 Calling clear_sock_subsystem 00:09:15.471 Calling clear_vmd_subsystem 00:09:15.471 Calling clear_scheduler_subsystem 00:09:15.471 11:19:33 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:15.471 11:19:33 -- json_config/json_config.sh@396 -- # count=100 00:09:15.471 11:19:33 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:09:15.471 11:19:33 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:15.471 11:19:33 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:15.471 11:19:33 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:15.731 11:19:33 -- json_config/json_config.sh@398 -- # break 00:09:15.731 11:19:33 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:09:15.731 11:19:33 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:09:15.731 11:19:33 -- json_config/json_config.sh@120 -- # local app=target 00:09:15.731 11:19:33 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:09:15.731 11:19:33 -- json_config/json_config.sh@124 -- # [[ -n 72455 ]] 00:09:15.731 11:19:33 -- json_config/json_config.sh@127 -- # kill -SIGINT 72455 00:09:15.731 11:19:33 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:09:15.731 11:19:33 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:15.731 11:19:33 -- json_config/json_config.sh@130 -- # kill -0 72455 00:09:15.731 11:19:33 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:16.369 11:19:34 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:16.369 11:19:34 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:16.369 11:19:34 -- json_config/json_config.sh@130 -- # kill -0 72455 00:09:16.369 11:19:34 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:16.369 11:19:34 -- json_config/json_config.sh@132 -- # break 00:09:16.369 11:19:34 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:16.369 11:19:34 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:16.369 SPDK target shutdown done 00:09:16.369 INFO: relaunching applications... 00:09:16.369 11:19:34 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
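The target shutdown traced above is a plain SIGINT-then-poll loop rather than an RPC. A standalone sketch of the pattern, assuming the spdk_tgt PID is already in $pid (the 30-iteration, 0.5 s cadence mirrors json_config.sh and is not a fixed interface):

    kill -SIGINT "$pid"                      # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break  # kill -0 only probes process liveness
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && echo "target still alive after ~15 s" >&2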
00:09:16.369 11:19:34 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:16.369 11:19:34 -- json_config/json_config.sh@98 -- # local app=target 00:09:16.369 11:19:34 -- json_config/json_config.sh@99 -- # shift 00:09:16.369 11:19:34 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:16.369 11:19:34 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:16.369 11:19:34 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:16.369 11:19:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:16.369 11:19:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:16.369 11:19:34 -- json_config/json_config.sh@111 -- # app_pid[$app]=72694 00:09:16.369 11:19:34 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:16.369 Waiting for target to run... 00:09:16.369 11:19:34 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:16.369 11:19:34 -- json_config/json_config.sh@114 -- # waitforlisten 72694 /var/tmp/spdk_tgt.sock 00:09:16.369 11:19:34 -- common/autotest_common.sh@829 -- # '[' -z 72694 ']' 00:09:16.369 11:19:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:16.369 11:19:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:16.369 11:19:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:16.369 11:19:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.369 11:19:34 -- common/autotest_common.sh@10 -- # set +x 00:09:16.369 [2024-11-26 11:19:34.482600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:16.369 [2024-11-26 11:19:34.482765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72694 ] 00:09:16.629 [2024-11-26 11:19:34.809756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.629 [2024-11-26 11:19:34.832272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.629 [2024-11-26 11:19:34.832604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.888 [2024-11-26 11:19:34.966529] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:16.888 [2024-11-26 11:19:34.966627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:16.888 [2024-11-26 11:19:34.974511] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:16.888 [2024-11-26 11:19:34.974592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:16.888 [2024-11-26 11:19:34.982529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:16.888 [2024-11-26 11:19:34.982599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:16.888 [2024-11-26 11:19:34.982619] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:16.888 [2024-11-26 11:19:35.068007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:16.888 [2024-11-26 11:19:35.068092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.888 [2024-11-26 11:19:35.068117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:09:16.888 [2024-11-26 11:19:35.068129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.888 [2024-11-26 11:19:35.068567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.888 [2024-11-26 11:19:35.068602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:17.146 11:19:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.147 11:19:35 -- common/autotest_common.sh@862 -- # return 0 00:09:17.147 00:09:17.147 11:19:35 -- json_config/json_config.sh@115 -- # echo '' 00:09:17.147 11:19:35 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:17.147 INFO: Checking if target configuration is the same... 00:09:17.147 11:19:35 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:17.147 11:19:35 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:17.147 11:19:35 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:17.147 11:19:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:17.147 + '[' 2 -ne 2 ']' 00:09:17.147 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:17.147 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:17.147 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:17.147 +++ basename /dev/fd/62 00:09:17.147 ++ mktemp /tmp/62.XXX 00:09:17.147 + tmp_file_1=/tmp/62.qxB 00:09:17.147 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:17.147 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:17.147 + tmp_file_2=/tmp/spdk_tgt_config.json.lVF 00:09:17.147 + ret=0 00:09:17.147 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:17.714 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:17.714 + diff -u /tmp/62.qxB /tmp/spdk_tgt_config.json.lVF 00:09:17.714 INFO: JSON config files are the same 00:09:17.714 + echo 'INFO: JSON config files are the same' 00:09:17.714 + rm /tmp/62.qxB /tmp/spdk_tgt_config.json.lVF 00:09:17.714 + exit 0 00:09:17.714 11:19:35 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:17.714 11:19:35 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:17.714 INFO: changing configuration and checking if this can be detected... 00:09:17.714 11:19:35 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:17.714 11:19:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:17.974 11:19:36 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:17.974 11:19:36 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:17.974 11:19:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:17.974 + '[' 2 -ne 2 ']' 00:09:17.974 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:17.974 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:17.974 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:17.974 +++ basename /dev/fd/62 00:09:17.974 ++ mktemp /tmp/62.XXX 00:09:17.974 + tmp_file_1=/tmp/62.3Sl 00:09:17.974 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:17.974 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:17.974 + tmp_file_2=/tmp/spdk_tgt_config.json.8Y2 00:09:17.974 + ret=0 00:09:17.974 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:18.233 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:18.233 + diff -u /tmp/62.3Sl /tmp/spdk_tgt_config.json.8Y2 00:09:18.233 + ret=1 00:09:18.233 + echo '=== Start of file: /tmp/62.3Sl ===' 00:09:18.233 + cat /tmp/62.3Sl 00:09:18.233 + echo '=== End of file: /tmp/62.3Sl ===' 00:09:18.233 + echo '' 00:09:18.233 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8Y2 ===' 00:09:18.233 + cat /tmp/spdk_tgt_config.json.8Y2 00:09:18.233 + echo '=== End of file: /tmp/spdk_tgt_config.json.8Y2 ===' 00:09:18.233 + echo '' 00:09:18.233 + rm /tmp/62.3Sl /tmp/spdk_tgt_config.json.8Y2 00:09:18.233 + exit 1 00:09:18.233 INFO: configuration change detected. 00:09:18.233 11:19:36 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
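Both json_diff.sh runs above reduce to the same technique: dump the live configuration over RPC, normalize the two JSON documents, and diff them. A hedged sketch of that flow, assuming config_filter.py filters JSON on stdin the way json_diff.sh drives it (the /tmp file names here are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # sorting makes the comparison order-insensitive before diffing
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    if diff -u /tmp/saved.json /tmp/live.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi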
00:09:18.233 11:19:36 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:18.233 11:19:36 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:18.233 11:19:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.233 11:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 11:19:36 -- json_config/json_config.sh@360 -- # local ret=0 00:09:18.233 11:19:36 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:18.233 11:19:36 -- json_config/json_config.sh@370 -- # [[ -n 72694 ]] 00:09:18.233 11:19:36 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:18.233 11:19:36 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:18.233 11:19:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.233 11:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 11:19:36 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:09:18.233 11:19:36 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:18.233 11:19:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:18.493 11:19:36 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:18.493 11:19:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:18.752 11:19:36 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:18.752 11:19:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:19.011 11:19:37 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:19.011 11:19:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:19.270 11:19:37 -- json_config/json_config.sh@246 -- # uname -s 00:09:19.270 11:19:37 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:19.270 11:19:37 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:19.270 11:19:37 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:19.270 11:19:37 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:19.270 11:19:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.270 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:09:19.270 11:19:37 -- json_config/json_config.sh@376 -- # killprocess 72694 00:09:19.270 11:19:37 -- common/autotest_common.sh@936 -- # '[' -z 72694 ']' 00:09:19.270 11:19:37 -- common/autotest_common.sh@940 -- # kill -0 72694 00:09:19.270 11:19:37 -- common/autotest_common.sh@941 -- # uname 00:09:19.270 11:19:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.270 11:19:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72694 00:09:19.270 11:19:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.270 killing process with pid 72694 00:09:19.270 11:19:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.270 11:19:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72694' 00:09:19.270 11:19:37 -- common/autotest_common.sh@955 -- # kill 72694 00:09:19.270 11:19:37 -- common/autotest_common.sh@960 -- # wait 72694 00:09:19.529 11:19:37 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:19.529 11:19:37 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:19.529 11:19:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.529 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:09:19.529 11:19:37 -- json_config/json_config.sh@381 -- # return 0 00:09:19.529 INFO: Success 00:09:19.529 11:19:37 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:19.529 00:09:19.529 real 0m11.123s 00:09:19.529 user 0m16.942s 00:09:19.529 sys 0m2.214s 00:09:19.529 11:19:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.529 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:09:19.529 ************************************ 00:09:19.529 END TEST json_config 00:09:19.529 ************************************ 00:09:19.529 11:19:37 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:19.529 11:19:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.529 11:19:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.529 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:09:19.529 ************************************ 00:09:19.529 START TEST json_config_extra_key 00:09:19.529 ************************************ 00:09:19.529 11:19:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:19.789 11:19:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:19.789 11:19:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:19.789 11:19:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:19.789 11:19:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:19.789 11:19:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:19.789 11:19:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:19.789 11:19:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:19.789 11:19:37 -- scripts/common.sh@335 -- # IFS=.-: 00:09:19.789 11:19:37 -- scripts/common.sh@335 -- # read -ra ver1 00:09:19.789 11:19:37 -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.789 11:19:37 -- scripts/common.sh@336 -- # read -ra ver2 00:09:19.789 11:19:37 -- scripts/common.sh@337 -- # local 'op=<' 00:09:19.789 11:19:37 -- scripts/common.sh@339 -- # ver1_l=2 00:09:19.789 11:19:37 -- scripts/common.sh@340 -- # ver2_l=1 00:09:19.789 11:19:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:19.789 11:19:37 -- scripts/common.sh@343 -- # case "$op" in 00:09:19.789 11:19:37 -- scripts/common.sh@344 -- # : 1 00:09:19.789 11:19:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:19.789 11:19:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.789 11:19:37 -- scripts/common.sh@364 -- # decimal 1 00:09:19.789 11:19:37 -- scripts/common.sh@352 -- # local d=1 00:09:19.789 11:19:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.789 11:19:37 -- scripts/common.sh@354 -- # echo 1 00:09:19.789 11:19:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:19.789 11:19:37 -- scripts/common.sh@365 -- # decimal 2 00:09:19.789 11:19:37 -- scripts/common.sh@352 -- # local d=2 00:09:19.789 11:19:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.789 11:19:37 -- scripts/common.sh@354 -- # echo 2 00:09:19.789 11:19:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:19.789 11:19:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:19.789 11:19:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:19.789 11:19:37 -- scripts/common.sh@367 -- # return 0 00:09:19.789 11:19:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.789 11:19:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:19.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.789 --rc genhtml_branch_coverage=1 00:09:19.789 --rc genhtml_function_coverage=1 00:09:19.789 --rc genhtml_legend=1 00:09:19.789 --rc geninfo_all_blocks=1 00:09:19.789 --rc geninfo_unexecuted_blocks=1 00:09:19.789 00:09:19.789 ' 00:09:19.789 11:19:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:19.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.789 --rc genhtml_branch_coverage=1 00:09:19.789 --rc genhtml_function_coverage=1 00:09:19.789 --rc genhtml_legend=1 00:09:19.789 --rc geninfo_all_blocks=1 00:09:19.789 --rc geninfo_unexecuted_blocks=1 00:09:19.789 00:09:19.789 ' 00:09:19.789 11:19:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:19.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.789 --rc genhtml_branch_coverage=1 00:09:19.789 --rc genhtml_function_coverage=1 00:09:19.789 --rc genhtml_legend=1 00:09:19.789 --rc geninfo_all_blocks=1 00:09:19.789 --rc geninfo_unexecuted_blocks=1 00:09:19.789 00:09:19.789 ' 00:09:19.789 11:19:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:19.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.789 --rc genhtml_branch_coverage=1 00:09:19.789 --rc genhtml_function_coverage=1 00:09:19.789 --rc genhtml_legend=1 00:09:19.789 --rc geninfo_all_blocks=1 00:09:19.789 --rc geninfo_unexecuted_blocks=1 00:09:19.789 00:09:19.789 ' 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.789 11:19:37 -- nvmf/common.sh@7 -- # uname -s 00:09:19.789 11:19:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.789 11:19:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.789 11:19:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.789 11:19:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.789 11:19:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.789 11:19:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.789 11:19:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.789 11:19:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.789 11:19:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.789 11:19:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.789 11:19:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3e2997a5-5d7e-4ec9-92ac-75a699fb75c5 00:09:19.789 11:19:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=3e2997a5-5d7e-4ec9-92ac-75a699fb75c5 00:09:19.789 11:19:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.789 11:19:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.789 11:19:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:19.789 11:19:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.789 11:19:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.789 11:19:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.789 11:19:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.789 11:19:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:19.789 11:19:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:19.789 11:19:37 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:19.789 11:19:37 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:19.789 11:19:37 -- paths/export.sh@6 -- # export PATH 00:09:19.789 11:19:37 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:19.789 11:19:37 -- nvmf/common.sh@46 -- # : 0 00:09:19.789 11:19:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:19.789 11:19:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:19.789 11:19:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:19.789 11:19:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.789 11:19:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.789 11:19:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:19.789 11:19:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:19.789 11:19:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:19.789 11:19:37 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:19.790 INFO: launching applications... 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=72856 00:09:19.790 Waiting for target to run... 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:19.790 11:19:37 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 72856 /var/tmp/spdk_tgt.sock 00:09:19.790 11:19:37 -- common/autotest_common.sh@829 -- # '[' -z 72856 ']' 00:09:19.790 11:19:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:19.790 11:19:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:19.790 11:19:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:19.790 11:19:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.790 11:19:37 -- common/autotest_common.sh@10 -- # set +x 00:09:19.790 [2024-11-26 11:19:37.986188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:19.790 [2024-11-26 11:19:37.986394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72856 ] 00:09:20.357 [2024-11-26 11:19:38.330127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.357 [2024-11-26 11:19:38.349137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.357 [2024-11-26 11:19:38.349369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.923 00:09:20.923 INFO: shutting down applications... 00:09:20.923 11:19:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.923 11:19:38 -- common/autotest_common.sh@862 -- # return 0 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
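The extra-key launch above boils down to starting spdk_tgt with an application JSON and waiting for its RPC socket. A sketch under the same paths and flags as the log; the socket-existence poll is only a crude stand-in for the test suite's waitforlisten helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # wait until the UNIX-domain RPC socket shows up (waitforlisten also
    # confirms the target accepts connections, which this check does not)
    until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done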
00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 72856 ]] 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 72856 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@50 -- # kill -0 72856 00:09:20.923 11:19:38 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 72856 00:09:21.491 SPDK target shutdown done 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:21.491 Success 00:09:21.491 11:19:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:21.491 00:09:21.491 real 0m1.673s 00:09:21.491 user 0m1.498s 00:09:21.491 sys 0m0.409s 00:09:21.491 11:19:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.491 11:19:39 -- common/autotest_common.sh@10 -- # set +x 00:09:21.491 ************************************ 00:09:21.491 END TEST json_config_extra_key 00:09:21.491 ************************************ 00:09:21.491 11:19:39 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:21.491 11:19:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.491 11:19:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.491 11:19:39 -- common/autotest_common.sh@10 -- # set +x 00:09:21.491 ************************************ 00:09:21.491 START TEST alias_rpc 00:09:21.491 ************************************ 00:09:21.491 11:19:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:21.491 * Looking for test storage... 
00:09:21.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:21.491 11:19:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:21.491 11:19:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:21.491 11:19:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:21.491 11:19:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:21.491 11:19:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:21.491 11:19:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:21.491 11:19:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:21.491 11:19:39 -- scripts/common.sh@335 -- # IFS=.-: 00:09:21.491 11:19:39 -- scripts/common.sh@335 -- # read -ra ver1 00:09:21.491 11:19:39 -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.491 11:19:39 -- scripts/common.sh@336 -- # read -ra ver2 00:09:21.491 11:19:39 -- scripts/common.sh@337 -- # local 'op=<' 00:09:21.491 11:19:39 -- scripts/common.sh@339 -- # ver1_l=2 00:09:21.491 11:19:39 -- scripts/common.sh@340 -- # ver2_l=1 00:09:21.491 11:19:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:21.491 11:19:39 -- scripts/common.sh@343 -- # case "$op" in 00:09:21.491 11:19:39 -- scripts/common.sh@344 -- # : 1 00:09:21.491 11:19:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:21.491 11:19:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.491 11:19:39 -- scripts/common.sh@364 -- # decimal 1 00:09:21.491 11:19:39 -- scripts/common.sh@352 -- # local d=1 00:09:21.491 11:19:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.491 11:19:39 -- scripts/common.sh@354 -- # echo 1 00:09:21.491 11:19:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:21.491 11:19:39 -- scripts/common.sh@365 -- # decimal 2 00:09:21.491 11:19:39 -- scripts/common.sh@352 -- # local d=2 00:09:21.491 11:19:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.491 11:19:39 -- scripts/common.sh@354 -- # echo 2 00:09:21.491 11:19:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:21.491 11:19:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:21.491 11:19:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:21.491 11:19:39 -- scripts/common.sh@367 -- # return 0 00:09:21.491 11:19:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.491 11:19:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.491 --rc genhtml_branch_coverage=1 00:09:21.491 --rc genhtml_function_coverage=1 00:09:21.491 --rc genhtml_legend=1 00:09:21.491 --rc geninfo_all_blocks=1 00:09:21.491 --rc geninfo_unexecuted_blocks=1 00:09:21.491 00:09:21.491 ' 00:09:21.491 11:19:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.491 --rc genhtml_branch_coverage=1 00:09:21.491 --rc genhtml_function_coverage=1 00:09:21.491 --rc genhtml_legend=1 00:09:21.491 --rc geninfo_all_blocks=1 00:09:21.491 --rc geninfo_unexecuted_blocks=1 00:09:21.491 00:09:21.491 ' 00:09:21.491 11:19:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.491 --rc genhtml_branch_coverage=1 00:09:21.491 --rc genhtml_function_coverage=1 00:09:21.491 --rc genhtml_legend=1 00:09:21.491 --rc geninfo_all_blocks=1 00:09:21.491 --rc geninfo_unexecuted_blocks=1 00:09:21.491 00:09:21.491 ' 
00:09:21.491 11:19:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:21.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.491 --rc genhtml_branch_coverage=1 00:09:21.491 --rc genhtml_function_coverage=1 00:09:21.491 --rc genhtml_legend=1 00:09:21.491 --rc geninfo_all_blocks=1 00:09:21.491 --rc geninfo_unexecuted_blocks=1 00:09:21.491 00:09:21.491 ' 00:09:21.491 11:19:39 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:21.491 11:19:39 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72930 00:09:21.491 11:19:39 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72930 00:09:21.491 11:19:39 -- common/autotest_common.sh@829 -- # '[' -z 72930 ']' 00:09:21.491 11:19:39 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.491 11:19:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.491 11:19:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.492 11:19:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.492 11:19:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.492 11:19:39 -- common/autotest_common.sh@10 -- # set +x 00:09:21.751 [2024-11-26 11:19:39.731311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:21.751 [2024-11-26 11:19:39.731516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72930 ] 00:09:21.751 [2024-11-26 11:19:39.900779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.751 [2024-11-26 11:19:39.945131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:21.751 [2024-11-26 11:19:39.945458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.686 11:19:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.686 11:19:40 -- common/autotest_common.sh@862 -- # return 0 00:09:22.686 11:19:40 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:22.944 11:19:40 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72930 00:09:22.944 11:19:40 -- common/autotest_common.sh@936 -- # '[' -z 72930 ']' 00:09:22.944 11:19:40 -- common/autotest_common.sh@940 -- # kill -0 72930 00:09:22.944 11:19:40 -- common/autotest_common.sh@941 -- # uname 00:09:22.944 11:19:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:22.944 11:19:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72930 00:09:22.944 11:19:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:22.944 11:19:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:22.944 killing process with pid 72930 00:09:22.944 11:19:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72930' 00:09:22.944 11:19:41 -- common/autotest_common.sh@955 -- # kill 72930 00:09:22.944 11:19:41 -- common/autotest_common.sh@960 -- # wait 72930 00:09:23.203 00:09:23.203 real 0m1.848s 00:09:23.203 user 0m2.109s 00:09:23.203 sys 0m0.446s 00:09:23.203 11:19:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.203 11:19:41 -- common/autotest_common.sh@10 -- # set +x 
00:09:23.203 ************************************ 00:09:23.203 END TEST alias_rpc 00:09:23.203 ************************************ 00:09:23.203 11:19:41 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:09:23.203 11:19:41 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:23.203 11:19:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.203 11:19:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.203 11:19:41 -- common/autotest_common.sh@10 -- # set +x 00:09:23.203 ************************************ 00:09:23.203 START TEST spdkcli_tcp 00:09:23.203 ************************************ 00:09:23.203 11:19:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:23.462 * Looking for test storage... 00:09:23.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:23.462 11:19:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:23.462 11:19:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:23.462 11:19:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:23.462 11:19:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:23.462 11:19:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:23.462 11:19:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:23.462 11:19:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:23.462 11:19:41 -- scripts/common.sh@335 -- # IFS=.-: 00:09:23.462 11:19:41 -- scripts/common.sh@335 -- # read -ra ver1 00:09:23.462 11:19:41 -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.462 11:19:41 -- scripts/common.sh@336 -- # read -ra ver2 00:09:23.462 11:19:41 -- scripts/common.sh@337 -- # local 'op=<' 00:09:23.462 11:19:41 -- scripts/common.sh@339 -- # ver1_l=2 00:09:23.462 11:19:41 -- scripts/common.sh@340 -- # ver2_l=1 00:09:23.462 11:19:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:23.462 11:19:41 -- scripts/common.sh@343 -- # case "$op" in 00:09:23.462 11:19:41 -- scripts/common.sh@344 -- # : 1 00:09:23.462 11:19:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:23.462 11:19:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.462 11:19:41 -- scripts/common.sh@364 -- # decimal 1 00:09:23.462 11:19:41 -- scripts/common.sh@352 -- # local d=1 00:09:23.462 11:19:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.462 11:19:41 -- scripts/common.sh@354 -- # echo 1 00:09:23.462 11:19:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:23.462 11:19:41 -- scripts/common.sh@365 -- # decimal 2 00:09:23.462 11:19:41 -- scripts/common.sh@352 -- # local d=2 00:09:23.462 11:19:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.462 11:19:41 -- scripts/common.sh@354 -- # echo 2 00:09:23.462 11:19:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:23.462 11:19:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:23.462 11:19:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:23.462 11:19:41 -- scripts/common.sh@367 -- # return 0 00:09:23.462 11:19:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.462 11:19:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.462 --rc genhtml_branch_coverage=1 00:09:23.462 --rc genhtml_function_coverage=1 00:09:23.462 --rc genhtml_legend=1 00:09:23.462 --rc geninfo_all_blocks=1 00:09:23.462 --rc geninfo_unexecuted_blocks=1 00:09:23.462 00:09:23.462 ' 00:09:23.462 11:19:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.462 --rc genhtml_branch_coverage=1 00:09:23.462 --rc genhtml_function_coverage=1 00:09:23.462 --rc genhtml_legend=1 00:09:23.462 --rc geninfo_all_blocks=1 00:09:23.462 --rc geninfo_unexecuted_blocks=1 00:09:23.462 00:09:23.462 ' 00:09:23.462 11:19:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.462 --rc genhtml_branch_coverage=1 00:09:23.462 --rc genhtml_function_coverage=1 00:09:23.462 --rc genhtml_legend=1 00:09:23.462 --rc geninfo_all_blocks=1 00:09:23.462 --rc geninfo_unexecuted_blocks=1 00:09:23.462 00:09:23.462 ' 00:09:23.462 11:19:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.462 --rc genhtml_branch_coverage=1 00:09:23.462 --rc genhtml_function_coverage=1 00:09:23.462 --rc genhtml_legend=1 00:09:23.462 --rc geninfo_all_blocks=1 00:09:23.462 --rc geninfo_unexecuted_blocks=1 00:09:23.462 00:09:23.462 ' 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:23.462 11:19:41 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:23.462 11:19:41 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:23.462 11:19:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.462 11:19:41 -- common/autotest_common.sh@10 -- # set +x 00:09:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:23.462 11:19:41 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=73014 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@27 -- # waitforlisten 73014 00:09:23.462 11:19:41 -- common/autotest_common.sh@829 -- # '[' -z 73014 ']' 00:09:23.462 11:19:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.462 11:19:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.462 11:19:41 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:23.462 11:19:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.462 11:19:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.462 11:19:41 -- common/autotest_common.sh@10 -- # set +x 00:09:23.463 [2024-11-26 11:19:41.623724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.463 [2024-11-26 11:19:41.623884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73014 ] 00:09:23.721 [2024-11-26 11:19:41.787697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.721 [2024-11-26 11:19:41.827296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:23.721 [2024-11-26 11:19:41.827767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.721 [2024-11-26 11:19:41.827846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.657 11:19:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.657 11:19:42 -- common/autotest_common.sh@862 -- # return 0 00:09:24.657 11:19:42 -- spdkcli/tcp.sh@31 -- # socat_pid=73031 00:09:24.657 11:19:42 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:24.657 11:19:42 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:24.657 [ 00:09:24.657 "spdk_get_version", 00:09:24.657 "rpc_get_methods", 00:09:24.657 "trace_get_info", 00:09:24.657 "trace_get_tpoint_group_mask", 00:09:24.657 "trace_disable_tpoint_group", 00:09:24.657 "trace_enable_tpoint_group", 00:09:24.657 "trace_clear_tpoint_mask", 00:09:24.657 "trace_set_tpoint_mask", 00:09:24.657 "framework_get_pci_devices", 00:09:24.657 "framework_get_config", 00:09:24.657 "framework_get_subsystems", 00:09:24.657 "iobuf_get_stats", 00:09:24.657 "iobuf_set_options", 00:09:24.657 "sock_set_default_impl", 00:09:24.657 "sock_impl_set_options", 00:09:24.657 "sock_impl_get_options", 00:09:24.657 "vmd_rescan", 00:09:24.657 "vmd_remove_device", 00:09:24.657 "vmd_enable", 00:09:24.657 "accel_get_stats", 00:09:24.657 "accel_set_options", 00:09:24.657 "accel_set_driver", 00:09:24.657 "accel_crypto_key_destroy", 00:09:24.657 "accel_crypto_keys_get", 00:09:24.657 "accel_crypto_key_create", 00:09:24.658 "accel_assign_opc", 00:09:24.658 "accel_get_module_info", 00:09:24.658 "accel_get_opc_assignments", 00:09:24.658 "notify_get_notifications", 00:09:24.658 "notify_get_types", 00:09:24.658 "bdev_get_histogram", 00:09:24.658 "bdev_enable_histogram", 00:09:24.658 "bdev_set_qos_limit", 00:09:24.658 "bdev_set_qd_sampling_period", 00:09:24.658 "bdev_get_bdevs", 00:09:24.658 "bdev_reset_iostat", 00:09:24.658 "bdev_get_iostat", 00:09:24.658 "bdev_examine", 00:09:24.658 "bdev_wait_for_examine", 00:09:24.658 
"bdev_set_options", 00:09:24.658 "scsi_get_devices", 00:09:24.658 "thread_set_cpumask", 00:09:24.658 "framework_get_scheduler", 00:09:24.658 "framework_set_scheduler", 00:09:24.658 "framework_get_reactors", 00:09:24.658 "thread_get_io_channels", 00:09:24.658 "thread_get_pollers", 00:09:24.658 "thread_get_stats", 00:09:24.658 "framework_monitor_context_switch", 00:09:24.658 "spdk_kill_instance", 00:09:24.658 "log_enable_timestamps", 00:09:24.658 "log_get_flags", 00:09:24.658 "log_clear_flag", 00:09:24.658 "log_set_flag", 00:09:24.658 "log_get_level", 00:09:24.658 "log_set_level", 00:09:24.658 "log_get_print_level", 00:09:24.658 "log_set_print_level", 00:09:24.658 "framework_enable_cpumask_locks", 00:09:24.658 "framework_disable_cpumask_locks", 00:09:24.658 "framework_wait_init", 00:09:24.658 "framework_start_init", 00:09:24.658 "virtio_blk_create_transport", 00:09:24.658 "virtio_blk_get_transports", 00:09:24.658 "vhost_controller_set_coalescing", 00:09:24.658 "vhost_get_controllers", 00:09:24.658 "vhost_delete_controller", 00:09:24.658 "vhost_create_blk_controller", 00:09:24.658 "vhost_scsi_controller_remove_target", 00:09:24.658 "vhost_scsi_controller_add_target", 00:09:24.658 "vhost_start_scsi_controller", 00:09:24.658 "vhost_create_scsi_controller", 00:09:24.658 "ublk_recover_disk", 00:09:24.658 "ublk_get_disks", 00:09:24.658 "ublk_stop_disk", 00:09:24.658 "ublk_start_disk", 00:09:24.658 "ublk_destroy_target", 00:09:24.658 "ublk_create_target", 00:09:24.658 "nbd_get_disks", 00:09:24.658 "nbd_stop_disk", 00:09:24.658 "nbd_start_disk", 00:09:24.658 "env_dpdk_get_mem_stats", 00:09:24.658 "nvmf_subsystem_get_listeners", 00:09:24.658 "nvmf_subsystem_get_qpairs", 00:09:24.658 "nvmf_subsystem_get_controllers", 00:09:24.658 "nvmf_get_stats", 00:09:24.658 "nvmf_get_transports", 00:09:24.658 "nvmf_create_transport", 00:09:24.658 "nvmf_get_targets", 00:09:24.658 "nvmf_delete_target", 00:09:24.658 "nvmf_create_target", 00:09:24.658 "nvmf_subsystem_allow_any_host", 00:09:24.658 "nvmf_subsystem_remove_host", 00:09:24.658 "nvmf_subsystem_add_host", 00:09:24.658 "nvmf_subsystem_remove_ns", 00:09:24.658 "nvmf_subsystem_add_ns", 00:09:24.658 "nvmf_subsystem_listener_set_ana_state", 00:09:24.658 "nvmf_discovery_get_referrals", 00:09:24.658 "nvmf_discovery_remove_referral", 00:09:24.658 "nvmf_discovery_add_referral", 00:09:24.658 "nvmf_subsystem_remove_listener", 00:09:24.658 "nvmf_subsystem_add_listener", 00:09:24.658 "nvmf_delete_subsystem", 00:09:24.658 "nvmf_create_subsystem", 00:09:24.658 "nvmf_get_subsystems", 00:09:24.658 "nvmf_set_crdt", 00:09:24.658 "nvmf_set_config", 00:09:24.658 "nvmf_set_max_subsystems", 00:09:24.658 "iscsi_set_options", 00:09:24.658 "iscsi_get_auth_groups", 00:09:24.658 "iscsi_auth_group_remove_secret", 00:09:24.658 "iscsi_auth_group_add_secret", 00:09:24.658 "iscsi_delete_auth_group", 00:09:24.658 "iscsi_create_auth_group", 00:09:24.658 "iscsi_set_discovery_auth", 00:09:24.658 "iscsi_get_options", 00:09:24.658 "iscsi_target_node_request_logout", 00:09:24.658 "iscsi_target_node_set_redirect", 00:09:24.658 "iscsi_target_node_set_auth", 00:09:24.658 "iscsi_target_node_add_lun", 00:09:24.658 "iscsi_get_connections", 00:09:24.658 "iscsi_portal_group_set_auth", 00:09:24.658 "iscsi_start_portal_group", 00:09:24.658 "iscsi_delete_portal_group", 00:09:24.658 "iscsi_create_portal_group", 00:09:24.658 "iscsi_get_portal_groups", 00:09:24.658 "iscsi_delete_target_node", 00:09:24.658 "iscsi_target_node_remove_pg_ig_maps", 00:09:24.658 "iscsi_target_node_add_pg_ig_maps", 00:09:24.658 
"iscsi_create_target_node", 00:09:24.658 "iscsi_get_target_nodes", 00:09:24.658 "iscsi_delete_initiator_group", 00:09:24.658 "iscsi_initiator_group_remove_initiators", 00:09:24.658 "iscsi_initiator_group_add_initiators", 00:09:24.658 "iscsi_create_initiator_group", 00:09:24.658 "iscsi_get_initiator_groups", 00:09:24.658 "iaa_scan_accel_module", 00:09:24.658 "dsa_scan_accel_module", 00:09:24.658 "ioat_scan_accel_module", 00:09:24.658 "accel_error_inject_error", 00:09:24.658 "bdev_iscsi_delete", 00:09:24.658 "bdev_iscsi_create", 00:09:24.658 "bdev_iscsi_set_options", 00:09:24.658 "bdev_virtio_attach_controller", 00:09:24.658 "bdev_virtio_scsi_get_devices", 00:09:24.658 "bdev_virtio_detach_controller", 00:09:24.658 "bdev_virtio_blk_set_hotplug", 00:09:24.658 "bdev_ftl_set_property", 00:09:24.658 "bdev_ftl_get_properties", 00:09:24.658 "bdev_ftl_get_stats", 00:09:24.658 "bdev_ftl_unmap", 00:09:24.658 "bdev_ftl_unload", 00:09:24.658 "bdev_ftl_delete", 00:09:24.658 "bdev_ftl_load", 00:09:24.658 "bdev_ftl_create", 00:09:24.658 "bdev_aio_delete", 00:09:24.658 "bdev_aio_rescan", 00:09:24.658 "bdev_aio_create", 00:09:24.658 "blobfs_create", 00:09:24.658 "blobfs_detect", 00:09:24.658 "blobfs_set_cache_size", 00:09:24.658 "bdev_zone_block_delete", 00:09:24.658 "bdev_zone_block_create", 00:09:24.658 "bdev_delay_delete", 00:09:24.658 "bdev_delay_create", 00:09:24.658 "bdev_delay_update_latency", 00:09:24.658 "bdev_split_delete", 00:09:24.658 "bdev_split_create", 00:09:24.658 "bdev_error_inject_error", 00:09:24.658 "bdev_error_delete", 00:09:24.658 "bdev_error_create", 00:09:24.658 "bdev_raid_set_options", 00:09:24.658 "bdev_raid_remove_base_bdev", 00:09:24.658 "bdev_raid_add_base_bdev", 00:09:24.658 "bdev_raid_delete", 00:09:24.658 "bdev_raid_create", 00:09:24.658 "bdev_raid_get_bdevs", 00:09:24.658 "bdev_lvol_grow_lvstore", 00:09:24.658 "bdev_lvol_get_lvols", 00:09:24.658 "bdev_lvol_get_lvstores", 00:09:24.658 "bdev_lvol_delete", 00:09:24.658 "bdev_lvol_set_read_only", 00:09:24.658 "bdev_lvol_resize", 00:09:24.658 "bdev_lvol_decouple_parent", 00:09:24.658 "bdev_lvol_inflate", 00:09:24.658 "bdev_lvol_rename", 00:09:24.658 "bdev_lvol_clone_bdev", 00:09:24.658 "bdev_lvol_clone", 00:09:24.658 "bdev_lvol_snapshot", 00:09:24.658 "bdev_lvol_create", 00:09:24.658 "bdev_lvol_delete_lvstore", 00:09:24.658 "bdev_lvol_rename_lvstore", 00:09:24.658 "bdev_lvol_create_lvstore", 00:09:24.658 "bdev_passthru_delete", 00:09:24.658 "bdev_passthru_create", 00:09:24.658 "bdev_nvme_cuse_unregister", 00:09:24.658 "bdev_nvme_cuse_register", 00:09:24.658 "bdev_opal_new_user", 00:09:24.658 "bdev_opal_set_lock_state", 00:09:24.658 "bdev_opal_delete", 00:09:24.658 "bdev_opal_get_info", 00:09:24.658 "bdev_opal_create", 00:09:24.658 "bdev_nvme_opal_revert", 00:09:24.658 "bdev_nvme_opal_init", 00:09:24.658 "bdev_nvme_send_cmd", 00:09:24.658 "bdev_nvme_get_path_iostat", 00:09:24.658 "bdev_nvme_get_mdns_discovery_info", 00:09:24.658 "bdev_nvme_stop_mdns_discovery", 00:09:24.658 "bdev_nvme_start_mdns_discovery", 00:09:24.658 "bdev_nvme_set_multipath_policy", 00:09:24.658 "bdev_nvme_set_preferred_path", 00:09:24.658 "bdev_nvme_get_io_paths", 00:09:24.658 "bdev_nvme_remove_error_injection", 00:09:24.658 "bdev_nvme_add_error_injection", 00:09:24.658 "bdev_nvme_get_discovery_info", 00:09:24.658 "bdev_nvme_stop_discovery", 00:09:24.658 "bdev_nvme_start_discovery", 00:09:24.658 "bdev_nvme_get_controller_health_info", 00:09:24.658 "bdev_nvme_disable_controller", 00:09:24.658 "bdev_nvme_enable_controller", 00:09:24.658 
"bdev_nvme_reset_controller", 00:09:24.658 "bdev_nvme_get_transport_statistics", 00:09:24.658 "bdev_nvme_apply_firmware", 00:09:24.658 "bdev_nvme_detach_controller", 00:09:24.658 "bdev_nvme_get_controllers", 00:09:24.659 "bdev_nvme_attach_controller", 00:09:24.659 "bdev_nvme_set_hotplug", 00:09:24.659 "bdev_nvme_set_options", 00:09:24.659 "bdev_null_resize", 00:09:24.659 "bdev_null_delete", 00:09:24.659 "bdev_null_create", 00:09:24.659 "bdev_malloc_delete", 00:09:24.659 "bdev_malloc_create" 00:09:24.659 ] 00:09:24.659 11:19:42 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:24.659 11:19:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.659 11:19:42 -- common/autotest_common.sh@10 -- # set +x 00:09:24.916 11:19:42 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:24.916 11:19:42 -- spdkcli/tcp.sh@38 -- # killprocess 73014 00:09:24.916 11:19:42 -- common/autotest_common.sh@936 -- # '[' -z 73014 ']' 00:09:24.916 11:19:42 -- common/autotest_common.sh@940 -- # kill -0 73014 00:09:24.916 11:19:42 -- common/autotest_common.sh@941 -- # uname 00:09:24.916 11:19:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.916 11:19:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73014 00:09:24.916 killing process with pid 73014 00:09:24.916 11:19:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:24.916 11:19:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:24.916 11:19:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73014' 00:09:24.916 11:19:42 -- common/autotest_common.sh@955 -- # kill 73014 00:09:24.916 11:19:42 -- common/autotest_common.sh@960 -- # wait 73014 00:09:25.175 ************************************ 00:09:25.175 END TEST spdkcli_tcp 00:09:25.175 ************************************ 00:09:25.175 00:09:25.175 real 0m1.889s 00:09:25.175 user 0m3.483s 00:09:25.175 sys 0m0.487s 00:09:25.175 11:19:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:25.175 11:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:25.175 11:19:43 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:25.175 11:19:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.175 11:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.175 11:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:25.175 ************************************ 00:09:25.175 START TEST dpdk_mem_utility 00:09:25.175 ************************************ 00:09:25.175 11:19:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:25.175 * Looking for test storage... 
00:09:25.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:25.175 11:19:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:25.175 11:19:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:25.175 11:19:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:25.435 11:19:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:25.435 11:19:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:25.435 11:19:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:25.435 11:19:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:25.435 11:19:43 -- scripts/common.sh@335 -- # IFS=.-: 00:09:25.435 11:19:43 -- scripts/common.sh@335 -- # read -ra ver1 00:09:25.435 11:19:43 -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.435 11:19:43 -- scripts/common.sh@336 -- # read -ra ver2 00:09:25.435 11:19:43 -- scripts/common.sh@337 -- # local 'op=<' 00:09:25.435 11:19:43 -- scripts/common.sh@339 -- # ver1_l=2 00:09:25.435 11:19:43 -- scripts/common.sh@340 -- # ver2_l=1 00:09:25.435 11:19:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:25.435 11:19:43 -- scripts/common.sh@343 -- # case "$op" in 00:09:25.435 11:19:43 -- scripts/common.sh@344 -- # : 1 00:09:25.435 11:19:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:25.435 11:19:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.435 11:19:43 -- scripts/common.sh@364 -- # decimal 1 00:09:25.435 11:19:43 -- scripts/common.sh@352 -- # local d=1 00:09:25.435 11:19:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.435 11:19:43 -- scripts/common.sh@354 -- # echo 1 00:09:25.435 11:19:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:25.435 11:19:43 -- scripts/common.sh@365 -- # decimal 2 00:09:25.435 11:19:43 -- scripts/common.sh@352 -- # local d=2 00:09:25.435 11:19:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.435 11:19:43 -- scripts/common.sh@354 -- # echo 2 00:09:25.435 11:19:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:25.435 11:19:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:25.435 11:19:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:25.435 11:19:43 -- scripts/common.sh@367 -- # return 0 00:09:25.435 11:19:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.435 11:19:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.435 --rc genhtml_branch_coverage=1 00:09:25.435 --rc genhtml_function_coverage=1 00:09:25.435 --rc genhtml_legend=1 00:09:25.435 --rc geninfo_all_blocks=1 00:09:25.435 --rc geninfo_unexecuted_blocks=1 00:09:25.435 00:09:25.435 ' 00:09:25.435 11:19:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.435 --rc genhtml_branch_coverage=1 00:09:25.435 --rc genhtml_function_coverage=1 00:09:25.435 --rc genhtml_legend=1 00:09:25.435 --rc geninfo_all_blocks=1 00:09:25.435 --rc geninfo_unexecuted_blocks=1 00:09:25.435 00:09:25.435 ' 00:09:25.435 11:19:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.435 --rc genhtml_branch_coverage=1 00:09:25.435 --rc genhtml_function_coverage=1 00:09:25.435 --rc genhtml_legend=1 00:09:25.435 --rc geninfo_all_blocks=1 00:09:25.435 --rc geninfo_unexecuted_blocks=1 00:09:25.435 00:09:25.435 ' 
00:09:25.435 11:19:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:25.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.435 --rc genhtml_branch_coverage=1 00:09:25.435 --rc genhtml_function_coverage=1 00:09:25.435 --rc genhtml_legend=1 00:09:25.435 --rc geninfo_all_blocks=1 00:09:25.435 --rc geninfo_unexecuted_blocks=1 00:09:25.435 00:09:25.435 ' 00:09:25.435 11:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:25.435 11:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73108 00:09:25.435 11:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73108 00:09:25.435 11:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:25.435 11:19:43 -- common/autotest_common.sh@829 -- # '[' -z 73108 ']' 00:09:25.435 11:19:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.435 11:19:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.435 11:19:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.435 11:19:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.435 11:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:25.435 [2024-11-26 11:19:43.561461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:25.435 [2024-11-26 11:19:43.562202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73108 ] 00:09:25.694 [2024-11-26 11:19:43.733520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.694 [2024-11-26 11:19:43.771083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.694 [2024-11-26 11:19:43.771378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.630 11:19:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.630 11:19:44 -- common/autotest_common.sh@862 -- # return 0 00:09:26.630 11:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:26.630 11:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:26.630 11:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.630 11:19:44 -- common/autotest_common.sh@10 -- # set +x 00:09:26.630 { 00:09:26.630 "filename": "/tmp/spdk_mem_dump.txt" 00:09:26.630 } 00:09:26.630 11:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.630 11:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:26.630 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:26.630 1 heaps totaling size 814.000000 MiB 00:09:26.630 size: 814.000000 MiB heap id: 0 00:09:26.630 end heaps---------- 00:09:26.630 8 mempools totaling size 598.116089 MiB 00:09:26.630 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:26.630 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:26.630 size: 84.521057 MiB name: bdev_io_73108 00:09:26.630 size: 51.011292 MiB name: evtpool_73108 00:09:26.630 size: 50.003479 MiB name: msgpool_73108 
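The dump beginning above is driven by the two steps visible in the trace: the env_dpdk_get_mem_stats RPC, which makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which post-processes that file. A hedged sketch of reproducing it against a running target, assuming the same tree layout as this run:

    # ask the target to dump its DPDK heap state (writes /tmp/spdk_mem_dump.txt)
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones
    ./scripts/dpdk_mem_info.py
    # per-element detail; -m 0 evidently selects heap id 0, as in the listing that follows
    ./scripts/dpdk_mem_info.py -m 0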
00:09:26.630 size: 21.763794 MiB name: PDU_Pool 00:09:26.630 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:26.631 size: 0.026123 MiB name: Session_Pool 00:09:26.631 end mempools------- 00:09:26.631 6 memzones totaling size 4.142822 MiB 00:09:26.631 size: 1.000366 MiB name: RG_ring_0_73108 00:09:26.631 size: 1.000366 MiB name: RG_ring_1_73108 00:09:26.631 size: 1.000366 MiB name: RG_ring_4_73108 00:09:26.631 size: 1.000366 MiB name: RG_ring_5_73108 00:09:26.631 size: 0.125366 MiB name: RG_ring_2_73108 00:09:26.631 size: 0.015991 MiB name: RG_ring_3_73108 00:09:26.631 end memzones------- 00:09:26.631 11:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:26.631 heap id: 0 total size: 814.000000 MiB number of busy elements: 312 number of free elements: 15 00:09:26.631 list of free elements. size: 12.469727 MiB 00:09:26.631 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:26.631 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:26.631 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:26.631 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:26.631 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:26.631 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:26.631 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:26.631 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:26.631 element at address: 0x200000200000 with size: 0.832825 MiB 00:09:26.631 element at address: 0x20001aa00000 with size: 0.567505 MiB 00:09:26.631 element at address: 0x20000b200000 with size: 0.488892 MiB 00:09:26.631 element at address: 0x200000800000 with size: 0.486145 MiB 00:09:26.631 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:26.631 element at address: 0x200027e00000 with size: 0.395752 MiB 00:09:26.631 element at address: 0x200003a00000 with size: 0.347839 MiB 00:09:26.631 list of standard malloc elements. 
size: 199.267700 MiB 00:09:26.631 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:26.631 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:26.631 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:26.631 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:26.631 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:26.631 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:26.631 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:26.631 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:26.631 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:26.631 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:09:26.631 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087c740 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087c800 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087c980 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59180 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59240 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59300 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59480 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59540 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59600 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59780 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59840 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59900 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:26.631 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:26.631 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91780 
with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93c40 with size: 0.000183 MiB 
00:09:26.632 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:26.632 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e65500 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:26.632 element at 
address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:26.632 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f300 
with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:26.633 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:26.633 list of memzone associated elements. size: 602.262573 MiB 00:09:26.633 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:26.633 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:26.633 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:26.633 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:26.633 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:26.633 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73108_0 00:09:26.633 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:26.633 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73108_0 00:09:26.633 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:26.633 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73108_0 00:09:26.633 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:26.633 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:26.633 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:26.633 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:26.633 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:26.633 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73108 00:09:26.633 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:26.633 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73108 00:09:26.633 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:26.633 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73108 00:09:26.633 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:26.633 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:26.633 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:26.633 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:26.633 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:26.633 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:26.633 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:26.633 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:26.633 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:26.633 associated memzone info: size: 
1.000366 MiB name: RG_ring_0_73108 00:09:26.633 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:26.633 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73108 00:09:26.633 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:26.633 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73108 00:09:26.633 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:26.633 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73108 00:09:26.633 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:26.633 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73108 00:09:26.633 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:26.633 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:26.633 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:26.633 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:26.633 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:26.633 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:26.633 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:26.633 associated memzone info: size: 0.125366 MiB name: RG_ring_2_73108 00:09:26.633 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:26.633 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:26.633 element at address: 0x200027e65680 with size: 0.023743 MiB 00:09:26.633 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:26.633 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:26.633 associated memzone info: size: 0.015991 MiB name: RG_ring_3_73108 00:09:26.633 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:09:26.633 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:26.633 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:09:26.633 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73108 00:09:26.633 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:26.633 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73108 00:09:26.633 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:09:26.633 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:26.633 11:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:26.633 11:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73108 00:09:26.633 11:19:44 -- common/autotest_common.sh@936 -- # '[' -z 73108 ']' 00:09:26.633 11:19:44 -- common/autotest_common.sh@940 -- # kill -0 73108 00:09:26.633 11:19:44 -- common/autotest_common.sh@941 -- # uname 00:09:26.633 11:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:26.633 11:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73108 00:09:26.633 11:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:26.633 11:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:26.633 11:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73108' 00:09:26.633 killing process with pid 73108 00:09:26.633 11:19:44 -- common/autotest_common.sh@955 -- # kill 73108 00:09:26.633 11:19:44 -- common/autotest_common.sh@960 -- # wait 73108 00:09:26.892 00:09:26.892 real 0m1.672s 00:09:26.892 user 0m1.803s 00:09:26.892 sys 0m0.441s 00:09:26.892 ************************************ 00:09:26.892 END TEST 
dpdk_mem_utility 00:09:26.892 ************************************ 00:09:26.892 11:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:26.892 11:19:44 -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 11:19:45 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:26.892 11:19:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:26.892 11:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.892 11:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 ************************************ 00:09:26.892 START TEST event 00:09:26.892 ************************************ 00:09:26.892 11:19:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:26.892 * Looking for test storage... 00:09:26.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:26.892 11:19:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:26.892 11:19:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:26.892 11:19:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:27.152 11:19:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:27.152 11:19:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:27.152 11:19:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:27.152 11:19:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:27.152 11:19:45 -- scripts/common.sh@335 -- # IFS=.-: 00:09:27.152 11:19:45 -- scripts/common.sh@335 -- # read -ra ver1 00:09:27.152 11:19:45 -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.152 11:19:45 -- scripts/common.sh@336 -- # read -ra ver2 00:09:27.152 11:19:45 -- scripts/common.sh@337 -- # local 'op=<' 00:09:27.152 11:19:45 -- scripts/common.sh@339 -- # ver1_l=2 00:09:27.152 11:19:45 -- scripts/common.sh@340 -- # ver2_l=1 00:09:27.152 11:19:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:27.152 11:19:45 -- scripts/common.sh@343 -- # case "$op" in 00:09:27.152 11:19:45 -- scripts/common.sh@344 -- # : 1 00:09:27.152 11:19:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:27.152 11:19:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.152 11:19:45 -- scripts/common.sh@364 -- # decimal 1 00:09:27.152 11:19:45 -- scripts/common.sh@352 -- # local d=1 00:09:27.152 11:19:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.152 11:19:45 -- scripts/common.sh@354 -- # echo 1 00:09:27.152 11:19:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:27.152 11:19:45 -- scripts/common.sh@365 -- # decimal 2 00:09:27.152 11:19:45 -- scripts/common.sh@352 -- # local d=2 00:09:27.152 11:19:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.152 11:19:45 -- scripts/common.sh@354 -- # echo 2 00:09:27.152 11:19:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:27.152 11:19:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:27.152 11:19:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:27.152 11:19:45 -- scripts/common.sh@367 -- # return 0 00:09:27.152 11:19:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.152 11:19:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 11:19:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 11:19:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 11:19:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:27.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.152 --rc genhtml_branch_coverage=1 00:09:27.152 --rc genhtml_function_coverage=1 00:09:27.152 --rc genhtml_legend=1 00:09:27.152 --rc geninfo_all_blocks=1 00:09:27.152 --rc geninfo_unexecuted_blocks=1 00:09:27.152 00:09:27.152 ' 00:09:27.152 11:19:45 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:27.152 11:19:45 -- bdev/nbd_common.sh@6 -- # set -e 00:09:27.152 11:19:45 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:27.152 11:19:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:27.152 11:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.152 11:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:27.152 ************************************ 00:09:27.152 START TEST event_perf 00:09:27.152 ************************************ 00:09:27.152 11:19:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:27.152 Running I/O for 1 seconds...[2024-11-26 11:19:45.265691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
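The event_perf banner above corresponds to the invocation recorded earlier in the trace, test/event/event_perf/event_perf -m 0xF -t 1: the 0xF core mask places one reactor on each of cores 0-3, and -t 1 runs the event loop for one second before each lcore reports how many events it processed. A sketch of a standalone run, assuming a built tree at the path used in this log and root privileges for hugepage setup:

    cd /home/vagrant/spdk_repo/spdk
    # expect one 'lcore N: <count>' line per core after the 1-second run
    sudo ./test/event/event_perf/event_perf -m 0xF -t 1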
00:09:27.152 [2024-11-26 11:19:45.265891] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73187 ] 00:09:27.411 [2024-11-26 11:19:45.436819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.411 [2024-11-26 11:19:45.478917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.411 [2024-11-26 11:19:45.479084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.411 [2024-11-26 11:19:45.479156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.411 [2024-11-26 11:19:45.479257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.346 Running I/O for 1 seconds... 00:09:28.346 lcore 0: 181692 00:09:28.346 lcore 1: 181691 00:09:28.346 lcore 2: 181691 00:09:28.346 lcore 3: 181690 00:09:28.346 done. 00:09:28.346 00:09:28.346 real 0m1.339s 00:09:28.346 user 0m4.130s 00:09:28.346 sys 0m0.109s 00:09:28.346 11:19:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.346 11:19:46 -- common/autotest_common.sh@10 -- # set +x 00:09:28.346 ************************************ 00:09:28.346 END TEST event_perf 00:09:28.346 ************************************ 00:09:28.640 11:19:46 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:28.640 11:19:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:28.640 11:19:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.640 11:19:46 -- common/autotest_common.sh@10 -- # set +x 00:09:28.640 ************************************ 00:09:28.640 START TEST event_reactor 00:09:28.640 ************************************ 00:09:28.640 11:19:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:28.640 [2024-11-26 11:19:46.649313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:28.640 [2024-11-26 11:19:46.649535] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73232 ] 00:09:28.640 [2024-11-26 11:19:46.809264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.640 [2024-11-26 11:19:46.848058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.041 test_start 00:09:30.041 oneshot 00:09:30.041 tick 100 00:09:30.041 tick 100 00:09:30.041 tick 250 00:09:30.041 tick 100 00:09:30.041 tick 100 00:09:30.041 tick 100 00:09:30.041 tick 250 00:09:30.041 tick 500 00:09:30.041 tick 100 00:09:30.041 tick 100 00:09:30.041 tick 250 00:09:30.041 tick 100 00:09:30.041 tick 100 00:09:30.041 test_end 00:09:30.041 00:09:30.041 real 0m1.308s 00:09:30.041 user 0m1.121s 00:09:30.041 sys 0m0.087s 00:09:30.041 11:19:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:30.042 ************************************ 00:09:30.042 END TEST event_reactor 00:09:30.042 ************************************ 00:09:30.042 11:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 11:19:47 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:30.042 11:19:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:30.042 11:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.042 11:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 ************************************ 00:09:30.042 START TEST event_reactor_perf 00:09:30.042 ************************************ 00:09:30.042 11:19:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:30.042 [2024-11-26 11:19:48.022362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
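The test_start / tick 100 / tick 250 / tick 500 / test_end block above is the reactor test app, which appears to register timed pollers at three different periods and log each expiration during its one-second window; reactor_perf, whose banner closes the line above, instead measures how many raw events a single reactor can pump through and prints an events-per-second figure (visible below). A sketch of running both by hand, under the same assumptions as the event_perf example:

    # oneshot plus periodic poller ticks for one second
    sudo ./test/event/reactor/reactor -t 1
    # prints 'Performance: N events per second'
    sudo ./test/event/reactor_perf/reactor_perf -t 1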
00:09:30.042 [2024-11-26 11:19:48.022603] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73263 ] 00:09:30.042 [2024-11-26 11:19:48.195691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.042 [2024-11-26 11:19:48.233296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.417 test_start 00:09:31.417 test_end 00:09:31.417 Performance: 271254 events per second 00:09:31.417 00:09:31.417 real 0m1.320s 00:09:31.417 user 0m1.141s 00:09:31.417 sys 0m0.078s 00:09:31.417 11:19:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.417 ************************************ 00:09:31.417 END TEST event_reactor_perf 00:09:31.417 ************************************ 00:09:31.417 11:19:49 -- common/autotest_common.sh@10 -- # set +x 00:09:31.417 11:19:49 -- event/event.sh@49 -- # uname -s 00:09:31.417 11:19:49 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:31.418 11:19:49 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:31.418 11:19:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:31.418 11:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.418 11:19:49 -- common/autotest_common.sh@10 -- # set +x 00:09:31.418 ************************************ 00:09:31.418 START TEST event_scheduler 00:09:31.418 ************************************ 00:09:31.418 11:19:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:31.418 * Looking for test storage... 00:09:31.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:31.418 11:19:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:31.418 11:19:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:31.418 11:19:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:31.418 11:19:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:31.418 11:19:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:31.418 11:19:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:31.418 11:19:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:31.418 11:19:49 -- scripts/common.sh@335 -- # IFS=.-: 00:09:31.418 11:19:49 -- scripts/common.sh@335 -- # read -ra ver1 00:09:31.418 11:19:49 -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.418 11:19:49 -- scripts/common.sh@336 -- # read -ra ver2 00:09:31.418 11:19:49 -- scripts/common.sh@337 -- # local 'op=<' 00:09:31.418 11:19:49 -- scripts/common.sh@339 -- # ver1_l=2 00:09:31.418 11:19:49 -- scripts/common.sh@340 -- # ver2_l=1 00:09:31.418 11:19:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:31.418 11:19:49 -- scripts/common.sh@343 -- # case "$op" in 00:09:31.418 11:19:49 -- scripts/common.sh@344 -- # : 1 00:09:31.418 11:19:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:31.418 11:19:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.418 11:19:49 -- scripts/common.sh@364 -- # decimal 1 00:09:31.418 11:19:49 -- scripts/common.sh@352 -- # local d=1 00:09:31.418 11:19:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.418 11:19:49 -- scripts/common.sh@354 -- # echo 1 00:09:31.418 11:19:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:31.418 11:19:49 -- scripts/common.sh@365 -- # decimal 2 00:09:31.418 11:19:49 -- scripts/common.sh@352 -- # local d=2 00:09:31.418 11:19:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.418 11:19:49 -- scripts/common.sh@354 -- # echo 2 00:09:31.418 11:19:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:31.418 11:19:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:31.418 11:19:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:31.418 11:19:49 -- scripts/common.sh@367 -- # return 0 00:09:31.418 11:19:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.418 11:19:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.418 --rc genhtml_branch_coverage=1 00:09:31.418 --rc genhtml_function_coverage=1 00:09:31.418 --rc genhtml_legend=1 00:09:31.418 --rc geninfo_all_blocks=1 00:09:31.418 --rc geninfo_unexecuted_blocks=1 00:09:31.418 00:09:31.418 ' 00:09:31.418 11:19:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.418 --rc genhtml_branch_coverage=1 00:09:31.418 --rc genhtml_function_coverage=1 00:09:31.418 --rc genhtml_legend=1 00:09:31.418 --rc geninfo_all_blocks=1 00:09:31.418 --rc geninfo_unexecuted_blocks=1 00:09:31.418 00:09:31.418 ' 00:09:31.418 11:19:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.418 --rc genhtml_branch_coverage=1 00:09:31.418 --rc genhtml_function_coverage=1 00:09:31.418 --rc genhtml_legend=1 00:09:31.418 --rc geninfo_all_blocks=1 00:09:31.418 --rc geninfo_unexecuted_blocks=1 00:09:31.418 00:09:31.418 ' 00:09:31.418 11:19:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.418 --rc genhtml_branch_coverage=1 00:09:31.418 --rc genhtml_function_coverage=1 00:09:31.418 --rc genhtml_legend=1 00:09:31.418 --rc geninfo_all_blocks=1 00:09:31.418 --rc geninfo_unexecuted_blocks=1 00:09:31.418 00:09:31.418 ' 00:09:31.418 11:19:49 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:31.418 11:19:49 -- scheduler/scheduler.sh@35 -- # scheduler_pid=73327 00:09:31.418 11:19:49 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.418 11:19:49 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:31.418 11:19:49 -- scheduler/scheduler.sh@37 -- # waitforlisten 73327 00:09:31.418 11:19:49 -- common/autotest_common.sh@829 -- # '[' -z 73327 ']' 00:09:31.418 11:19:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.418 11:19:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.418 11:19:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
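scheduler.sh launches its helper app above with -m 0xF -p 0x2 --wait-for-rpc -f: four reactors, the main lcore on core 2 (note --main-lcore=2 in the EAL line below), and initialization held back until RPCs arrive. The trace that follows then flips the framework to the dynamic scheduler and creates pinned threads through a test-local rpc.py plugin. A hedged sketch of the same RPC sequence, assuming the default socket and that the plugin is importable:

    # must be issued before framework_start_init while the app waits for RPC
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # name / cpu mask / active-time percentage, mirroring the calls in the trace
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

scheduler_plugin here is the plugin shipped with the scheduler test, not a stock SPDK module; loading it typically requires the test directory on PYTHONPATH.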
00:09:31.418 11:19:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.418 11:19:49 -- common/autotest_common.sh@10 -- # set +x 00:09:31.418 [2024-11-26 11:19:49.590658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:31.418 [2024-11-26 11:19:49.590859] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73327 ] 00:09:31.677 [2024-11-26 11:19:49.762102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.677 [2024-11-26 11:19:49.806399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.677 [2024-11-26 11:19:49.806537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.677 [2024-11-26 11:19:49.807250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.677 [2024-11-26 11:19:49.807316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.613 11:19:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.613 11:19:50 -- common/autotest_common.sh@862 -- # return 0 00:09:32.613 11:19:50 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:32.613 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.613 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.613 POWER: Env isn't set yet! 00:09:32.613 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:32.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.613 POWER: Cannot set governor of lcore 0 to userspace 00:09:32.613 POWER: Attempting to initialise PSTAT power management... 00:09:32.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.613 POWER: Cannot set governor of lcore 0 to performance 00:09:32.613 POWER: Attempting to initialise AMD PSTATE power management... 00:09:32.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.613 POWER: Cannot set governor of lcore 0 to userspace 00:09:32.613 POWER: Attempting to initialise CPPC power management... 00:09:32.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.613 POWER: Cannot set governor of lcore 0 to userspace 00:09:32.613 POWER: Attempting to initialise VM power management... 
00:09:32.613 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:32.613 POWER: Unable to set Power Management Environment for lcore 0 00:09:32.613 [2024-11-26 11:19:50.541112] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:32.613 [2024-11-26 11:19:50.541154] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:32.613 [2024-11-26 11:19:50.541168] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:32.613 [2024-11-26 11:19:50.541241] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:32.613 [2024-11-26 11:19:50.541257] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:32.613 [2024-11-26 11:19:50.541285] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:32.613 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.613 11:19:50 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 [2024-11-26 11:19:50.590160] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:32.614 11:19:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.614 11:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 ************************************ 00:09:32.614 START TEST scheduler_create_thread 00:09:32.614 ************************************ 00:09:32.614 11:19:50 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 2 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 3 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 4 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 5 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 6 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 7 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 8 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 9 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 10 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.614 11:19:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:32.614 11:19:50 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:32.614 11:19:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.614 11:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:33.548 ************************************ 00:09:33.548 END TEST scheduler_create_thread 00:09:33.548 ************************************ 00:09:33.548 11:19:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.548 00:09:33.548 real 0m1.167s 00:09:33.548 user 0m0.015s 00:09:33.548 sys 0m0.009s 00:09:33.548 11:19:51 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.548 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:09:33.806 11:19:51 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:33.806 11:19:51 -- scheduler/scheduler.sh@46 -- # killprocess 73327 00:09:33.806 11:19:51 -- common/autotest_common.sh@936 -- # '[' -z 73327 ']' 00:09:33.806 11:19:51 -- common/autotest_common.sh@940 -- # kill -0 73327 00:09:33.806 11:19:51 -- common/autotest_common.sh@941 -- # uname 00:09:33.806 11:19:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:33.806 11:19:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73327 00:09:33.806 killing process with pid 73327 00:09:33.807 11:19:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:33.807 11:19:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:33.807 11:19:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73327' 00:09:33.807 11:19:51 -- common/autotest_common.sh@955 -- # kill 73327 00:09:33.807 11:19:51 -- common/autotest_common.sh@960 -- # wait 73327 00:09:34.082 [2024-11-26 11:19:52.248866] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:34.341 00:09:34.341 real 0m3.065s 00:09:34.341 user 0m5.484s 00:09:34.341 sys 0m0.421s 00:09:34.341 11:19:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.341 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.341 ************************************ 00:09:34.341 END TEST event_scheduler 00:09:34.341 ************************************ 00:09:34.341 11:19:52 -- event/event.sh@51 -- # modprobe -n nbd 00:09:34.341 11:19:52 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:34.341 11:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:34.341 11:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.341 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.341 ************************************ 00:09:34.341 START TEST app_repeat 00:09:34.341 ************************************ 00:09:34.341 11:19:52 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:09:34.341 11:19:52 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.341 11:19:52 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.341 11:19:52 -- event/event.sh@13 -- # local nbd_list 00:09:34.341 11:19:52 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:34.341 11:19:52 -- event/event.sh@14 -- # local bdev_list 00:09:34.341 11:19:52 -- event/event.sh@15 -- # local repeat_times=4 00:09:34.341 11:19:52 -- event/event.sh@17 -- # modprobe nbd 00:09:34.341 11:19:52 -- event/event.sh@19 -- # repeat_pid=73411 00:09:34.341 11:19:52 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.341 Process app_repeat pid: 73411 00:09:34.341 11:19:52 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73411' 00:09:34.341 11:19:52 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:34.341 11:19:52 -- event/event.sh@23 -- # for i in {0..2} 00:09:34.341 spdk_app_start Round 0 00:09:34.341 11:19:52 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:34.341 11:19:52 -- event/event.sh@25 -- # waitforlisten 73411 /var/tmp/spdk-nbd.sock 00:09:34.341 11:19:52 -- common/autotest_common.sh@829 -- # '[' -z 73411 ']' 00:09:34.341 11:19:52 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:34.341 11:19:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:34.341 11:19:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:34.341 11:19:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.341 11:19:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.341 [2024-11-26 11:19:52.530170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:34.341 [2024-11-26 11:19:52.530349] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73411 ] 00:09:34.600 [2024-11-26 11:19:52.699448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.600 [2024-11-26 11:19:52.740767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.600 [2024-11-26 11:19:52.740787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.167 11:19:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.167 11:19:53 -- common/autotest_common.sh@862 -- # return 0 00:09:35.167 11:19:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:35.426 Malloc0 00:09:35.426 11:19:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:35.685 Malloc1 00:09:35.685 11:19:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@12 -- # local i 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:35.685 11:19:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:35.943 /dev/nbd0 00:09:35.943 11:19:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:35.943 11:19:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:35.943 11:19:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:35.943 11:19:54 -- common/autotest_common.sh@867 -- # local i 00:09:35.943 11:19:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.944 11:19:54 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.944 11:19:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:35.944 11:19:54 -- common/autotest_common.sh@871 -- # break 00:09:35.944 11:19:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.944 11:19:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.944 11:19:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:35.944 1+0 records in 00:09:35.944 1+0 records out 00:09:35.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194629 s, 21.0 MB/s 00:09:35.944 11:19:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:35.944 11:19:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:35.944 11:19:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:35.944 11:19:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.944 11:19:54 -- common/autotest_common.sh@887 -- # return 0 00:09:35.944 11:19:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.944 11:19:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:35.944 11:19:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:36.202 /dev/nbd1 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:36.202 11:19:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:36.202 11:19:54 -- common/autotest_common.sh@867 -- # local i 00:09:36.202 11:19:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.202 11:19:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.202 11:19:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:36.202 11:19:54 -- common/autotest_common.sh@871 -- # break 00:09:36.202 11:19:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.202 11:19:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.202 11:19:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:36.202 1+0 records in 00:09:36.202 1+0 records out 00:09:36.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238123 s, 17.2 MB/s 00:09:36.202 11:19:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:36.202 11:19:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:36.202 11:19:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:36.202 11:19:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.202 11:19:54 -- common/autotest_common.sh@887 -- # return 0 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.202 11:19:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:36.461 { 00:09:36.461 "nbd_device": "/dev/nbd0", 00:09:36.461 "bdev_name": "Malloc0" 00:09:36.461 }, 00:09:36.461 { 00:09:36.461 "nbd_device": "/dev/nbd1", 
00:09:36.461 "bdev_name": "Malloc1" 00:09:36.461 } 00:09:36.461 ]' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:36.461 { 00:09:36.461 "nbd_device": "/dev/nbd0", 00:09:36.461 "bdev_name": "Malloc0" 00:09:36.461 }, 00:09:36.461 { 00:09:36.461 "nbd_device": "/dev/nbd1", 00:09:36.461 "bdev_name": "Malloc1" 00:09:36.461 } 00:09:36.461 ]' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:36.461 /dev/nbd1' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:36.461 /dev/nbd1' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@65 -- # count=2 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@95 -- # count=2 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:36.461 256+0 records in 00:09:36.461 256+0 records out 00:09:36.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0061928 s, 169 MB/s 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:36.461 256+0 records in 00:09:36.461 256+0 records out 00:09:36.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248485 s, 42.2 MB/s 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.461 11:19:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:36.720 256+0 records in 00:09:36.720 256+0 records out 00:09:36.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344829 s, 30.4 MB/s 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@51 -- # local i 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@41 -- # break 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.720 11:19:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:36.979 11:19:55 -- bdev/nbd_common.sh@41 -- # break 00:09:36.980 11:19:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.980 11:19:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.980 11:19:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@65 -- # true 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@65 -- # count=0 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@104 -- # count=0 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:37.239 11:19:55 -- bdev/nbd_common.sh@109 -- # return 0 00:09:37.239 11:19:55 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:37.498 11:19:55 -- event/event.sh@35 -- # sleep 3 00:09:37.756 [2024-11-26 11:19:55.853702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.756 [2024-11-26 11:19:55.890083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.756 [2024-11-26 
11:19:55.890090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.756 [2024-11-26 11:19:55.923863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:37.756 [2024-11-26 11:19:55.923966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:41.046 11:19:58 -- event/event.sh@23 -- # for i in {0..2} 00:09:41.046 spdk_app_start Round 1 00:09:41.046 11:19:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:41.046 11:19:58 -- event/event.sh@25 -- # waitforlisten 73411 /var/tmp/spdk-nbd.sock 00:09:41.046 11:19:58 -- common/autotest_common.sh@829 -- # '[' -z 73411 ']' 00:09:41.046 11:19:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:41.046 11:19:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:41.046 11:19:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:41.046 11:19:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.046 11:19:58 -- common/autotest_common.sh@10 -- # set +x 00:09:41.046 11:19:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.046 11:19:58 -- common/autotest_common.sh@862 -- # return 0 00:09:41.046 11:19:58 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:41.046 Malloc0 00:09:41.046 11:19:59 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:41.304 Malloc1 00:09:41.304 11:19:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@12 -- # local i 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:41.304 11:19:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:41.567 /dev/nbd0 00:09:41.567 11:19:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:41.567 11:19:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:41.567 11:19:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:41.567 11:19:59 -- common/autotest_common.sh@867 -- # local i 00:09:41.567 11:19:59 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:09:41.567 11:19:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:41.567 11:19:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:41.567 11:19:59 -- common/autotest_common.sh@871 -- # break 00:09:41.567 11:19:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:41.567 11:19:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:41.567 11:19:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:41.567 1+0 records in 00:09:41.567 1+0 records out 00:09:41.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264262 s, 15.5 MB/s 00:09:41.567 11:19:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.567 11:19:59 -- common/autotest_common.sh@884 -- # size=4096 00:09:41.567 11:19:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.567 11:19:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:41.567 11:19:59 -- common/autotest_common.sh@887 -- # return 0 00:09:41.567 11:19:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.567 11:19:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:41.567 11:19:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:41.835 /dev/nbd1 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:41.835 11:19:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:41.835 11:19:59 -- common/autotest_common.sh@867 -- # local i 00:09:41.835 11:19:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:41.835 11:19:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:41.835 11:19:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:41.835 11:19:59 -- common/autotest_common.sh@871 -- # break 00:09:41.835 11:19:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:41.835 11:19:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:41.835 11:19:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:41.835 1+0 records in 00:09:41.835 1+0 records out 00:09:41.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030632 s, 13.4 MB/s 00:09:41.835 11:19:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.835 11:19:59 -- common/autotest_common.sh@884 -- # size=4096 00:09:41.835 11:19:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.835 11:19:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:41.835 11:19:59 -- common/autotest_common.sh@887 -- # return 0 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.835 11:19:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.093 11:20:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:42.093 { 00:09:42.093 "nbd_device": "/dev/nbd0", 00:09:42.093 "bdev_name": "Malloc0" 00:09:42.093 }, 00:09:42.093 { 00:09:42.093 
"nbd_device": "/dev/nbd1", 00:09:42.093 "bdev_name": "Malloc1" 00:09:42.093 } 00:09:42.093 ]' 00:09:42.093 11:20:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:42.093 { 00:09:42.093 "nbd_device": "/dev/nbd0", 00:09:42.093 "bdev_name": "Malloc0" 00:09:42.093 }, 00:09:42.093 { 00:09:42.094 "nbd_device": "/dev/nbd1", 00:09:42.094 "bdev_name": "Malloc1" 00:09:42.094 } 00:09:42.094 ]' 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:42.094 /dev/nbd1' 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:42.094 /dev/nbd1' 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@65 -- # count=2 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@95 -- # count=2 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:42.094 256+0 records in 00:09:42.094 256+0 records out 00:09:42.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00989952 s, 106 MB/s 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:42.094 256+0 records in 00:09:42.094 256+0 records out 00:09:42.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027038 s, 38.8 MB/s 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.094 11:20:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:42.352 256+0 records in 00:09:42.352 256+0 records out 00:09:42.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288703 s, 36.3 MB/s 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:42.352 11:20:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:42.353 11:20:00 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@51 -- # local i 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.353 11:20:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:42.610 11:20:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:42.610 11:20:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:42.610 11:20:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:42.610 11:20:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@41 -- # break 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@41 -- # break 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.611 11:20:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@65 -- # true 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.869 11:20:01 -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.869 11:20:01 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:43.436 11:20:01 -- event/event.sh@35 -- # sleep 3 00:09:43.436 [2024-11-26 11:20:01.512404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.436 [2024-11-26 11:20:01.543836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
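Each app_repeat round traced above has the same shape: create two 64 MiB malloc bdevs, export them as kernel nbd devices, push 1 MiB of random data through each, read it back and compare, tear the devices down, then restart the app via spdk_kill_instance SIGTERM. Condensed to a single device, the flow is roughly this (a sketch reconstructed from the commands in the log, not the test script itself):

  # One nbd write/verify pass; sizes and paths are the ones the trace shows
  # (bdev_malloc_create 64 4096 yields a 64 MiB bdev with 4 KiB blocks).
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096               # -> Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0         # expose the bdev as /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0            # byte-for-byte verification
  rm nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0
  $RPC spdk_kill_instance SIGTERM               # end the round; app_repeat restarts

The oflag=direct on the write is what makes the check meaningful: it forces the data through the nbd device into the malloc bdev instead of parking it in the page cache before the compare runs.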
00:09:43.436 [2024-11-26 11:20:01.543838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.436 [2024-11-26 11:20:01.574303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:43.436 [2024-11-26 11:20:01.574643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:46.724 spdk_app_start Round 2 00:09:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:46.724 11:20:04 -- event/event.sh@23 -- # for i in {0..2} 00:09:46.724 11:20:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:46.724 11:20:04 -- event/event.sh@25 -- # waitforlisten 73411 /var/tmp/spdk-nbd.sock 00:09:46.724 11:20:04 -- common/autotest_common.sh@829 -- # '[' -z 73411 ']' 00:09:46.724 11:20:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:46.724 11:20:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.724 11:20:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:46.724 11:20:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.724 11:20:04 -- common/autotest_common.sh@10 -- # set +x 00:09:46.724 11:20:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.724 11:20:04 -- common/autotest_common.sh@862 -- # return 0 00:09:46.724 11:20:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:46.724 Malloc0 00:09:46.724 11:20:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:46.983 Malloc1 00:09:46.983 11:20:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@12 -- # local i 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.983 11:20:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:47.242 /dev/nbd0 00:09:47.242 11:20:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:47.242 11:20:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:47.242 11:20:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:47.242 11:20:05 -- common/autotest_common.sh@867 -- # local i 00:09:47.242 11:20:05 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:47.242 11:20:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:47.242 11:20:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:47.242 11:20:05 -- common/autotest_common.sh@871 -- # break 00:09:47.242 11:20:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:47.242 11:20:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:47.242 11:20:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:47.242 1+0 records in 00:09:47.242 1+0 records out 00:09:47.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206615 s, 19.8 MB/s 00:09:47.242 11:20:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.242 11:20:05 -- common/autotest_common.sh@884 -- # size=4096 00:09:47.242 11:20:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.242 11:20:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:47.242 11:20:05 -- common/autotest_common.sh@887 -- # return 0 00:09:47.242 11:20:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.242 11:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.242 11:20:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:47.500 /dev/nbd1 00:09:47.500 11:20:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:47.500 11:20:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:47.500 11:20:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:47.500 11:20:05 -- common/autotest_common.sh@867 -- # local i 00:09:47.500 11:20:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:47.500 11:20:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:47.500 11:20:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:47.500 11:20:05 -- common/autotest_common.sh@871 -- # break 00:09:47.500 11:20:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:47.500 11:20:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:47.500 11:20:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:47.500 1+0 records in 00:09:47.500 1+0 records out 00:09:47.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326924 s, 12.5 MB/s 00:09:47.501 11:20:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.501 11:20:05 -- common/autotest_common.sh@884 -- # size=4096 00:09:47.501 11:20:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.501 11:20:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:47.501 11:20:05 -- common/autotest_common.sh@887 -- # return 0 00:09:47.501 11:20:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.501 11:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.501 11:20:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.501 11:20:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.501 11:20:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:47.760 { 00:09:47.760 "nbd_device": "/dev/nbd0", 00:09:47.760 "bdev_name": "Malloc0" 
00:09:47.760 }, 00:09:47.760 { 00:09:47.760 "nbd_device": "/dev/nbd1", 00:09:47.760 "bdev_name": "Malloc1" 00:09:47.760 } 00:09:47.760 ]' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:47.760 { 00:09:47.760 "nbd_device": "/dev/nbd0", 00:09:47.760 "bdev_name": "Malloc0" 00:09:47.760 }, 00:09:47.760 { 00:09:47.760 "nbd_device": "/dev/nbd1", 00:09:47.760 "bdev_name": "Malloc1" 00:09:47.760 } 00:09:47.760 ]' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:47.760 /dev/nbd1' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:47.760 /dev/nbd1' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@65 -- # count=2 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@95 -- # count=2 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:47.760 256+0 records in 00:09:47.760 256+0 records out 00:09:47.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00868386 s, 121 MB/s 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:47.760 256+0 records in 00:09:47.760 256+0 records out 00:09:47.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265889 s, 39.4 MB/s 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:47.760 256+0 records in 00:09:47.760 256+0 records out 00:09:47.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287608 s, 36.5 MB/s 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@51 -- # local i 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.760 11:20:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@41 -- # break 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@41 -- # break 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.328 11:20:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@65 -- # true 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@104 -- # count=0 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:48.587 11:20:06 -- bdev/nbd_common.sh@109 -- # return 0 00:09:48.587 11:20:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:49.155 11:20:07 -- event/event.sh@35 -- # sleep 3 00:09:49.156 [2024-11-26 11:20:07.205208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.156 [2024-11-26 11:20:07.236567] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:49.156 [2024-11-26 11:20:07.236574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.156 [2024-11-26 11:20:07.266650] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:49.156 [2024-11-26 11:20:07.266726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:52.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:52.444 11:20:10 -- event/event.sh@38 -- # waitforlisten 73411 /var/tmp/spdk-nbd.sock 00:09:52.444 11:20:10 -- common/autotest_common.sh@829 -- # '[' -z 73411 ']' 00:09:52.444 11:20:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:52.444 11:20:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.444 11:20:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:52.444 11:20:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.444 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:52.444 11:20:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.444 11:20:10 -- common/autotest_common.sh@862 -- # return 0 00:09:52.444 11:20:10 -- event/event.sh@39 -- # killprocess 73411 00:09:52.444 11:20:10 -- common/autotest_common.sh@936 -- # '[' -z 73411 ']' 00:09:52.444 11:20:10 -- common/autotest_common.sh@940 -- # kill -0 73411 00:09:52.444 11:20:10 -- common/autotest_common.sh@941 -- # uname 00:09:52.444 11:20:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.444 11:20:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73411 00:09:52.444 11:20:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.444 11:20:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.444 killing process with pid 73411 00:09:52.444 11:20:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73411' 00:09:52.444 11:20:10 -- common/autotest_common.sh@955 -- # kill 73411 00:09:52.444 11:20:10 -- common/autotest_common.sh@960 -- # wait 73411 00:09:52.444 spdk_app_start is called in Round 0. 00:09:52.444 Shutdown signal received, stop current app iteration 00:09:52.444 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:52.444 spdk_app_start is called in Round 1. 00:09:52.444 Shutdown signal received, stop current app iteration 00:09:52.444 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:52.444 spdk_app_start is called in Round 2. 00:09:52.444 Shutdown signal received, stop current app iteration 00:09:52.444 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:09:52.444 spdk_app_start is called in Round 3. 
00:09:52.444 Shutdown signal received, stop current app iteration 00:09:52.444 11:20:10 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:52.444 11:20:10 -- event/event.sh@42 -- # return 0 00:09:52.444 00:09:52.444 real 0m18.037s 00:09:52.444 user 0m40.750s 00:09:52.444 sys 0m2.453s 00:09:52.444 11:20:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.444 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:52.444 ************************************ 00:09:52.444 END TEST app_repeat 00:09:52.444 ************************************ 00:09:52.444 11:20:10 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:52.444 11:20:10 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:52.444 11:20:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.444 11:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.444 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:52.444 ************************************ 00:09:52.444 START TEST cpu_locks 00:09:52.444 ************************************ 00:09:52.444 11:20:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:52.444 * Looking for test storage... 00:09:52.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:52.444 11:20:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:52.444 11:20:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:52.444 11:20:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:52.704 11:20:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:52.704 11:20:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:52.704 11:20:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:52.704 11:20:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:52.704 11:20:10 -- scripts/common.sh@335 -- # IFS=.-: 00:09:52.704 11:20:10 -- scripts/common.sh@335 -- # read -ra ver1 00:09:52.704 11:20:10 -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.704 11:20:10 -- scripts/common.sh@336 -- # read -ra ver2 00:09:52.704 11:20:10 -- scripts/common.sh@337 -- # local 'op=<' 00:09:52.704 11:20:10 -- scripts/common.sh@339 -- # ver1_l=2 00:09:52.704 11:20:10 -- scripts/common.sh@340 -- # ver2_l=1 00:09:52.704 11:20:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:52.704 11:20:10 -- scripts/common.sh@343 -- # case "$op" in 00:09:52.704 11:20:10 -- scripts/common.sh@344 -- # : 1 00:09:52.704 11:20:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:52.704 11:20:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.704 11:20:10 -- scripts/common.sh@364 -- # decimal 1 00:09:52.704 11:20:10 -- scripts/common.sh@352 -- # local d=1 00:09:52.704 11:20:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.704 11:20:10 -- scripts/common.sh@354 -- # echo 1 00:09:52.704 11:20:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:52.704 11:20:10 -- scripts/common.sh@365 -- # decimal 2 00:09:52.704 11:20:10 -- scripts/common.sh@352 -- # local d=2 00:09:52.704 11:20:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.704 11:20:10 -- scripts/common.sh@354 -- # echo 2 00:09:52.704 11:20:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:52.704 11:20:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:52.704 11:20:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:52.704 11:20:10 -- scripts/common.sh@367 -- # return 0 00:09:52.704 11:20:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.704 11:20:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:52.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.704 --rc genhtml_branch_coverage=1 00:09:52.704 --rc genhtml_function_coverage=1 00:09:52.704 --rc genhtml_legend=1 00:09:52.704 --rc geninfo_all_blocks=1 00:09:52.704 --rc geninfo_unexecuted_blocks=1 00:09:52.704 00:09:52.704 ' 00:09:52.704 11:20:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:52.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.704 --rc genhtml_branch_coverage=1 00:09:52.704 --rc genhtml_function_coverage=1 00:09:52.704 --rc genhtml_legend=1 00:09:52.704 --rc geninfo_all_blocks=1 00:09:52.704 --rc geninfo_unexecuted_blocks=1 00:09:52.704 00:09:52.704 ' 00:09:52.704 11:20:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:52.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.704 --rc genhtml_branch_coverage=1 00:09:52.704 --rc genhtml_function_coverage=1 00:09:52.704 --rc genhtml_legend=1 00:09:52.704 --rc geninfo_all_blocks=1 00:09:52.704 --rc geninfo_unexecuted_blocks=1 00:09:52.704 00:09:52.704 ' 00:09:52.704 11:20:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:52.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.704 --rc genhtml_branch_coverage=1 00:09:52.704 --rc genhtml_function_coverage=1 00:09:52.704 --rc genhtml_legend=1 00:09:52.704 --rc geninfo_all_blocks=1 00:09:52.704 --rc geninfo_unexecuted_blocks=1 00:09:52.704 00:09:52.704 ' 00:09:52.704 11:20:10 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:52.704 11:20:10 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:52.704 11:20:10 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:52.704 11:20:10 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:52.704 11:20:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.704 11:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.704 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:52.704 ************************************ 00:09:52.704 START TEST default_locks 00:09:52.704 ************************************ 00:09:52.704 11:20:10 -- common/autotest_common.sh@1114 -- # default_locks 00:09:52.704 11:20:10 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73892 00:09:52.704 11:20:10 -- event/cpu_locks.sh@47 -- # waitforlisten 73892 00:09:52.704 11:20:10 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:09:52.705 11:20:10 -- common/autotest_common.sh@829 -- # '[' -z 73892 ']' 00:09:52.705 11:20:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.705 11:20:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.705 11:20:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.705 11:20:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.705 11:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:52.705 [2024-11-26 11:20:10.821030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:52.705 [2024-11-26 11:20:10.821205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73892 ] 00:09:52.964 [2024-11-26 11:20:10.986560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.964 [2024-11-26 11:20:11.018607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:52.964 [2024-11-26 11:20:11.018901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.532 11:20:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.532 11:20:11 -- common/autotest_common.sh@862 -- # return 0 00:09:53.532 11:20:11 -- event/cpu_locks.sh@49 -- # locks_exist 73892 00:09:53.532 11:20:11 -- event/cpu_locks.sh@22 -- # lslocks -p 73892 00:09:53.532 11:20:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:54.100 11:20:12 -- event/cpu_locks.sh@50 -- # killprocess 73892 00:09:54.100 11:20:12 -- common/autotest_common.sh@936 -- # '[' -z 73892 ']' 00:09:54.100 11:20:12 -- common/autotest_common.sh@940 -- # kill -0 73892 00:09:54.100 11:20:12 -- common/autotest_common.sh@941 -- # uname 00:09:54.100 11:20:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:54.100 11:20:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73892 00:09:54.100 11:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:54.100 11:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:54.100 killing process with pid 73892 00:09:54.100 11:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73892' 00:09:54.100 11:20:12 -- common/autotest_common.sh@955 -- # kill 73892 00:09:54.101 11:20:12 -- common/autotest_common.sh@960 -- # wait 73892 00:09:54.360 11:20:12 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73892 00:09:54.360 11:20:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:54.360 11:20:12 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 73892 00:09:54.360 11:20:12 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:54.360 11:20:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.360 11:20:12 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:54.360 11:20:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.360 11:20:12 -- common/autotest_common.sh@653 -- # waitforlisten 73892 00:09:54.360 11:20:12 -- common/autotest_common.sh@829 -- # '[' -z 73892 ']' 00:09:54.360 11:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.360 11:20:12 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.360 11:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.360 11:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.360 11:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.360 ERROR: process (pid: 73892) is no longer running 00:09:54.360 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (73892) - No such process 00:09:54.360 11:20:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.360 11:20:12 -- common/autotest_common.sh@862 -- # return 1 00:09:54.360 11:20:12 -- common/autotest_common.sh@653 -- # es=1 00:09:54.360 11:20:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:54.360 11:20:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:54.360 11:20:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:54.360 11:20:12 -- event/cpu_locks.sh@54 -- # no_locks 00:09:54.360 11:20:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:54.360 11:20:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:54.360 11:20:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:54.360 00:09:54.360 real 0m1.751s 00:09:54.360 user 0m1.895s 00:09:54.360 sys 0m0.531s 00:09:54.360 11:20:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.360 11:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.360 ************************************ 00:09:54.360 END TEST default_locks 00:09:54.360 ************************************ 00:09:54.360 11:20:12 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:54.360 11:20:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.360 11:20:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.360 11:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.360 ************************************ 00:09:54.360 START TEST default_locks_via_rpc 00:09:54.360 ************************************ 00:09:54.360 11:20:12 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:09:54.360 11:20:12 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73945 00:09:54.360 11:20:12 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:54.360 11:20:12 -- event/cpu_locks.sh@63 -- # waitforlisten 73945 00:09:54.360 11:20:12 -- common/autotest_common.sh@829 -- # '[' -z 73945 ']' 00:09:54.360 11:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.360 11:20:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.360 11:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.360 11:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.360 11:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.620 [2024-11-26 11:20:12.608338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
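The default_locks run above launches a single spdk_tgt pinned to core 0 (-m 0x1), confirms from outside the process that the core lock is held, then kills the target and checks that nothing is left holding it. A minimal sketch of that check, assuming an spdk_tgt instance is already running with its PID in $pid; per the trace, SPDK holds a lock on a /var/tmp/spdk_cpu_lock_* file for each claimed core, which is what lslocks reports:

    locks_exist() {
        local pid=$1
        # succeeds only while the target still holds its per-core lock file
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }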
00:09:54.620 [2024-11-26 11:20:12.608474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73945 ] 00:09:54.620 [2024-11-26 11:20:12.762896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.620 [2024-11-26 11:20:12.796111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:54.620 [2024-11-26 11:20:12.796386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.556 11:20:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.556 11:20:13 -- common/autotest_common.sh@862 -- # return 0 00:09:55.556 11:20:13 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:55.556 11:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.556 11:20:13 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 11:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.556 11:20:13 -- event/cpu_locks.sh@67 -- # no_locks 00:09:55.556 11:20:13 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:55.556 11:20:13 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:55.556 11:20:13 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:55.556 11:20:13 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:55.556 11:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.556 11:20:13 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 11:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.556 11:20:13 -- event/cpu_locks.sh@71 -- # locks_exist 73945 00:09:55.556 11:20:13 -- event/cpu_locks.sh@22 -- # lslocks -p 73945 00:09:55.556 11:20:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:55.815 11:20:14 -- event/cpu_locks.sh@73 -- # killprocess 73945 00:09:55.815 11:20:14 -- common/autotest_common.sh@936 -- # '[' -z 73945 ']' 00:09:55.815 11:20:14 -- common/autotest_common.sh@940 -- # kill -0 73945 00:09:55.815 11:20:14 -- common/autotest_common.sh@941 -- # uname 00:09:55.815 11:20:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:55.815 11:20:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73945 00:09:55.815 11:20:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:55.815 11:20:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:55.815 killing process with pid 73945 00:09:55.815 11:20:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73945' 00:09:55.815 11:20:14 -- common/autotest_common.sh@955 -- # kill 73945 00:09:55.815 11:20:14 -- common/autotest_common.sh@960 -- # wait 73945 00:09:56.383 00:09:56.383 real 0m1.780s 00:09:56.383 user 0m1.923s 00:09:56.383 sys 0m0.563s 00:09:56.383 11:20:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.383 11:20:14 -- common/autotest_common.sh@10 -- # set +x 00:09:56.383 ************************************ 00:09:56.383 END TEST default_locks_via_rpc 00:09:56.383 ************************************ 00:09:56.383 11:20:14 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:56.383 11:20:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:56.383 11:20:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.383 11:20:14 -- common/autotest_common.sh@10 -- # set +x 00:09:56.383 
************************************ 00:09:56.383 START TEST non_locking_app_on_locked_coremask 00:09:56.383 ************************************ 00:09:56.383 11:20:14 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:09:56.383 11:20:14 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73996 00:09:56.383 11:20:14 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:56.383 11:20:14 -- event/cpu_locks.sh@81 -- # waitforlisten 73996 /var/tmp/spdk.sock 00:09:56.383 11:20:14 -- common/autotest_common.sh@829 -- # '[' -z 73996 ']' 00:09:56.383 11:20:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.383 11:20:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.383 11:20:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.383 11:20:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.383 11:20:14 -- common/autotest_common.sh@10 -- # set +x 00:09:56.383 [2024-11-26 11:20:14.453725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:56.383 [2024-11-26 11:20:14.453923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:09:56.383 [2024-11-26 11:20:14.620306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.642 [2024-11-26 11:20:14.654244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:56.642 [2024-11-26 11:20:14.654488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.209 11:20:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.209 11:20:15 -- common/autotest_common.sh@862 -- # return 0 00:09:57.209 11:20:15 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74008 00:09:57.209 11:20:15 -- event/cpu_locks.sh@85 -- # waitforlisten 74008 /var/tmp/spdk2.sock 00:09:57.209 11:20:15 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:57.209 11:20:15 -- common/autotest_common.sh@829 -- # '[' -z 74008 ']' 00:09:57.209 11:20:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:57.209 11:20:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:57.209 11:20:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:57.209 11:20:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.209 11:20:15 -- common/autotest_common.sh@10 -- # set +x 00:09:57.469 [2024-11-26 11:20:15.481130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:57.469 [2024-11-26 11:20:15.481304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74008 ] 00:09:57.469 [2024-11-26 11:20:15.650805] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
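The notice above is the key to this test: the second target is started with --disable-cpumask-locks, so it can share core 0 with the first target, which still holds the core lock. A sketch of the two launches, assuming $SPDK_BIN stands in for the build/bin/spdk_tgt path used in the trace (the variable name is illustrative):

    "$SPDK_BIN" -m 0x1 &                          # first target claims the core-0 lock
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                  # second target skips locking; prints "CPU core locks deactivated."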
00:09:57.469 [2024-11-26 11:20:15.650900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.727 [2024-11-26 11:20:15.715703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.727 [2024-11-26 11:20:15.722104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.294 11:20:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.294 11:20:16 -- common/autotest_common.sh@862 -- # return 0 00:09:58.294 11:20:16 -- event/cpu_locks.sh@87 -- # locks_exist 73996 00:09:58.294 11:20:16 -- event/cpu_locks.sh@22 -- # lslocks -p 73996 00:09:58.294 11:20:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:58.866 11:20:17 -- event/cpu_locks.sh@89 -- # killprocess 73996 00:09:58.866 11:20:17 -- common/autotest_common.sh@936 -- # '[' -z 73996 ']' 00:09:58.866 11:20:17 -- common/autotest_common.sh@940 -- # kill -0 73996 00:09:58.866 11:20:17 -- common/autotest_common.sh@941 -- # uname 00:09:58.866 11:20:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.866 11:20:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73996 00:09:59.136 11:20:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:59.136 11:20:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:59.136 killing process with pid 73996 00:09:59.136 11:20:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73996' 00:09:59.136 11:20:17 -- common/autotest_common.sh@955 -- # kill 73996 00:09:59.136 11:20:17 -- common/autotest_common.sh@960 -- # wait 73996 00:09:59.703 11:20:17 -- event/cpu_locks.sh@90 -- # killprocess 74008 00:09:59.703 11:20:17 -- common/autotest_common.sh@936 -- # '[' -z 74008 ']' 00:09:59.703 11:20:17 -- common/autotest_common.sh@940 -- # kill -0 74008 00:09:59.703 11:20:17 -- common/autotest_common.sh@941 -- # uname 00:09:59.703 11:20:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:59.703 11:20:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74008 00:09:59.703 11:20:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:59.703 11:20:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:59.703 killing process with pid 74008 00:09:59.704 11:20:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74008' 00:09:59.704 11:20:17 -- common/autotest_common.sh@955 -- # kill 74008 00:09:59.704 11:20:17 -- common/autotest_common.sh@960 -- # wait 74008 00:09:59.962 00:09:59.962 real 0m3.612s 00:09:59.962 user 0m4.111s 00:09:59.962 sys 0m0.998s 00:09:59.962 11:20:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:59.962 11:20:18 -- common/autotest_common.sh@10 -- # set +x 00:09:59.962 ************************************ 00:09:59.962 END TEST non_locking_app_on_locked_coremask 00:09:59.962 ************************************ 00:09:59.962 11:20:18 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:59.962 11:20:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:59.962 11:20:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.962 11:20:18 -- common/autotest_common.sh@10 -- # set +x 00:09:59.962 ************************************ 00:09:59.962 START TEST locking_app_on_unlocked_coremask 00:09:59.962 ************************************ 00:09:59.962 11:20:18 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:09:59.962 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.962 11:20:18 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74071 00:09:59.962 11:20:18 -- event/cpu_locks.sh@99 -- # waitforlisten 74071 /var/tmp/spdk.sock 00:09:59.962 11:20:18 -- common/autotest_common.sh@829 -- # '[' -z 74071 ']' 00:09:59.962 11:20:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.962 11:20:18 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:59.962 11:20:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.962 11:20:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.962 11:20:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.962 11:20:18 -- common/autotest_common.sh@10 -- # set +x 00:09:59.962 [2024-11-26 11:20:18.123868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:59.962 [2024-11-26 11:20:18.124127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74071 ] 00:10:00.222 [2024-11-26 11:20:18.292288] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:00.222 [2024-11-26 11:20:18.292361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.222 [2024-11-26 11:20:18.335583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:00.222 [2024-11-26 11:20:18.335927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.165 11:20:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.165 11:20:19 -- common/autotest_common.sh@862 -- # return 0 00:10:01.165 11:20:19 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74087 00:10:01.165 11:20:19 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:01.165 11:20:19 -- event/cpu_locks.sh@103 -- # waitforlisten 74087 /var/tmp/spdk2.sock 00:10:01.165 11:20:19 -- common/autotest_common.sh@829 -- # '[' -z 74087 ']' 00:10:01.165 11:20:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:01.165 11:20:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:01.165 11:20:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:01.166 11:20:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.166 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:10:01.166 [2024-11-26 11:20:19.153837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:01.166 [2024-11-26 11:20:19.154104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74087 ] 00:10:01.166 [2024-11-26 11:20:19.335560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.424 [2024-11-26 11:20:19.413577] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:01.424 [2024-11-26 11:20:19.413873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.989 11:20:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.989 11:20:20 -- common/autotest_common.sh@862 -- # return 0 00:10:01.990 11:20:20 -- event/cpu_locks.sh@105 -- # locks_exist 74087 00:10:01.990 11:20:20 -- event/cpu_locks.sh@22 -- # lslocks -p 74087 00:10:01.990 11:20:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.925 11:20:21 -- event/cpu_locks.sh@107 -- # killprocess 74071 00:10:02.925 11:20:21 -- common/autotest_common.sh@936 -- # '[' -z 74071 ']' 00:10:02.925 11:20:21 -- common/autotest_common.sh@940 -- # kill -0 74071 00:10:02.925 11:20:21 -- common/autotest_common.sh@941 -- # uname 00:10:02.925 11:20:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.925 11:20:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74071 00:10:02.925 11:20:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:02.925 11:20:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:02.925 killing process with pid 74071 00:10:02.925 11:20:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74071' 00:10:02.925 11:20:21 -- common/autotest_common.sh@955 -- # kill 74071 00:10:02.925 11:20:21 -- common/autotest_common.sh@960 -- # wait 74071 00:10:03.492 11:20:21 -- event/cpu_locks.sh@108 -- # killprocess 74087 00:10:03.492 11:20:21 -- common/autotest_common.sh@936 -- # '[' -z 74087 ']' 00:10:03.492 11:20:21 -- common/autotest_common.sh@940 -- # kill -0 74087 00:10:03.492 11:20:21 -- common/autotest_common.sh@941 -- # uname 00:10:03.492 11:20:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.492 11:20:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74087 00:10:03.492 killing process with pid 74087 00:10:03.492 11:20:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:03.492 11:20:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:03.492 11:20:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74087' 00:10:03.492 11:20:21 -- common/autotest_common.sh@955 -- # kill 74087 00:10:03.492 11:20:21 -- common/autotest_common.sh@960 -- # wait 74087 00:10:04.058 00:10:04.058 real 0m3.939s 00:10:04.058 user 0m4.502s 00:10:04.058 sys 0m1.185s 00:10:04.058 11:20:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.058 ************************************ 00:10:04.058 11:20:21 -- common/autotest_common.sh@10 -- # set +x 00:10:04.058 END TEST locking_app_on_unlocked_coremask 00:10:04.058 ************************************ 00:10:04.058 11:20:22 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:04.058 11:20:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:04.058 11:20:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.058 11:20:22 -- common/autotest_common.sh@10 -- # set +x 
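locking_app_on_unlocked_coremask, which finished above, is the mirror image of the previous test: the first target runs with --disable-cpumask-locks and takes no lock, so the second, locking target on the same core can claim it (the lslocks check above is run against pid 74087, the second target). Sketched under the same assumption that $SPDK_BIN is the spdk_tgt binary:

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks &   # first target leaves the core-0 lock free
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &    # second target claims /var/tmp/spdk_cpu_lock_000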
00:10:04.058 ************************************ 00:10:04.058 START TEST locking_app_on_locked_coremask 00:10:04.058 ************************************ 00:10:04.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.058 11:20:22 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:10:04.058 11:20:22 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74156 00:10:04.058 11:20:22 -- event/cpu_locks.sh@116 -- # waitforlisten 74156 /var/tmp/spdk.sock 00:10:04.058 11:20:22 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:04.058 11:20:22 -- common/autotest_common.sh@829 -- # '[' -z 74156 ']' 00:10:04.058 11:20:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.058 11:20:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.058 11:20:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.058 11:20:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.058 11:20:22 -- common/autotest_common.sh@10 -- # set +x 00:10:04.058 [2024-11-26 11:20:22.103488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.058 [2024-11-26 11:20:22.103843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74156 ] 00:10:04.058 [2024-11-26 11:20:22.257550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.058 [2024-11-26 11:20:22.294452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.058 [2024-11-26 11:20:22.294724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.993 11:20:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.993 11:20:23 -- common/autotest_common.sh@862 -- # return 0 00:10:04.993 11:20:23 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74171 00:10:04.993 11:20:23 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:04.993 11:20:23 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74171 /var/tmp/spdk2.sock 00:10:04.993 11:20:23 -- common/autotest_common.sh@650 -- # local es=0 00:10:04.993 11:20:23 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74171 /var/tmp/spdk2.sock 00:10:04.993 11:20:23 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:04.993 11:20:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.993 11:20:23 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:04.993 11:20:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.993 11:20:23 -- common/autotest_common.sh@653 -- # waitforlisten 74171 /var/tmp/spdk2.sock 00:10:04.993 11:20:23 -- common/autotest_common.sh@829 -- # '[' -z 74171 ']' 00:10:04.993 11:20:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:04.993 11:20:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.993 11:20:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:04.993 11:20:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.993 11:20:23 -- common/autotest_common.sh@10 -- # set +x 00:10:04.993 [2024-11-26 11:20:23.137021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.993 [2024-11-26 11:20:23.137486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74171 ] 00:10:05.252 [2024-11-26 11:20:23.324726] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74156 has claimed it. 00:10:05.252 [2024-11-26 11:20:23.324848] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:05.820 ERROR: process (pid: 74171) is no longer running 00:10:05.820 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74171) - No such process 00:10:05.821 11:20:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.821 11:20:23 -- common/autotest_common.sh@862 -- # return 1 00:10:05.821 11:20:23 -- common/autotest_common.sh@653 -- # es=1 00:10:05.821 11:20:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.821 11:20:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.821 11:20:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.821 11:20:23 -- event/cpu_locks.sh@122 -- # locks_exist 74156 00:10:05.821 11:20:23 -- event/cpu_locks.sh@22 -- # lslocks -p 74156 00:10:05.821 11:20:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:06.080 11:20:24 -- event/cpu_locks.sh@124 -- # killprocess 74156 00:10:06.080 11:20:24 -- common/autotest_common.sh@936 -- # '[' -z 74156 ']' 00:10:06.080 11:20:24 -- common/autotest_common.sh@940 -- # kill -0 74156 00:10:06.080 11:20:24 -- common/autotest_common.sh@941 -- # uname 00:10:06.080 11:20:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:06.080 11:20:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74156 00:10:06.080 killing process with pid 74156 00:10:06.080 11:20:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:06.080 11:20:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:06.080 11:20:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74156' 00:10:06.080 11:20:24 -- common/autotest_common.sh@955 -- # kill 74156 00:10:06.080 11:20:24 -- common/autotest_common.sh@960 -- # wait 74156 00:10:06.648 00:10:06.648 real 0m2.542s 00:10:06.648 user 0m3.009s 00:10:06.648 sys 0m0.656s 00:10:06.648 11:20:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:06.648 11:20:24 -- common/autotest_common.sh@10 -- # set +x 00:10:06.648 ************************************ 00:10:06.648 END TEST locking_app_on_locked_coremask 00:10:06.648 ************************************ 00:10:06.648 11:20:24 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:06.648 11:20:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:06.648 11:20:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:06.648 11:20:24 -- common/autotest_common.sh@10 -- # set +x 00:10:06.648 ************************************ 00:10:06.648 START TEST locking_overlapped_coremask 00:10:06.648 ************************************ 00:10:06.648 11:20:24 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:10:06.648 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.648 11:20:24 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74220 00:10:06.648 11:20:24 -- event/cpu_locks.sh@133 -- # waitforlisten 74220 /var/tmp/spdk.sock 00:10:06.648 11:20:24 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:06.648 11:20:24 -- common/autotest_common.sh@829 -- # '[' -z 74220 ']' 00:10:06.648 11:20:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.648 11:20:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.648 11:20:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.648 11:20:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.648 11:20:24 -- common/autotest_common.sh@10 -- # set +x 00:10:06.648 [2024-11-26 11:20:24.703370] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.648 [2024-11-26 11:20:24.703509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74220 ] 00:10:06.648 [2024-11-26 11:20:24.857552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.907 [2024-11-26 11:20:24.896308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:06.907 [2024-11-26 11:20:24.896818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.907 [2024-11-26 11:20:24.897574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.907 [2024-11-26 11:20:24.897607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.474 11:20:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.474 11:20:25 -- common/autotest_common.sh@862 -- # return 0 00:10:07.474 11:20:25 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74238 00:10:07.474 11:20:25 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:07.474 11:20:25 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74238 /var/tmp/spdk2.sock 00:10:07.474 11:20:25 -- common/autotest_common.sh@650 -- # local es=0 00:10:07.474 11:20:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74238 /var/tmp/spdk2.sock 00:10:07.474 11:20:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:07.474 11:20:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.474 11:20:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:07.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:07.474 11:20:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.475 11:20:25 -- common/autotest_common.sh@653 -- # waitforlisten 74238 /var/tmp/spdk2.sock 00:10:07.475 11:20:25 -- common/autotest_common.sh@829 -- # '[' -z 74238 ']' 00:10:07.475 11:20:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:07.475 11:20:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.475 11:20:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:07.475 11:20:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.475 11:20:25 -- common/autotest_common.sh@10 -- # set +x 00:10:07.733 [2024-11-26 11:20:25.733728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:07.733 [2024-11-26 11:20:25.733971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74238 ] 00:10:07.733 [2024-11-26 11:20:25.909020] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74220 has claimed it. 00:10:07.733 [2024-11-26 11:20:25.909262] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:08.300 ERROR: process (pid: 74238) is no longer running 00:10:08.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74238) - No such process 00:10:08.300 11:20:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.300 11:20:26 -- common/autotest_common.sh@862 -- # return 1 00:10:08.300 11:20:26 -- common/autotest_common.sh@653 -- # es=1 00:10:08.300 11:20:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:08.300 11:20:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:08.300 11:20:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:08.300 11:20:26 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:08.300 11:20:26 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:08.300 11:20:26 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:08.300 11:20:26 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:08.300 11:20:26 -- event/cpu_locks.sh@141 -- # killprocess 74220 00:10:08.300 11:20:26 -- common/autotest_common.sh@936 -- # '[' -z 74220 ']' 00:10:08.300 11:20:26 -- common/autotest_common.sh@940 -- # kill -0 74220 00:10:08.300 11:20:26 -- common/autotest_common.sh@941 -- # uname 00:10:08.300 11:20:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.300 11:20:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74220 00:10:08.300 killing process with pid 74220 00:10:08.300 11:20:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:08.300 11:20:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:08.300 11:20:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74220' 00:10:08.300 11:20:26 -- common/autotest_common.sh@955 -- # kill 74220 00:10:08.300 11:20:26 -- common/autotest_common.sh@960 -- # wait 74220 00:10:08.560 00:10:08.560 real 0m2.091s 00:10:08.560 user 0m5.855s 00:10:08.560 sys 0m0.445s 00:10:08.560 11:20:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.560 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:10:08.560 ************************************ 00:10:08.560 END TEST locking_overlapped_coremask 00:10:08.560 ************************************ 00:10:08.560 11:20:26 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:08.560 11:20:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.560 11:20:26 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.560 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:10:08.560 ************************************ 00:10:08.560 START TEST locking_overlapped_coremask_via_rpc 00:10:08.560 ************************************ 00:10:08.560 11:20:26 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:10:08.560 11:20:26 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74280 00:10:08.560 11:20:26 -- event/cpu_locks.sh@149 -- # waitforlisten 74280 /var/tmp/spdk.sock 00:10:08.560 11:20:26 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:08.560 11:20:26 -- common/autotest_common.sh@829 -- # '[' -z 74280 ']' 00:10:08.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.560 11:20:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.560 11:20:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.560 11:20:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.560 11:20:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.560 11:20:26 -- common/autotest_common.sh@10 -- # set +x 00:10:08.819 [2024-11-26 11:20:26.855281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.819 [2024-11-26 11:20:26.855433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74280 ] 00:10:08.819 [2024-11-26 11:20:27.022645] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:08.819 [2024-11-26 11:20:27.022725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.078 [2024-11-26 11:20:27.061559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:09.078 [2024-11-26 11:20:27.062020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.078 [2024-11-26 11:20:27.062909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.078 [2024-11-26 11:20:27.062948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.646 11:20:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.646 11:20:27 -- common/autotest_common.sh@862 -- # return 0 00:10:09.646 11:20:27 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:09.646 11:20:27 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74298 00:10:09.646 11:20:27 -- event/cpu_locks.sh@153 -- # waitforlisten 74298 /var/tmp/spdk2.sock 00:10:09.646 11:20:27 -- common/autotest_common.sh@829 -- # '[' -z 74298 ']' 00:10:09.646 11:20:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.646 11:20:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:09.646 11:20:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:09.646 11:20:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.646 11:20:27 -- common/autotest_common.sh@10 -- # set +x 00:10:09.646 [2024-11-26 11:20:27.854887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:09.646 [2024-11-26 11:20:27.855068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74298 ] 00:10:09.906 [2024-11-26 11:20:28.022145] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:09.906 [2024-11-26 11:20:28.022219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.906 [2024-11-26 11:20:28.097671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:09.906 [2024-11-26 11:20:28.098113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.906 [2024-11-26 11:20:28.102058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.906 [2024-11-26 11:20:28.102083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.842 11:20:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.842 11:20:28 -- common/autotest_common.sh@862 -- # return 0 00:10:10.842 11:20:28 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:10.842 11:20:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.842 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:10:10.842 11:20:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.842 11:20:28 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:10.842 11:20:28 -- common/autotest_common.sh@650 -- # local es=0 00:10:10.842 11:20:28 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:10.842 11:20:28 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:10.842 11:20:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.842 11:20:28 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:10.842 11:20:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.842 11:20:28 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:10.842 11:20:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.842 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:10:10.842 [2024-11-26 11:20:28.824103] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74280 has claimed it. 
00:10:10.842 request: 00:10:10.842 { 00:10:10.842 "method": "framework_enable_cpumask_locks", 00:10:10.842 "req_id": 1 00:10:10.842 } 00:10:10.842 Got JSON-RPC error response 00:10:10.842 response: 00:10:10.842 { 00:10:10.842 "code": -32603, 00:10:10.842 "message": "Failed to claim CPU core: 2" 00:10:10.842 } 00:10:10.842 11:20:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:10.842 11:20:28 -- common/autotest_common.sh@653 -- # es=1 00:10:10.842 11:20:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:10.842 11:20:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:10.842 11:20:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:10.843 11:20:28 -- event/cpu_locks.sh@158 -- # waitforlisten 74280 /var/tmp/spdk.sock 00:10:10.843 11:20:28 -- common/autotest_common.sh@829 -- # '[' -z 74280 ']' 00:10:10.843 11:20:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.843 11:20:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.843 11:20:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.843 11:20:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.843 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:10:10.843 11:20:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.843 11:20:29 -- common/autotest_common.sh@862 -- # return 0 00:10:10.843 11:20:29 -- event/cpu_locks.sh@159 -- # waitforlisten 74298 /var/tmp/spdk2.sock 00:10:10.843 11:20:29 -- common/autotest_common.sh@829 -- # '[' -z 74298 ']' 00:10:10.843 11:20:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.843 11:20:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.843 11:20:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
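The request/response pair above is the heart of the via_rpc variant: both targets start under --disable-cpumask-locks, the first (mask 0x7) enables its locks over RPC and claims cores 0-2, and the second (mask 0x1c, overlapping on core 2) then gets the -32603 error when it attempts the same call. A sketch of driving this by hand, assuming SPDK's scripts/rpc.py client exposes the framework_enable_cpumask_locks method as a subcommand of the same name:

    scripts/rpc.py framework_enable_cpumask_locks        # first target (default socket): claims its cores
    scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks                   # second target: fails, "Failed to claim CPU core: 2"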
00:10:10.843 11:20:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.843 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:10:11.105 11:20:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.105 11:20:29 -- common/autotest_common.sh@862 -- # return 0 00:10:11.105 11:20:29 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:11.105 11:20:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:11.105 11:20:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:11.105 11:20:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:11.105 00:10:11.105 real 0m2.531s 00:10:11.105 user 0m1.297s 00:10:11.105 sys 0m0.178s 00:10:11.105 11:20:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.105 11:20:29 -- common/autotest_common.sh@10 -- # set +x 00:10:11.105 ************************************ 00:10:11.105 END TEST locking_overlapped_coremask_via_rpc 00:10:11.105 ************************************ 00:10:11.363 11:20:29 -- event/cpu_locks.sh@174 -- # cleanup 00:10:11.363 11:20:29 -- event/cpu_locks.sh@15 -- # [[ -z 74280 ]] 00:10:11.363 11:20:29 -- event/cpu_locks.sh@15 -- # killprocess 74280 00:10:11.363 11:20:29 -- common/autotest_common.sh@936 -- # '[' -z 74280 ']' 00:10:11.363 11:20:29 -- common/autotest_common.sh@940 -- # kill -0 74280 00:10:11.363 11:20:29 -- common/autotest_common.sh@941 -- # uname 00:10:11.363 11:20:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.363 11:20:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74280 00:10:11.363 11:20:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:11.363 11:20:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:11.363 killing process with pid 74280 00:10:11.363 11:20:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74280' 00:10:11.363 11:20:29 -- common/autotest_common.sh@955 -- # kill 74280 00:10:11.363 11:20:29 -- common/autotest_common.sh@960 -- # wait 74280 00:10:11.621 11:20:29 -- event/cpu_locks.sh@16 -- # [[ -z 74298 ]] 00:10:11.621 11:20:29 -- event/cpu_locks.sh@16 -- # killprocess 74298 00:10:11.621 11:20:29 -- common/autotest_common.sh@936 -- # '[' -z 74298 ']' 00:10:11.621 11:20:29 -- common/autotest_common.sh@940 -- # kill -0 74298 00:10:11.621 11:20:29 -- common/autotest_common.sh@941 -- # uname 00:10:11.621 11:20:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.621 11:20:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74298 00:10:11.621 11:20:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:11.621 11:20:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:11.621 killing process with pid 74298 00:10:11.621 11:20:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74298' 00:10:11.621 11:20:29 -- common/autotest_common.sh@955 -- # kill 74298 00:10:11.621 11:20:29 -- common/autotest_common.sh@960 -- # wait 74298 00:10:11.879 11:20:30 -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.879 11:20:30 -- event/cpu_locks.sh@1 -- # cleanup 00:10:11.879 11:20:30 -- event/cpu_locks.sh@15 -- # [[ -z 74280 ]] 00:10:11.879 11:20:30 -- event/cpu_locks.sh@15 -- # killprocess 74280 00:10:11.879 11:20:30 -- 
common/autotest_common.sh@936 -- # '[' -z 74280 ']' 00:10:11.879 11:20:30 -- common/autotest_common.sh@940 -- # kill -0 74280 00:10:11.879 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (74280) - No such process 00:10:11.879 11:20:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 74280 is not found' 00:10:11.879 Process with pid 74280 is not found 00:10:11.879 11:20:30 -- event/cpu_locks.sh@16 -- # [[ -z 74298 ]] 00:10:11.879 11:20:30 -- event/cpu_locks.sh@16 -- # killprocess 74298 00:10:11.879 11:20:30 -- common/autotest_common.sh@936 -- # '[' -z 74298 ']' 00:10:11.879 11:20:30 -- common/autotest_common.sh@940 -- # kill -0 74298 00:10:11.879 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (74298) - No such process 00:10:11.879 Process with pid 74298 is not found 00:10:11.879 11:20:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 74298 is not found' 00:10:11.879 11:20:30 -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.879 00:10:11.879 real 0m19.458s 00:10:11.879 user 0m34.330s 00:10:11.879 sys 0m5.386s 00:10:11.879 11:20:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.879 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:10:11.879 ************************************ 00:10:11.879 END TEST cpu_locks 00:10:11.879 ************************************ 00:10:11.879 00:10:11.879 real 0m45.031s 00:10:11.879 user 1m27.168s 00:10:11.879 sys 0m8.804s 00:10:11.879 11:20:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.879 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:10:11.879 ************************************ 00:10:11.879 END TEST event 00:10:11.879 ************************************ 00:10:12.137 11:20:30 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:12.137 11:20:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.137 11:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.137 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 ************************************ 00:10:12.137 START TEST thread 00:10:12.137 ************************************ 00:10:12.137 11:20:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:12.137 * Looking for test storage... 
00:10:12.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:12.137 11:20:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:12.137 11:20:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:12.137 11:20:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:12.137 11:20:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:12.137 11:20:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:12.137 11:20:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:12.137 11:20:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:12.137 11:20:30 -- scripts/common.sh@335 -- # IFS=.-: 00:10:12.137 11:20:30 -- scripts/common.sh@335 -- # read -ra ver1 00:10:12.137 11:20:30 -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.137 11:20:30 -- scripts/common.sh@336 -- # read -ra ver2 00:10:12.137 11:20:30 -- scripts/common.sh@337 -- # local 'op=<' 00:10:12.137 11:20:30 -- scripts/common.sh@339 -- # ver1_l=2 00:10:12.137 11:20:30 -- scripts/common.sh@340 -- # ver2_l=1 00:10:12.137 11:20:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:12.137 11:20:30 -- scripts/common.sh@343 -- # case "$op" in 00:10:12.137 11:20:30 -- scripts/common.sh@344 -- # : 1 00:10:12.137 11:20:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:12.137 11:20:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.137 11:20:30 -- scripts/common.sh@364 -- # decimal 1 00:10:12.137 11:20:30 -- scripts/common.sh@352 -- # local d=1 00:10:12.137 11:20:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.137 11:20:30 -- scripts/common.sh@354 -- # echo 1 00:10:12.137 11:20:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:12.137 11:20:30 -- scripts/common.sh@365 -- # decimal 2 00:10:12.137 11:20:30 -- scripts/common.sh@352 -- # local d=2 00:10:12.137 11:20:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.137 11:20:30 -- scripts/common.sh@354 -- # echo 2 00:10:12.137 11:20:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:12.137 11:20:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:12.137 11:20:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:12.137 11:20:30 -- scripts/common.sh@367 -- # return 0 00:10:12.137 11:20:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.137 11:20:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.137 --rc genhtml_branch_coverage=1 00:10:12.137 --rc genhtml_function_coverage=1 00:10:12.137 --rc genhtml_legend=1 00:10:12.137 --rc geninfo_all_blocks=1 00:10:12.137 --rc geninfo_unexecuted_blocks=1 00:10:12.137 00:10:12.137 ' 00:10:12.137 11:20:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.137 --rc genhtml_branch_coverage=1 00:10:12.137 --rc genhtml_function_coverage=1 00:10:12.137 --rc genhtml_legend=1 00:10:12.137 --rc geninfo_all_blocks=1 00:10:12.137 --rc geninfo_unexecuted_blocks=1 00:10:12.137 00:10:12.137 ' 00:10:12.137 11:20:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.137 --rc genhtml_branch_coverage=1 00:10:12.137 --rc genhtml_function_coverage=1 00:10:12.137 --rc genhtml_legend=1 00:10:12.137 --rc geninfo_all_blocks=1 00:10:12.137 --rc geninfo_unexecuted_blocks=1 00:10:12.137 00:10:12.137 ' 00:10:12.137 11:20:30 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:12.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.137 --rc genhtml_branch_coverage=1 00:10:12.137 --rc genhtml_function_coverage=1 00:10:12.137 --rc genhtml_legend=1 00:10:12.138 --rc geninfo_all_blocks=1 00:10:12.138 --rc geninfo_unexecuted_blocks=1 00:10:12.138 00:10:12.138 ' 00:10:12.138 11:20:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:12.138 11:20:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:12.138 11:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.138 11:20:30 -- common/autotest_common.sh@10 -- # set +x 00:10:12.138 ************************************ 00:10:12.138 START TEST thread_poller_perf 00:10:12.138 ************************************ 00:10:12.138 11:20:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:12.138 [2024-11-26 11:20:30.340847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.138 [2024-11-26 11:20:30.341087] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74425 ] 00:10:12.396 [2024-11-26 11:20:30.511786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.396 [2024-11-26 11:20:30.552916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.396 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:13.770 [2024-11-26T11:20:32.000Z] ====================================== 00:10:13.770 [2024-11-26T11:20:32.000Z] busy:2218017984 (cyc) 00:10:13.770 [2024-11-26T11:20:32.000Z] total_run_count: 279000 00:10:13.770 [2024-11-26T11:20:32.000Z] tsc_hz: 2200000000 (cyc) 00:10:13.770 [2024-11-26T11:20:32.000Z] ====================================== 00:10:13.770 [2024-11-26T11:20:32.000Z] poller_cost: 7949 (cyc), 3613 (nsec) 00:10:13.770 00:10:13.770 real 0m1.334s 00:10:13.770 user 0m1.150s 00:10:13.770 sys 0m0.083s 00:10:13.770 11:20:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:13.770 11:20:31 -- common/autotest_common.sh@10 -- # set +x 00:10:13.770 ************************************ 00:10:13.770 END TEST thread_poller_perf 00:10:13.770 ************************************ 00:10:13.770 11:20:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.770 11:20:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:13.770 11:20:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:13.770 11:20:31 -- common/autotest_common.sh@10 -- # set +x 00:10:13.770 ************************************ 00:10:13.770 START TEST thread_poller_perf 00:10:13.770 ************************************ 00:10:13.770 11:20:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.770 [2024-11-26 11:20:31.725562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
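The summary block above is internally consistent: poller_cost is the busy cycle count divided by the number of poller invocations, and the nanosecond figure follows from the reported TSC rate. A quick check of the arithmetic, using shell integer division with the values copied from the run above:

    echo $(( 2218017984 / 279000 ))              # 7949 cycles per timed-poller invocation
    echo $(( 7949 * 1000000000 / 2200000000 ))   # 3613 nsec at a tsc_hz of 2.2 GHz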
00:10:13.770 [2024-11-26 11:20:31.725757] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74462 ]
00:10:13.770 [2024-11-26 11:20:31.882613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:13.770 [2024-11-26 11:20:31.919467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.770 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:10:15.149 [2024-11-26T11:20:33.379Z] ======================================
00:10:15.149 [2024-11-26T11:20:33.379Z] busy:2205045530 (cyc)
00:10:15.149 [2024-11-26T11:20:33.379Z] total_run_count: 4000000
00:10:15.149 [2024-11-26T11:20:33.379Z] tsc_hz: 2200000000 (cyc)
00:10:15.149 [2024-11-26T11:20:33.379Z] ======================================
00:10:15.149 [2024-11-26T11:20:33.379Z] poller_cost: 551 (cyc), 250 (nsec)
00:10:15.149
00:10:15.149 real 0m1.300s
00:10:15.149 user 0m1.124s
00:10:15.149 sys 0m0.075s
00:10:15.149 11:20:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:15.149 11:20:32 -- common/autotest_common.sh@10 -- # set +x
00:10:15.149 ************************************
00:10:15.149 END TEST thread_poller_perf
00:10:15.149 ************************************
00:10:15.149 11:20:33 -- thread/thread.sh@17 -- # [[ n != \y ]]
00:10:15.149 11:20:33 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:10:15.149 11:20:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:15.149 11:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:15.149 11:20:33 -- common/autotest_common.sh@10 -- # set +x
00:10:15.149 ************************************
00:10:15.149 START TEST thread_spdk_lock
00:10:15.149 ************************************
00:10:15.149 11:20:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:10:15.149 [2024-11-26 11:20:33.079342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
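The START TEST / END TEST banners and the real/user/sys triplets that frame every test in this log come from the run_test helper in autotest_common.sh. A minimal sketch of that pattern, as an approximation rather than the helper's verbatim source:

  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST ${name}"
    echo "************************************"
    time "$@"                  # the timed command; "time" emits the real/user/sys lines on stderr
    local rc=$?
    echo "************************************"
    echo "END TEST ${name}"
    echo "************************************"
    return $rc
  }
  run_test_sketch thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock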
00:10:15.149 [2024-11-26 11:20:33.079525] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74498 ]
00:10:15.149 [2024-11-26 11:20:33.244158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:15.149 [2024-11-26 11:20:33.276016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.149 [2024-11-26 11:20:33.276083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:15.719 [2024-11-26 11:20:33.819798] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:15.719 [2024-11-26 11:20:33.819912] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:10:15.719 [2024-11-26 11:20:33.819939] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x6092efe68c40
00:10:15.719 [2024-11-26 11:20:33.821311] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:15.719 [2024-11-26 11:20:33.821410] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:15.719 [2024-11-26 11:20:33.821441] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:15.719 Starting test contend
00:10:15.719 Worker Delay Wait us Hold us Total us
00:10:15.719 0 3 135368 202613 337982
00:10:15.719 1 5 64764 306271 371036
00:10:15.719 PASS test contend
00:10:15.719 Starting test hold_by_poller
00:10:15.719 PASS test hold_by_poller
00:10:15.719 Starting test hold_by_message
00:10:15.719 PASS test hold_by_message
00:10:15.719 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:10:15.719 100014 assertions passed
00:10:15.719 0 assertions failed
00:10:15.719
00:10:15.719 real 0m0.845s
00:10:15.719 user 0m1.215s
00:10:15.719 sys 0m0.076s
00:10:15.719 11:20:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:15.719 11:20:33 -- common/autotest_common.sh@10 -- # set +x
00:10:15.719 ************************************
00:10:15.719 END TEST thread_spdk_lock
00:10:15.719 ************************************
00:10:15.719 ************************************
00:10:15.719 END TEST thread
00:10:15.719 ************************************
00:10:15.719
00:10:15.719 real 0m3.814s
00:10:15.719 user 0m3.638s
00:10:15.719 sys 0m0.416s
00:10:15.719 11:20:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:15.719 11:20:33 -- common/autotest_common.sh@10 -- # set +x
00:10:15.978 11:20:33 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:10:15.978 11:20:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:15.978 11:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:15.978 11:20:33 -- common/autotest_common.sh@10 -- # set +x
00:10:15.978 ************************************
00:10:15.978 START TEST accel
************************************ 00:10:15.978 11:20:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:15.978 * Looking for test storage... 00:10:15.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:15.978 11:20:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:15.978 11:20:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:15.978 11:20:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:15.978 11:20:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:15.978 11:20:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:15.978 11:20:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:15.978 11:20:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:15.978 11:20:34 -- scripts/common.sh@335 -- # IFS=.-: 00:10:15.978 11:20:34 -- scripts/common.sh@335 -- # read -ra ver1 00:10:15.978 11:20:34 -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.978 11:20:34 -- scripts/common.sh@336 -- # read -ra ver2 00:10:15.978 11:20:34 -- scripts/common.sh@337 -- # local 'op=<' 00:10:15.978 11:20:34 -- scripts/common.sh@339 -- # ver1_l=2 00:10:15.978 11:20:34 -- scripts/common.sh@340 -- # ver2_l=1 00:10:15.978 11:20:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:15.978 11:20:34 -- scripts/common.sh@343 -- # case "$op" in 00:10:15.978 11:20:34 -- scripts/common.sh@344 -- # : 1 00:10:15.978 11:20:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:15.978 11:20:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.978 11:20:34 -- scripts/common.sh@364 -- # decimal 1 00:10:15.978 11:20:34 -- scripts/common.sh@352 -- # local d=1 00:10:15.978 11:20:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.978 11:20:34 -- scripts/common.sh@354 -- # echo 1 00:10:15.978 11:20:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:15.978 11:20:34 -- scripts/common.sh@365 -- # decimal 2 00:10:15.978 11:20:34 -- scripts/common.sh@352 -- # local d=2 00:10:15.978 11:20:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.978 11:20:34 -- scripts/common.sh@354 -- # echo 2 00:10:15.978 11:20:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:15.978 11:20:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:15.978 11:20:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:15.978 11:20:34 -- scripts/common.sh@367 -- # return 0 00:10:15.978 11:20:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.978 11:20:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:15.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.978 --rc genhtml_branch_coverage=1 00:10:15.978 --rc genhtml_function_coverage=1 00:10:15.978 --rc genhtml_legend=1 00:10:15.978 --rc geninfo_all_blocks=1 00:10:15.978 --rc geninfo_unexecuted_blocks=1 00:10:15.978 00:10:15.978 ' 00:10:15.978 11:20:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:15.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.978 --rc genhtml_branch_coverage=1 00:10:15.978 --rc genhtml_function_coverage=1 00:10:15.978 --rc genhtml_legend=1 00:10:15.978 --rc geninfo_all_blocks=1 00:10:15.978 --rc geninfo_unexecuted_blocks=1 00:10:15.978 00:10:15.978 ' 00:10:15.978 11:20:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:15.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.978 --rc genhtml_branch_coverage=1 00:10:15.978 --rc 
genhtml_function_coverage=1 00:10:15.978 --rc genhtml_legend=1 00:10:15.978 --rc geninfo_all_blocks=1 00:10:15.978 --rc geninfo_unexecuted_blocks=1 00:10:15.978 00:10:15.978 ' 00:10:15.978 11:20:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:15.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.978 --rc genhtml_branch_coverage=1 00:10:15.978 --rc genhtml_function_coverage=1 00:10:15.978 --rc genhtml_legend=1 00:10:15.978 --rc geninfo_all_blocks=1 00:10:15.978 --rc geninfo_unexecuted_blocks=1 00:10:15.978 00:10:15.978 ' 00:10:15.978 11:20:34 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:15.978 11:20:34 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:15.978 11:20:34 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:15.978 11:20:34 -- accel/accel.sh@59 -- # spdk_tgt_pid=74576 00:10:15.978 11:20:34 -- accel/accel.sh@60 -- # waitforlisten 74576 00:10:15.978 11:20:34 -- common/autotest_common.sh@829 -- # '[' -z 74576 ']' 00:10:15.978 11:20:34 -- accel/accel.sh@58 -- # build_accel_config 00:10:15.978 11:20:34 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:15.978 11:20:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.978 11:20:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.979 11:20:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.979 11:20:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.979 11:20:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.979 11:20:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.979 11:20:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.979 11:20:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.979 11:20:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.979 11:20:34 -- accel/accel.sh@42 -- # jq -r . 00:10:15.979 11:20:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.979 11:20:34 -- common/autotest_common.sh@10 -- # set +x 00:10:15.979 [2024-11-26 11:20:34.207638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:15.979 [2024-11-26 11:20:34.207784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74576 ] 00:10:16.238 [2024-11-26 11:20:34.359796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.239 [2024-11-26 11:20:34.395506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.239 [2024-11-26 11:20:34.395786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.176 11:20:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.176 11:20:35 -- common/autotest_common.sh@862 -- # return 0 00:10:17.176 11:20:35 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:17.176 11:20:35 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:17.176 11:20:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.176 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:10:17.176 11:20:35 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:17.176 11:20:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # IFS== 00:10:17.176 11:20:35 -- accel/accel.sh@64 -- # read -r opc module 00:10:17.176 11:20:35 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:17.176 11:20:35 -- accel/accel.sh@67 -- # killprocess 74576 00:10:17.176 11:20:35 -- common/autotest_common.sh@936 -- # '[' -z 74576 ']' 00:10:17.176 11:20:35 -- common/autotest_common.sh@940 -- # kill -0 74576 00:10:17.176 11:20:35 -- common/autotest_common.sh@941 -- # uname 00:10:17.176 11:20:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.176 11:20:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74576 00:10:17.176 11:20:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:17.176 killing process with pid 74576 00:10:17.176 11:20:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:17.176 11:20:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74576' 00:10:17.176 11:20:35 -- common/autotest_common.sh@955 -- # kill 74576 00:10:17.176 11:20:35 -- common/autotest_common.sh@960 -- # wait 74576 00:10:17.436 11:20:35 -- accel/accel.sh@68 -- # trap - ERR 00:10:17.436 11:20:35 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:17.436 11:20:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:17.436 11:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.436 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 11:20:35 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:10:17.436 11:20:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:17.436 11:20:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.436 11:20:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.436 11:20:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.436 11:20:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.436 11:20:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.436 11:20:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.436 11:20:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.436 11:20:35 -- accel/accel.sh@42 -- # jq -r . 
00:10:17.436 11:20:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.436 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 11:20:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:17.436 11:20:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:17.436 11:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.436 11:20:35 -- common/autotest_common.sh@10 -- # set +x 00:10:17.436 ************************************ 00:10:17.436 START TEST accel_missing_filename 00:10:17.436 ************************************ 00:10:17.436 11:20:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:10:17.436 11:20:35 -- common/autotest_common.sh@650 -- # local es=0 00:10:17.436 11:20:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:17.436 11:20:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:17.436 11:20:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.436 11:20:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:17.436 11:20:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.436 11:20:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:10:17.436 11:20:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:17.436 11:20:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.436 11:20:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.436 11:20:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.436 11:20:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.436 11:20:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.436 11:20:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.436 11:20:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.436 11:20:35 -- accel/accel.sh@42 -- # jq -r . 00:10:17.436 [2024-11-26 11:20:35.651818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:17.436 [2024-11-26 11:20:35.652011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74630 ] 00:10:17.695 [2024-11-26 11:20:35.816922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.695 [2024-11-26 11:20:35.848516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.695 [2024-11-26 11:20:35.879589] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:17.695 [2024-11-26 11:20:35.927177] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:17.955 A filename is required. 
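The es=234, es=106, es=1 sequence that follows is the NOT wrapper normalizing the failed accel_perf exit status before asserting it is non-zero. A sketch of just that arithmetic (not the helper's verbatim code):

  es=234                                  # raw exit status from the failed accel_perf run
  (( es > 128 )) && es=$(( es - 128 ))    # 234 -> 106: strip the 128+signal encoding
  (( es != 0 )) && es=1                   # collapse any remaining failure code to 1
  echo "es=${es}"                         # NOT then succeeds precisely because es != 0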
00:10:17.955 11:20:36 -- common/autotest_common.sh@653 -- # es=234 00:10:17.955 11:20:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.955 11:20:36 -- common/autotest_common.sh@662 -- # es=106 00:10:17.955 11:20:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:17.955 11:20:36 -- common/autotest_common.sh@670 -- # es=1 00:10:17.955 11:20:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.955 00:10:17.955 real 0m0.386s 00:10:17.955 user 0m0.184s 00:10:17.955 sys 0m0.110s 00:10:17.955 11:20:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.955 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 ************************************ 00:10:17.955 END TEST accel_missing_filename 00:10:17.955 ************************************ 00:10:17.955 11:20:36 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:17.955 11:20:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:17.955 11:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.955 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 ************************************ 00:10:17.955 START TEST accel_compress_verify 00:10:17.955 ************************************ 00:10:17.955 11:20:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:17.955 11:20:36 -- common/autotest_common.sh@650 -- # local es=0 00:10:17.955 11:20:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:17.955 11:20:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:17.955 11:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.955 11:20:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:17.955 11:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.955 11:20:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:17.955 11:20:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:17.955 11:20:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.955 11:20:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.955 11:20:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.955 11:20:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.955 11:20:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.955 11:20:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.955 11:20:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.955 11:20:36 -- accel/accel.sh@42 -- # jq -r . 00:10:17.955 [2024-11-26 11:20:36.092161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:17.955 [2024-11-26 11:20:36.092846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74650 ] 00:10:18.214 [2024-11-26 11:20:36.262454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.214 [2024-11-26 11:20:36.294107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.214 [2024-11-26 11:20:36.325680] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.214 [2024-11-26 11:20:36.373191] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:18.214 00:10:18.214 Compression does not support the verify option, aborting. 00:10:18.214 11:20:36 -- common/autotest_common.sh@653 -- # es=161 00:10:18.214 11:20:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.214 11:20:36 -- common/autotest_common.sh@662 -- # es=33 00:10:18.214 11:20:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:18.214 11:20:36 -- common/autotest_common.sh@670 -- # es=1 00:10:18.214 11:20:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.214 00:10:18.214 real 0m0.397s 00:10:18.214 user 0m0.182s 00:10:18.214 sys 0m0.120s 00:10:18.214 11:20:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.473 ************************************ 00:10:18.473 END TEST accel_compress_verify 00:10:18.473 ************************************ 00:10:18.473 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:18.473 11:20:36 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:18.473 11:20:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:18.473 11:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.473 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:18.473 ************************************ 00:10:18.473 START TEST accel_wrong_workload 00:10:18.473 ************************************ 00:10:18.473 11:20:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:10:18.473 11:20:36 -- common/autotest_common.sh@650 -- # local es=0 00:10:18.473 11:20:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:18.473 11:20:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:18.473 11:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.473 11:20:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:18.473 11:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.473 11:20:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:10:18.473 11:20:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:18.473 11:20:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.473 11:20:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.473 11:20:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.473 11:20:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.473 11:20:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.473 11:20:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.473 11:20:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.473 11:20:36 -- accel/accel.sh@42 -- # jq -r . 
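In the trace above, valid_exec_arg probes its target with type -t before NOT invokes it, so a typo'd command fails fast instead of producing a misleading test result. A small reconstruction of that probe (our sketch, not the verbatim helper):

  valid_exec_arg_sketch() {
    local arg=$1
    case "$(type -t "$arg")" in
      function|builtin|file|alias|keyword) return 0 ;;   # something bash can actually run
      *) echo "not executable: ${arg}" >&2; return 1 ;;
    esac
  }
  valid_exec_arg_sketch accel_perf && echo "accel_perf resolves to something callable"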
00:10:18.473 Unsupported workload type: foobar 00:10:18.473 [2024-11-26 11:20:36.533931] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:18.473 accel_perf options: 00:10:18.473 [-h help message] 00:10:18.473 [-q queue depth per core] 00:10:18.473 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:18.473 [-T number of threads per core 00:10:18.473 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:18.473 [-t time in seconds] 00:10:18.473 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:18.473 [ dif_verify, , dif_generate, dif_generate_copy 00:10:18.473 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:18.473 [-l for compress/decompress workloads, name of uncompressed input file 00:10:18.473 [-S for crc32c workload, use this seed value (default 0) 00:10:18.473 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:18.473 [-f for fill workload, use this BYTE value (default 255) 00:10:18.473 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:18.473 [-y verify result if this switch is on] 00:10:18.473 [-a tasks to allocate per core (default: same value as -q)] 00:10:18.473 Can be used to spread operations across a wider range of memory. 00:10:18.473 11:20:36 -- common/autotest_common.sh@653 -- # es=1 00:10:18.473 11:20:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.473 11:20:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.473 11:20:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.473 00:10:18.473 real 0m0.061s 00:10:18.473 user 0m0.038s 00:10:18.473 sys 0m0.032s 00:10:18.473 ************************************ 00:10:18.473 END TEST accel_wrong_workload 00:10:18.473 ************************************ 00:10:18.473 11:20:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.473 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:18.473 11:20:36 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:18.473 11:20:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:18.473 11:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.473 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:18.473 ************************************ 00:10:18.473 START TEST accel_negative_buffers 00:10:18.473 ************************************ 00:10:18.473 11:20:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:18.473 11:20:36 -- common/autotest_common.sh@650 -- # local es=0 00:10:18.473 11:20:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:18.473 11:20:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:18.473 11:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.473 11:20:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:18.473 11:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.473 11:20:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:10:18.473 11:20:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:18.473 11:20:36 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:18.473 11:20:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.473 11:20:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.473 11:20:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.473 11:20:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.473 11:20:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.473 11:20:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.473 11:20:36 -- accel/accel.sh@42 -- # jq -r . 00:10:18.473 -x option must be non-negative. 00:10:18.473 [2024-11-26 11:20:36.647424] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:18.473 accel_perf options: 00:10:18.473 [-h help message] 00:10:18.473 [-q queue depth per core] 00:10:18.473 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:18.473 [-T number of threads per core 00:10:18.473 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:18.473 [-t time in seconds] 00:10:18.473 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:18.473 [ dif_verify, , dif_generate, dif_generate_copy 00:10:18.473 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:18.473 [-l for compress/decompress workloads, name of uncompressed input file 00:10:18.473 [-S for crc32c workload, use this seed value (default 0) 00:10:18.473 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:18.473 [-f for fill workload, use this BYTE value (default 255) 00:10:18.473 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:18.473 [-y verify result if this switch is on] 00:10:18.474 [-a tasks to allocate per core (default: same value as -q)] 00:10:18.474 Can be used to spread operations across a wider range of memory. 
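The -x -1 passed to accel_perf above is rejected during spdk_app_parse_args, before any accel task is allocated. The actual check lives in the C option parser; the logic amounts to this bash sketch (function name and flow are ours):

  check_xor_sources() {
    local x=$1
    if (( x < 0 )); then
      echo "-x option must be non-negative." >&2
      return 1                             # accel_perf prints the usage block and exits here
    fi
  }
  check_xor_sources -1 || echo "parse failed as expected"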
00:10:18.474 11:20:36 -- common/autotest_common.sh@653 -- # es=1 00:10:18.474 11:20:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.474 11:20:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.474 11:20:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.474 00:10:18.474 real 0m0.062s 00:10:18.474 user 0m0.041s 00:10:18.474 sys 0m0.029s 00:10:18.474 11:20:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.474 ************************************ 00:10:18.474 END TEST accel_negative_buffers 00:10:18.474 ************************************ 00:10:18.474 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:18.733 11:20:36 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:18.733 11:20:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:18.733 11:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.733 11:20:36 -- common/autotest_common.sh@10 -- # set +x 00:10:18.733 ************************************ 00:10:18.733 START TEST accel_crc32c 00:10:18.733 ************************************ 00:10:18.733 11:20:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:18.733 11:20:36 -- accel/accel.sh@16 -- # local accel_opc 00:10:18.733 11:20:36 -- accel/accel.sh@17 -- # local accel_module 00:10:18.733 11:20:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:18.733 11:20:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:18.733 11:20:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.733 11:20:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.733 11:20:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.733 11:20:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.733 11:20:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.733 11:20:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.733 11:20:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.733 11:20:36 -- accel/accel.sh@42 -- # jq -r . 00:10:18.733 [2024-11-26 11:20:36.762498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.733 [2024-11-26 11:20:36.762645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74717 ] 00:10:18.733 [2024-11-26 11:20:36.928376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.733 [2024-11-26 11:20:36.962380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.111 11:20:38 -- accel/accel.sh@18 -- # out=' 00:10:20.111 SPDK Configuration: 00:10:20.112 Core mask: 0x1 00:10:20.112 00:10:20.112 Accel Perf Configuration: 00:10:20.112 Workload Type: crc32c 00:10:20.112 CRC-32C seed: 32 00:10:20.112 Transfer size: 4096 bytes 00:10:20.112 Vector count 1 00:10:20.112 Module: software 00:10:20.112 Queue depth: 32 00:10:20.112 Allocate depth: 32 00:10:20.112 # threads/core: 1 00:10:20.112 Run time: 1 seconds 00:10:20.112 Verify: Yes 00:10:20.112 00:10:20.112 Running for 1 seconds... 
00:10:20.112 00:10:20.112 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:20.112 ------------------------------------------------------------------------------------ 00:10:20.112 0,0 449248/s 1754 MiB/s 0 0 00:10:20.112 ==================================================================================== 00:10:20.112 Total 449248/s 1754 MiB/s 0 0' 00:10:20.112 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.112 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.112 11:20:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:20.112 11:20:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:20.112 11:20:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.112 11:20:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.112 11:20:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.112 11:20:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.112 11:20:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.112 11:20:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.112 11:20:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.112 11:20:38 -- accel/accel.sh@42 -- # jq -r . 00:10:20.112 [2024-11-26 11:20:38.156922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:20.112 [2024-11-26 11:20:38.157080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74732 ] 00:10:20.112 [2024-11-26 11:20:38.318174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.371 [2024-11-26 11:20:38.352765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=0x1 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=crc32c 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=32 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=software 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@23 -- # accel_module=software 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=32 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=32 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=1 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val=Yes 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:20.371 11:20:38 -- accel/accel.sh@21 -- # val= 00:10:20.371 11:20:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # IFS=: 00:10:20.371 11:20:38 -- accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@21 -- # val= 00:10:21.306 11:20:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # IFS=: 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@21 -- # val= 00:10:21.306 11:20:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # IFS=: 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@21 -- # val= 00:10:21.306 11:20:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # IFS=: 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@21 -- # val= 00:10:21.306 11:20:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # IFS=: 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@21 -- # val= 00:10:21.306 11:20:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # IFS=: 00:10:21.306 11:20:39 -- 
accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@21 -- # val= 00:10:21.306 11:20:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # IFS=: 00:10:21.306 11:20:39 -- accel/accel.sh@20 -- # read -r var val 00:10:21.306 11:20:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:21.306 11:20:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:21.306 11:20:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:21.306 00:10:21.306 real 0m2.795s 00:10:21.306 user 0m2.370s 00:10:21.306 sys 0m0.239s 00:10:21.306 11:20:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:21.306 ************************************ 00:10:21.306 END TEST accel_crc32c 00:10:21.306 ************************************ 00:10:21.306 11:20:39 -- common/autotest_common.sh@10 -- # set +x 00:10:21.566 11:20:39 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:21.566 11:20:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:21.566 11:20:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.566 11:20:39 -- common/autotest_common.sh@10 -- # set +x 00:10:21.566 ************************************ 00:10:21.566 START TEST accel_crc32c_C2 00:10:21.566 ************************************ 00:10:21.566 11:20:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:21.566 11:20:39 -- accel/accel.sh@16 -- # local accel_opc 00:10:21.566 11:20:39 -- accel/accel.sh@17 -- # local accel_module 00:10:21.566 11:20:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:21.566 11:20:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:21.566 11:20:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.566 11:20:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.566 11:20:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.566 11:20:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.566 11:20:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.566 11:20:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.566 11:20:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.566 11:20:39 -- accel/accel.sh@42 -- # jq -r . 00:10:21.566 [2024-11-26 11:20:39.608204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:21.566 [2024-11-26 11:20:39.608360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74773 ] 00:10:21.566 [2024-11-26 11:20:39.772647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.824 [2024-11-26 11:20:39.805171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.761 11:20:40 -- accel/accel.sh@18 -- # out=' 00:10:22.761 SPDK Configuration: 00:10:22.761 Core mask: 0x1 00:10:22.761 00:10:22.761 Accel Perf Configuration: 00:10:22.761 Workload Type: crc32c 00:10:22.761 CRC-32C seed: 0 00:10:22.762 Transfer size: 4096 bytes 00:10:22.762 Vector count 2 00:10:22.762 Module: software 00:10:22.762 Queue depth: 32 00:10:22.762 Allocate depth: 32 00:10:22.762 # threads/core: 1 00:10:22.762 Run time: 1 seconds 00:10:22.762 Verify: Yes 00:10:22.762 00:10:22.762 Running for 1 seconds... 
00:10:22.762 00:10:22.762 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:22.762 ------------------------------------------------------------------------------------ 00:10:22.762 0,0 344512/s 2691 MiB/s 0 0 00:10:22.762 ==================================================================================== 00:10:22.762 Total 344512/s 1345 MiB/s 0 0' 00:10:22.762 11:20:40 -- accel/accel.sh@20 -- # IFS=: 00:10:22.762 11:20:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:22.762 11:20:40 -- accel/accel.sh@20 -- # read -r var val 00:10:22.762 11:20:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:22.762 11:20:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.762 11:20:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:22.762 11:20:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.762 11:20:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.762 11:20:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:22.762 11:20:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:22.762 11:20:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:22.762 11:20:40 -- accel/accel.sh@42 -- # jq -r . 00:10:22.762 [2024-11-26 11:20:40.989855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:22.762 [2024-11-26 11:20:40.990044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74788 ] 00:10:23.020 [2024-11-26 11:20:41.143087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.020 [2024-11-26 11:20:41.175043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.020 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.020 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.020 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.020 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.020 11:20:41 -- accel/accel.sh@21 -- # val=0x1 00:10:23.020 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.020 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.020 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.020 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.020 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.020 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.020 11:20:41 -- accel/accel.sh@21 -- # val=crc32c 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val=0 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val=software 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@23 -- # accel_module=software 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val=32 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val=32 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val=1 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val=Yes 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:23.021 11:20:41 -- accel/accel.sh@21 -- # val= 00:10:23.021 11:20:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # IFS=: 00:10:23.021 11:20:41 -- accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@21 -- # val= 00:10:24.397 11:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@21 -- # val= 00:10:24.397 11:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@21 -- # val= 00:10:24.397 11:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@21 -- # val= 00:10:24.397 11:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@21 -- # val= 00:10:24.397 11:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.397 11:20:42 -- 
accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@21 -- # val= 00:10:24.397 11:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.397 11:20:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.397 11:20:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:24.397 11:20:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:24.397 11:20:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.397 00:10:24.397 real 0m2.756s 00:10:24.397 user 0m2.335s 00:10:24.397 sys 0m0.241s 00:10:24.397 11:20:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.397 11:20:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.397 ************************************ 00:10:24.397 END TEST accel_crc32c_C2 00:10:24.397 ************************************ 00:10:24.397 11:20:42 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:24.397 11:20:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:24.397 11:20:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.397 11:20:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.397 ************************************ 00:10:24.397 START TEST accel_copy 00:10:24.397 ************************************ 00:10:24.397 11:20:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:10:24.397 11:20:42 -- accel/accel.sh@16 -- # local accel_opc 00:10:24.397 11:20:42 -- accel/accel.sh@17 -- # local accel_module 00:10:24.397 11:20:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:24.397 11:20:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:24.397 11:20:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.397 11:20:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.397 11:20:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.397 11:20:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.397 11:20:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.397 11:20:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.397 11:20:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.397 11:20:42 -- accel/accel.sh@42 -- # jq -r . 00:10:24.397 [2024-11-26 11:20:42.419724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:24.397 [2024-11-26 11:20:42.419929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74818 ] 00:10:24.397 [2024-11-26 11:20:42.580957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.397 [2024-11-26 11:20:42.614437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.776 11:20:43 -- accel/accel.sh@18 -- # out=' 00:10:25.776 SPDK Configuration: 00:10:25.776 Core mask: 0x1 00:10:25.776 00:10:25.776 Accel Perf Configuration: 00:10:25.776 Workload Type: copy 00:10:25.776 Transfer size: 4096 bytes 00:10:25.776 Vector count 1 00:10:25.776 Module: software 00:10:25.776 Queue depth: 32 00:10:25.776 Allocate depth: 32 00:10:25.776 # threads/core: 1 00:10:25.776 Run time: 1 seconds 00:10:25.776 Verify: Yes 00:10:25.776 00:10:25.776 Running for 1 seconds... 
00:10:25.776 00:10:25.776 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:25.776 ------------------------------------------------------------------------------------ 00:10:25.776 0,0 278784/s 1089 MiB/s 0 0 00:10:25.776 ==================================================================================== 00:10:25.776 Total 278784/s 1089 MiB/s 0 0' 00:10:25.776 11:20:43 -- accel/accel.sh@20 -- # IFS=: 00:10:25.776 11:20:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:25.776 11:20:43 -- accel/accel.sh@20 -- # read -r var val 00:10:25.776 11:20:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:25.776 11:20:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.776 11:20:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.776 11:20:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.776 11:20:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.776 11:20:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.776 11:20:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.776 11:20:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.776 11:20:43 -- accel/accel.sh@42 -- # jq -r . 00:10:25.776 [2024-11-26 11:20:43.822315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:25.776 [2024-11-26 11:20:43.822499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74844 ] 00:10:25.776 [2024-11-26 11:20:43.986486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.035 [2024-11-26 11:20:44.019243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.035 11:20:44 -- accel/accel.sh@21 -- # val= 00:10:26.035 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.035 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.035 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.035 11:20:44 -- accel/accel.sh@21 -- # val= 00:10:26.035 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.035 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.035 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.035 11:20:44 -- accel/accel.sh@21 -- # val=0x1 00:10:26.035 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val= 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val= 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val=copy 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- 
accel/accel.sh@21 -- # val= 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val=software 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@23 -- # accel_module=software 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val=32 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val=32 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val=1 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val=Yes 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val= 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.036 11:20:44 -- accel/accel.sh@21 -- # val= 00:10:26.036 11:20:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.036 11:20:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.972 11:20:45 -- accel/accel.sh@21 -- # val= 00:10:26.972 11:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.972 11:20:45 -- accel/accel.sh@20 -- # IFS=: 00:10:26.972 11:20:45 -- accel/accel.sh@20 -- # read -r var val 00:10:26.972 11:20:45 -- accel/accel.sh@21 -- # val= 00:10:26.972 11:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.972 11:20:45 -- accel/accel.sh@20 -- # IFS=: 00:10:26.972 11:20:45 -- accel/accel.sh@20 -- # read -r var val 00:10:26.973 11:20:45 -- accel/accel.sh@21 -- # val= 00:10:26.973 11:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # IFS=: 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # read -r var val 00:10:26.973 11:20:45 -- accel/accel.sh@21 -- # val= 00:10:26.973 11:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # IFS=: 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # read -r var val 00:10:26.973 11:20:45 -- accel/accel.sh@21 -- # val= 00:10:26.973 11:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # IFS=: 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # read -r var val 00:10:26.973 11:20:45 -- accel/accel.sh@21 -- # val= 00:10:26.973 11:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.973 11:20:45 -- accel/accel.sh@20 -- # IFS=: 00:10:26.973 11:20:45 -- 
accel/accel.sh@20 -- # read -r var val 00:10:26.973 11:20:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:26.973 11:20:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:26.973 ************************************ 00:10:26.973 END TEST accel_copy 00:10:26.973 ************************************ 00:10:26.973 11:20:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.973 00:10:26.973 real 0m2.798s 00:10:26.973 user 0m2.355s 00:10:26.973 sys 0m0.259s 00:10:26.973 11:20:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.973 11:20:45 -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 11:20:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:27.232 11:20:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:27.232 11:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.232 11:20:45 -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 ************************************ 00:10:27.232 START TEST accel_fill 00:10:27.232 ************************************ 00:10:27.232 11:20:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:27.232 11:20:45 -- accel/accel.sh@16 -- # local accel_opc 00:10:27.232 11:20:45 -- accel/accel.sh@17 -- # local accel_module 00:10:27.232 11:20:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:27.232 11:20:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:27.232 11:20:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.232 11:20:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.232 11:20:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.232 11:20:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.232 11:20:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.232 11:20:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.232 11:20:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.232 11:20:45 -- accel/accel.sh@42 -- # jq -r . 00:10:27.232 [2024-11-26 11:20:45.271112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:27.232 [2024-11-26 11:20:45.271313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74874 ] 00:10:27.232 [2024-11-26 11:20:45.435757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.491 [2024-11-26 11:20:45.469268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.428 11:20:46 -- accel/accel.sh@18 -- # out=' 00:10:28.428 SPDK Configuration: 00:10:28.428 Core mask: 0x1 00:10:28.428 00:10:28.428 Accel Perf Configuration: 00:10:28.428 Workload Type: fill 00:10:28.428 Fill pattern: 0x80 00:10:28.428 Transfer size: 4096 bytes 00:10:28.428 Vector count 1 00:10:28.428 Module: software 00:10:28.428 Queue depth: 64 00:10:28.428 Allocate depth: 64 00:10:28.428 # threads/core: 1 00:10:28.428 Run time: 1 seconds 00:10:28.428 Verify: Yes 00:10:28.428 00:10:28.428 Running for 1 seconds... 
00:10:28.428 00:10:28.428 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:28.428 ------------------------------------------------------------------------------------ 00:10:28.428 0,0 398912/s 1558 MiB/s 0 0 00:10:28.428 ==================================================================================== 00:10:28.428 Total 398912/s 1558 MiB/s 0 0' 00:10:28.428 11:20:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:28.428 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.428 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.428 11:20:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:28.428 11:20:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.428 11:20:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.428 11:20:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.428 11:20:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.428 11:20:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.428 11:20:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.428 11:20:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.428 11:20:46 -- accel/accel.sh@42 -- # jq -r . 00:10:28.687 [2024-11-26 11:20:46.666322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:28.687 [2024-11-26 11:20:46.666514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74889 ] 00:10:28.687 [2024-11-26 11:20:46.832519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.687 [2024-11-26 11:20:46.865974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=0x1 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=fill 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=0x80 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 
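The fill workload above is fully described by the accel_perf flags echoed in the trace: -w picks the workload, -f gives the fill byte (128, i.e. the 0x80 pattern reported in the configuration block), -q the queue depth, -a the allocate depth, -t the run time in seconds, and -y turns on verification. A minimal standalone sketch of the same invocation, using the binary path printed by the trace (running it outside the harness is an assumption here, not something this log shows):

    # Software fill benchmark: 0x80 pattern, queue depth 64, 1 second, verified.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y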
00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=software 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@23 -- # accel_module=software 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=64 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=64 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=1 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val=Yes 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.687 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.687 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.687 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.688 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.688 11:20:46 -- accel/accel.sh@21 -- # val= 00:10:28.688 11:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.688 11:20:46 -- accel/accel.sh@20 -- # IFS=: 00:10:28.688 11:20:46 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@21 -- # val= 00:10:30.090 11:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # IFS=: 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@21 -- # val= 00:10:30.090 11:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # IFS=: 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@21 -- # val= 00:10:30.090 11:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # IFS=: 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@21 -- # val= 00:10:30.090 11:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # IFS=: 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@21 -- # val= 00:10:30.090 11:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # IFS=: 
00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@21 -- # val= 00:10:30.090 11:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # IFS=: 00:10:30.090 11:20:48 -- accel/accel.sh@20 -- # read -r var val 00:10:30.090 11:20:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:30.090 11:20:48 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:30.090 11:20:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.090 00:10:30.090 real 0m2.813s 00:10:30.090 user 0m2.391s 00:10:30.090 sys 0m0.249s 00:10:30.090 11:20:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:30.090 11:20:48 -- common/autotest_common.sh@10 -- # set +x 00:10:30.090 ************************************ 00:10:30.090 END TEST accel_fill 00:10:30.090 ************************************ 00:10:30.090 11:20:48 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:30.090 11:20:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:30.090 11:20:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.090 11:20:48 -- common/autotest_common.sh@10 -- # set +x 00:10:30.090 ************************************ 00:10:30.090 START TEST accel_copy_crc32c 00:10:30.090 ************************************ 00:10:30.090 11:20:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:10:30.090 11:20:48 -- accel/accel.sh@16 -- # local accel_opc 00:10:30.090 11:20:48 -- accel/accel.sh@17 -- # local accel_module 00:10:30.090 11:20:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:30.090 11:20:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:30.090 11:20:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.090 11:20:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.090 11:20:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.090 11:20:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.090 11:20:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.090 11:20:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.090 11:20:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.090 11:20:48 -- accel/accel.sh@42 -- # jq -r . 00:10:30.090 [2024-11-26 11:20:48.132640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:30.090 [2024-11-26 11:20:48.132824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74929 ] 00:10:30.090 [2024-11-26 11:20:48.300108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.349 [2024-11-26 11:20:48.333890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.287 11:20:49 -- accel/accel.sh@18 -- # out=' 00:10:31.287 SPDK Configuration: 00:10:31.287 Core mask: 0x1 00:10:31.287 00:10:31.287 Accel Perf Configuration: 00:10:31.287 Workload Type: copy_crc32c 00:10:31.287 CRC-32C seed: 0 00:10:31.287 Vector size: 4096 bytes 00:10:31.287 Transfer size: 4096 bytes 00:10:31.287 Vector count 1 00:10:31.287 Module: software 00:10:31.287 Queue depth: 32 00:10:31.287 Allocate depth: 32 00:10:31.287 # threads/core: 1 00:10:31.287 Run time: 1 seconds 00:10:31.287 Verify: Yes 00:10:31.287 00:10:31.287 Running for 1 seconds... 
00:10:31.287 00:10:31.287 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:31.287 ------------------------------------------------------------------------------------ 00:10:31.287 0,0 217184/s 848 MiB/s 0 0 00:10:31.287 ==================================================================================== 00:10:31.287 Total 217184/s 848 MiB/s 0 0' 00:10:31.287 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.287 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.287 11:20:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:31.287 11:20:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:31.287 11:20:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.287 11:20:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.287 11:20:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.287 11:20:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.287 11:20:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.287 11:20:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.287 11:20:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.287 11:20:49 -- accel/accel.sh@42 -- # jq -r . 00:10:31.546 [2024-11-26 11:20:49.537041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:31.546 [2024-11-26 11:20:49.537787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74945 ] 00:10:31.546 [2024-11-26 11:20:49.707250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.546 [2024-11-26 11:20:49.739812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.546 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.546 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.546 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.546 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.547 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.547 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.547 11:20:49 -- accel/accel.sh@21 -- # val=0x1 00:10:31.547 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.547 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.547 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.547 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.547 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.547 11:20:49 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:31.547 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.547 11:20:49 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.547 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.547 11:20:49 -- accel/accel.sh@21 -- # val=0 00:10:31.806 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.806 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.806 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.806 
11:20:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.806 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.806 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.806 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.806 11:20:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val=software 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@23 -- # accel_module=software 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val=32 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val=32 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val=1 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val=Yes 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:31.807 11:20:49 -- accel/accel.sh@21 -- # val= 00:10:31.807 11:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # IFS=: 00:10:31.807 11:20:49 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@21 -- # val= 00:10:32.756 11:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # IFS=: 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@21 -- # val= 00:10:32.756 11:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # IFS=: 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@21 -- # val= 00:10:32.756 11:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # IFS=: 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@21 -- # val= 00:10:32.756 11:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # IFS=: 
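The alternating IFS=: / read -r var val / case "$var" lines that dominate the trace come from the loop in accel.sh that parses accel_perf's self-description, splitting each 'key: value' line on the first colon and recording the opcode and module the tool reports. A simplified sketch of that pattern (the variable names match the trace; the trimming and the reduced set of case arms are abbreviations, not the script's exact body):

    # Parse "key: value" pairs out of the captured accel_perf output in $out.
    while IFS=: read -r var val; do
        val=${val# }                           # drop the space after the colon
        case "$var" in
            'Workload Type') accel_opc=$val ;; # e.g. copy_crc32c
            'Module') accel_module=$val ;;     # e.g. software
        esac
    done <<< "$out"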
00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@21 -- # val= 00:10:32.756 11:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # IFS=: 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@21 -- # val= 00:10:32.756 11:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # IFS=: 00:10:32.756 11:20:50 -- accel/accel.sh@20 -- # read -r var val 00:10:32.756 11:20:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.756 11:20:50 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:32.756 11:20:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.756 00:10:32.756 real 0m2.806s 00:10:32.756 user 0m2.361s 00:10:32.756 sys 0m0.274s 00:10:32.756 11:20:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.756 11:20:50 -- common/autotest_common.sh@10 -- # set +x 00:10:32.756 ************************************ 00:10:32.756 END TEST accel_copy_crc32c 00:10:32.756 ************************************ 00:10:32.756 11:20:50 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:32.756 11:20:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:32.756 11:20:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.756 11:20:50 -- common/autotest_common.sh@10 -- # set +x 00:10:32.756 ************************************ 00:10:32.756 START TEST accel_copy_crc32c_C2 00:10:32.756 ************************************ 00:10:32.756 11:20:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:32.756 11:20:50 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.756 11:20:50 -- accel/accel.sh@17 -- # local accel_module 00:10:32.756 11:20:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:32.756 11:20:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:32.756 11:20:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.756 11:20:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.756 11:20:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.756 11:20:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.756 11:20:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.756 11:20:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.756 11:20:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.756 11:20:50 -- accel/accel.sh@42 -- # jq -r . 00:10:32.756 [2024-11-26 11:20:50.988145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:32.756 [2024-11-26 11:20:50.988340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74981 ] 00:10:33.015 [2024-11-26 11:20:51.152547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.016 [2024-11-26 11:20:51.184011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.394 11:20:52 -- accel/accel.sh@18 -- # out=' 00:10:34.394 SPDK Configuration: 00:10:34.394 Core mask: 0x1 00:10:34.394 00:10:34.394 Accel Perf Configuration: 00:10:34.394 Workload Type: copy_crc32c 00:10:34.394 CRC-32C seed: 0 00:10:34.394 Vector size: 4096 bytes 00:10:34.394 Transfer size: 8192 bytes 00:10:34.394 Vector count 2 00:10:34.394 Module: software 00:10:34.394 Queue depth: 32 00:10:34.394 Allocate depth: 32 00:10:34.394 # threads/core: 1 00:10:34.394 Run time: 1 seconds 00:10:34.394 Verify: Yes 00:10:34.394 00:10:34.394 Running for 1 seconds... 00:10:34.394 00:10:34.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:34.394 ------------------------------------------------------------------------------------ 00:10:34.394 0,0 162880/s 1272 MiB/s 0 0 00:10:34.394 ==================================================================================== 00:10:34.394 Total 162880/s 1272 MiB/s 0 0' 00:10:34.394 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.394 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.394 11:20:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:34.394 11:20:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:34.394 11:20:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.394 11:20:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.394 11:20:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.394 11:20:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.394 11:20:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.394 11:20:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.394 11:20:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.394 11:20:52 -- accel/accel.sh@42 -- # jq -r . 00:10:34.394 [2024-11-26 11:20:52.376948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
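The bandwidth column is simply transfers per second times bytes per transfer: with vector count 2 the transfer size is 8192 bytes, so 162880 transfers/s works out to 162880 * 8192 / 2^20 ≈ 1272 MiB/s, matching both rows of the table above. A one-line shell check of that arithmetic:

    # transfers/s * bytes/transfer, converted to MiB/s (integer math)
    echo $(( 162880 * 8192 / 1024 / 1024 ))   # prints 1272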
00:10:34.394 [2024-11-26 11:20:52.377443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74996 ] 00:10:34.394 [2024-11-26 11:20:52.566375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.394 [2024-11-26 11:20:52.597904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=0x1 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=0 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=software 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@23 -- # accel_module=software 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=32 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=32 
00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=1 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:34.653 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.653 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.653 11:20:52 -- accel/accel.sh@21 -- # val=Yes 00:10:34.654 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.654 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.654 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.654 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.654 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.654 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.654 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:34.654 11:20:52 -- accel/accel.sh@21 -- # val= 00:10:34.654 11:20:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.654 11:20:52 -- accel/accel.sh@20 -- # IFS=: 00:10:34.654 11:20:52 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@21 -- # val= 00:10:35.593 11:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # IFS=: 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@21 -- # val= 00:10:35.593 11:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # IFS=: 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@21 -- # val= 00:10:35.593 11:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # IFS=: 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@21 -- # val= 00:10:35.593 11:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # IFS=: 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@21 -- # val= 00:10:35.593 11:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # IFS=: 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@21 -- # val= 00:10:35.593 11:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # IFS=: 00:10:35.593 11:20:53 -- accel/accel.sh@20 -- # read -r var val 00:10:35.593 11:20:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:35.593 11:20:53 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:35.593 11:20:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.593 ************************************ 00:10:35.593 END TEST accel_copy_crc32c_C2 00:10:35.593 ************************************ 00:10:35.593 00:10:35.593 real 0m2.798s 00:10:35.593 user 0m2.335s 00:10:35.593 sys 0m0.278s 00:10:35.593 11:20:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:35.593 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:10:35.593 11:20:53 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:35.593 11:20:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:10:35.593 11:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.593 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:10:35.593 ************************************ 00:10:35.593 START TEST accel_dualcast 00:10:35.593 ************************************ 00:10:35.593 11:20:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:10:35.593 11:20:53 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.593 11:20:53 -- accel/accel.sh@17 -- # local accel_module 00:10:35.593 11:20:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:35.593 11:20:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:35.593 11:20:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.593 11:20:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.593 11:20:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.593 11:20:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.593 11:20:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.593 11:20:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.593 11:20:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.593 11:20:53 -- accel/accel.sh@42 -- # jq -r . 00:10:35.853 [2024-11-26 11:20:53.839027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:35.853 [2024-11-26 11:20:53.839214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75031 ] 00:10:35.853 [2024-11-26 11:20:54.003051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.853 [2024-11-26 11:20:54.035210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.232 11:20:55 -- accel/accel.sh@18 -- # out=' 00:10:37.232 SPDK Configuration: 00:10:37.232 Core mask: 0x1 00:10:37.232 00:10:37.232 Accel Perf Configuration: 00:10:37.232 Workload Type: dualcast 00:10:37.232 Transfer size: 4096 bytes 00:10:37.232 Vector count 1 00:10:37.232 Module: software 00:10:37.232 Queue depth: 32 00:10:37.232 Allocate depth: 32 00:10:37.232 # threads/core: 1 00:10:37.232 Run time: 1 seconds 00:10:37.232 Verify: Yes 00:10:37.232 00:10:37.232 Running for 1 seconds... 00:10:37.232 00:10:37.232 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.232 ------------------------------------------------------------------------------------ 00:10:37.232 0,0 310208/s 1211 MiB/s 0 0 00:10:37.232 ==================================================================================== 00:10:37.232 Total 310208/s 1211 MiB/s 0 0' 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:37.232 11:20:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:37.232 11:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.232 11:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.232 11:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.232 11:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.232 11:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.232 11:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.232 11:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.232 11:20:55 -- accel/accel.sh@42 -- # jq -r . 
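Every sub-test in this stretch is launched through the harness's run_test helper, which emits the START TEST / END TEST banners, times the wrapped command (the real/user/sys lines after each test), and applies the '[' N -le 1 ']' argument-count guard visible in the trace. A reduced sketch of what such a wrapper does (the real helper in autotest_common.sh also manages xtrace state; this is an illustrative simplification, not its exact body):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # e.g. accel_test -t 1 -w dualcast -y
        echo "END TEST $name"
    }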
00:10:37.232 [2024-11-26 11:20:55.226933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:37.232 [2024-11-26 11:20:55.227113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75052 ] 00:10:37.232 [2024-11-26 11:20:55.391082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.232 [2024-11-26 11:20:55.422417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=0x1 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=dualcast 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=software 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@23 -- # accel_module=software 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=32 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=32 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=1 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 
11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:37.232 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.232 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.232 11:20:55 -- accel/accel.sh@21 -- # val=Yes 00:10:37.492 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.492 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.492 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.492 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.492 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.492 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.492 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:37.492 11:20:55 -- accel/accel.sh@21 -- # val= 00:10:37.492 11:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.492 11:20:55 -- accel/accel.sh@20 -- # IFS=: 00:10:37.492 11:20:55 -- accel/accel.sh@20 -- # read -r var val 00:10:38.429 11:20:56 -- accel/accel.sh@21 -- # val= 00:10:38.429 11:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.429 11:20:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.429 11:20:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.429 11:20:56 -- accel/accel.sh@21 -- # val= 00:10:38.429 11:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.429 11:20:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.429 11:20:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.430 11:20:56 -- accel/accel.sh@21 -- # val= 00:10:38.430 11:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.430 11:20:56 -- accel/accel.sh@21 -- # val= 00:10:38.430 11:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.430 11:20:56 -- accel/accel.sh@21 -- # val= 00:10:38.430 11:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.430 11:20:56 -- accel/accel.sh@21 -- # val= 00:10:38.430 11:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.430 11:20:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.430 11:20:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:38.430 11:20:56 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:38.430 11:20:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.430 00:10:38.430 real 0m2.793s 00:10:38.430 user 0m2.384s 00:10:38.430 sys 0m0.225s 00:10:38.430 11:20:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:38.430 11:20:56 -- common/autotest_common.sh@10 -- # set +x 00:10:38.430 ************************************ 00:10:38.430 END TEST accel_dualcast 00:10:38.430 ************************************ 00:10:38.430 11:20:56 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:38.430 11:20:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:38.430 11:20:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.430 11:20:56 -- common/autotest_common.sh@10 -- # set +x 00:10:38.430 ************************************ 00:10:38.430 START TEST accel_compare 00:10:38.430 ************************************ 00:10:38.430 11:20:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:38.430 
11:20:56 -- accel/accel.sh@16 -- # local accel_opc 00:10:38.430 11:20:56 -- accel/accel.sh@17 -- # local accel_module 00:10:38.430 11:20:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:38.430 11:20:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:38.430 11:20:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.430 11:20:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.430 11:20:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.430 11:20:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.430 11:20:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.430 11:20:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.430 11:20:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.430 11:20:56 -- accel/accel.sh@42 -- # jq -r . 00:10:38.690 [2024-11-26 11:20:56.678101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:38.690 [2024-11-26 11:20:56.678982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ] 00:10:38.690 [2024-11-26 11:20:56.843517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.690 [2024-11-26 11:20:56.882811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.068 11:20:58 -- accel/accel.sh@18 -- # out=' 00:10:40.068 SPDK Configuration: 00:10:40.068 Core mask: 0x1 00:10:40.068 00:10:40.068 Accel Perf Configuration: 00:10:40.068 Workload Type: compare 00:10:40.068 Transfer size: 4096 bytes 00:10:40.068 Vector count 1 00:10:40.068 Module: software 00:10:40.068 Queue depth: 32 00:10:40.068 Allocate depth: 32 00:10:40.068 # threads/core: 1 00:10:40.068 Run time: 1 seconds 00:10:40.068 Verify: Yes 00:10:40.068 00:10:40.068 Running for 1 seconds... 00:10:40.068 00:10:40.068 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.068 ------------------------------------------------------------------------------------ 00:10:40.068 0,0 394528/s 1541 MiB/s 0 0 00:10:40.068 ==================================================================================== 00:10:40.068 Total 394528/s 1541 MiB/s 0 0' 00:10:40.068 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.068 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.068 11:20:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:40.068 11:20:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:40.068 11:20:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.068 11:20:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.068 11:20:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.068 11:20:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.068 11:20:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.068 11:20:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.068 11:20:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.068 11:20:58 -- accel/accel.sh@42 -- # jq -r . 00:10:40.068 [2024-11-26 11:20:58.095726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
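The -c /dev/fd/62 argument in every accel_perf command line points the app at a JSON configuration the shell passes over an inherited file descriptor; the trace shows it being assembled by build_accel_config into accel_json_cfg and normalized with jq -r . before use. A hedged sketch of the idea using ordinary process substitution (bash assigns its own /dev/fd number there, whereas the harness pins fd 62, so this approximates the mechanism rather than reproducing the script):

    # Feed a (here empty) JSON config to accel_perf without a temp file.
    accel_json_cfg='{}'
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(jq -r . <<< "$accel_json_cfg") -t 1 -w compare -y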
00:10:40.068 [2024-11-26 11:20:58.095959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75108 ] 00:10:40.068 [2024-11-26 11:20:58.257631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.068 [2024-11-26 11:20:58.291069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.327 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.327 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.327 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.327 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.327 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.327 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.327 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.327 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.327 11:20:58 -- accel/accel.sh@21 -- # val=0x1 00:10:40.327 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.327 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.327 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.327 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.327 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val=compare 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val=software 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val=32 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val=32 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val=1 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val=Yes 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:40.328 11:20:58 -- accel/accel.sh@21 -- # val= 00:10:40.328 11:20:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # IFS=: 00:10:40.328 11:20:58 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@21 -- # val= 00:10:41.264 11:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # IFS=: 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@21 -- # val= 00:10:41.264 11:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # IFS=: 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@21 -- # val= 00:10:41.264 11:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # IFS=: 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@21 -- # val= 00:10:41.264 11:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # IFS=: 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@21 -- # val= 00:10:41.264 11:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # IFS=: 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@21 -- # val= 00:10:41.264 11:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # IFS=: 00:10:41.264 11:20:59 -- accel/accel.sh@20 -- # read -r var val 00:10:41.264 11:20:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:41.264 11:20:59 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:41.264 11:20:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.264 00:10:41.264 real 0m2.805s 00:10:41.264 user 0m2.373s 00:10:41.264 sys 0m0.245s 00:10:41.264 11:20:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:41.264 ************************************ 00:10:41.264 END TEST accel_compare 00:10:41.264 ************************************ 00:10:41.264 11:20:59 -- common/autotest_common.sh@10 -- # set +x 00:10:41.264 11:20:59 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:41.264 11:20:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:41.264 11:20:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.264 11:20:59 -- common/autotest_common.sh@10 -- # set +x 00:10:41.523 ************************************ 00:10:41.523 START TEST accel_xor 00:10:41.523 ************************************ 00:10:41.523 11:20:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:41.523 11:20:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:41.523 11:20:59 -- accel/accel.sh@17 -- # local accel_module 00:10:41.523 
11:20:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:41.523 11:20:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:41.523 11:20:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.523 11:20:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.523 11:20:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.523 11:20:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.523 11:20:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.524 11:20:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.524 11:20:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.524 11:20:59 -- accel/accel.sh@42 -- # jq -r . 00:10:41.524 [2024-11-26 11:20:59.528226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:41.524 [2024-11-26 11:20:59.528399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75138 ] 00:10:41.524 [2024-11-26 11:20:59.677786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.524 [2024-11-26 11:20:59.710926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.903 11:21:00 -- accel/accel.sh@18 -- # out=' 00:10:42.903 SPDK Configuration: 00:10:42.903 Core mask: 0x1 00:10:42.903 00:10:42.903 Accel Perf Configuration: 00:10:42.903 Workload Type: xor 00:10:42.903 Source buffers: 2 00:10:42.903 Transfer size: 4096 bytes 00:10:42.903 Vector count 1 00:10:42.903 Module: software 00:10:42.903 Queue depth: 32 00:10:42.903 Allocate depth: 32 00:10:42.903 # threads/core: 1 00:10:42.903 Run time: 1 seconds 00:10:42.903 Verify: Yes 00:10:42.903 00:10:42.903 Running for 1 seconds... 00:10:42.903 00:10:42.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.903 ------------------------------------------------------------------------------------ 00:10:42.903 0,0 206176/s 805 MiB/s 0 0 00:10:42.903 ==================================================================================== 00:10:42.903 Total 206176/s 805 MiB/s 0 0' 00:10:42.903 11:21:00 -- accel/accel.sh@20 -- # IFS=: 00:10:42.903 11:21:00 -- accel/accel.sh@20 -- # read -r var val 00:10:42.903 11:21:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:42.903 11:21:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:42.903 11:21:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.903 11:21:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.903 11:21:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.903 11:21:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.903 11:21:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.903 11:21:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.903 11:21:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.903 11:21:00 -- accel/accel.sh@42 -- # jq -r . 00:10:42.903 [2024-11-26 11:21:00.908697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:42.903 [2024-11-26 11:21:00.908868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75153 ] 00:10:42.903 [2024-11-26 11:21:01.075471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.903 [2024-11-26 11:21:01.109712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=0x1 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=xor 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=2 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=software 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=32 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=32 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val=1 00:10:43.162 11:21:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.162 11:21:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.162 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.162 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.163 11:21:01 -- accel/accel.sh@21 -- # val=Yes 00:10:43.163 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.163 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.163 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:43.163 11:21:01 -- accel/accel.sh@21 -- # val= 00:10:43.163 11:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # IFS=: 00:10:43.163 11:21:01 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@21 -- # val= 00:10:44.100 11:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@21 -- # val= 00:10:44.100 11:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@21 -- # val= 00:10:44.100 11:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@21 -- # val= 00:10:44.100 11:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@21 -- # val= 00:10:44.100 11:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@21 -- # val= 00:10:44.100 11:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # IFS=: 00:10:44.100 11:21:02 -- accel/accel.sh@20 -- # read -r var val 00:10:44.100 11:21:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:44.100 11:21:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:44.100 11:21:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.100 00:10:44.100 real 0m2.786s 00:10:44.100 user 0m2.392s 00:10:44.100 sys 0m0.213s 00:10:44.100 11:21:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.100 11:21:02 -- common/autotest_common.sh@10 -- # set +x 00:10:44.100 ************************************ 00:10:44.100 END TEST accel_xor 00:10:44.100 ************************************ 00:10:44.100 11:21:02 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:44.100 11:21:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:44.100 11:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.100 11:21:02 -- common/autotest_common.sh@10 -- # set +x 00:10:44.359 ************************************ 00:10:44.359 START TEST accel_xor 00:10:44.359 ************************************ 00:10:44.359 
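(Annotation: the xor test starting here reruns the same accel_perf example binary with three source buffers instead of two. A minimal reproduction sketch, assuming the SPDK tree at the repo path this log uses; outside the test harness the harness-supplied "-c /dev/fd/62" JSON config seen in the trace would simply be omitted:

    # Sketch: rerun the 3-buffer xor workload traced below (binary path taken from this log).
    #   -t 1   run for one second
    #   -w xor select the xor workload
    #   -y     verify results ("Verify: Yes" in the config dump)
    #   -x 3   use three source buffers ("Source buffers: 3" in the config dump)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

The flags mirror the run_test invocation recorded below; nothing beyond what the trace shows is assumed.)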
11:21:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:10:44.359 11:21:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:44.359 11:21:02 -- accel/accel.sh@17 -- # local accel_module 00:10:44.359 11:21:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:44.359 11:21:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:44.359 11:21:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.359 11:21:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.359 11:21:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.359 11:21:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.359 11:21:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.359 11:21:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.359 11:21:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.359 11:21:02 -- accel/accel.sh@42 -- # jq -r . 00:10:44.359 [2024-11-26 11:21:02.370261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:44.360 [2024-11-26 11:21:02.370488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75194 ] 00:10:44.360 [2024-11-26 11:21:02.537675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.360 [2024-11-26 11:21:02.573541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.760 11:21:03 -- accel/accel.sh@18 -- # out=' 00:10:45.760 SPDK Configuration: 00:10:45.760 Core mask: 0x1 00:10:45.760 00:10:45.760 Accel Perf Configuration: 00:10:45.760 Workload Type: xor 00:10:45.760 Source buffers: 3 00:10:45.760 Transfer size: 4096 bytes 00:10:45.760 Vector count 1 00:10:45.760 Module: software 00:10:45.760 Queue depth: 32 00:10:45.760 Allocate depth: 32 00:10:45.760 # threads/core: 1 00:10:45.760 Run time: 1 seconds 00:10:45.760 Verify: Yes 00:10:45.760 00:10:45.760 Running for 1 seconds... 00:10:45.760 00:10:45.760 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:45.760 ------------------------------------------------------------------------------------ 00:10:45.760 0,0 196160/s 766 MiB/s 0 0 00:10:45.760 ==================================================================================== 00:10:45.760 Total 196160/s 766 MiB/s 0 0' 00:10:45.760 11:21:03 -- accel/accel.sh@20 -- # IFS=: 00:10:45.760 11:21:03 -- accel/accel.sh@20 -- # read -r var val 00:10:45.760 11:21:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:45.760 11:21:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:45.760 11:21:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.760 11:21:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.760 11:21:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.760 11:21:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.760 11:21:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.760 11:21:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.760 11:21:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.760 11:21:03 -- accel/accel.sh@42 -- # jq -r . 00:10:45.760 [2024-11-26 11:21:03.780463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:45.760 [2024-11-26 11:21:03.780629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75209 ] 00:10:45.760 [2024-11-26 11:21:03.950553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.760 [2024-11-26 11:21:03.985104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=0x1 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=xor 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=3 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=software 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@23 -- # accel_module=software 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=32 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=32 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=1 00:10:46.019 11:21:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val=Yes 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.019 11:21:04 -- accel/accel.sh@21 -- # val= 00:10:46.019 11:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # IFS=: 00:10:46.019 11:21:04 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@21 -- # val= 00:10:46.957 11:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@21 -- # val= 00:10:46.957 11:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@21 -- # val= 00:10:46.957 11:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@21 -- # val= 00:10:46.957 11:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@21 -- # val= 00:10:46.957 11:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@21 -- # val= 00:10:46.957 11:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # IFS=: 00:10:46.957 11:21:05 -- accel/accel.sh@20 -- # read -r var val 00:10:46.957 11:21:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:46.957 11:21:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:46.957 11:21:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.957 00:10:46.957 real 0m2.817s 00:10:46.957 user 0m2.387s 00:10:46.957 sys 0m0.246s 00:10:46.957 11:21:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.957 ************************************ 00:10:46.957 END TEST accel_xor 00:10:46.957 ************************************ 00:10:46.957 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:10:47.217 11:21:05 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:47.217 11:21:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:47.217 11:21:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.217 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:10:47.217 ************************************ 00:10:47.217 START TEST accel_dif_verify 00:10:47.217 ************************************ 
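(Annotation: the dif_verify test that follows exercises DIF metadata checking; per the trace below it drives accel_perf with "-w dif_verify", and the configuration dump reports 4096-byte transfers over 512-byte blocks with 8 bytes of metadata each. A hedged sketch under the same assumptions as the previous annotation — repo path from this log, harness-supplied "-c /dev/fd/62" config omitted:

    # Sketch: rerun the dif_verify workload traced below (binary path taken from this log).
    #   -t 1           run for one second
    #   -w dif_verify  select the DIF verify workload; block/metadata sizes fall back
    #                  to the values shown in the "SPDK Configuration" dump below
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

No "-y" is passed here, which is why the dump below reports "Verify: No".)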
00:10:47.217 11:21:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:10:47.217 11:21:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.217 11:21:05 -- accel/accel.sh@17 -- # local accel_module 00:10:47.217 11:21:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:47.217 11:21:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:47.217 11:21:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.217 11:21:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.217 11:21:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.217 11:21:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.217 11:21:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.217 11:21:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.217 11:21:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.217 11:21:05 -- accel/accel.sh@42 -- # jq -r . 00:10:47.217 [2024-11-26 11:21:05.233602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:47.217 [2024-11-26 11:21:05.233750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75245 ] 00:10:47.217 [2024-11-26 11:21:05.386320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.217 [2024-11-26 11:21:05.419943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.597 11:21:06 -- accel/accel.sh@18 -- # out=' 00:10:48.597 SPDK Configuration: 00:10:48.597 Core mask: 0x1 00:10:48.597 00:10:48.597 Accel Perf Configuration: 00:10:48.597 Workload Type: dif_verify 00:10:48.597 Vector size: 4096 bytes 00:10:48.597 Transfer size: 4096 bytes 00:10:48.597 Block size: 512 bytes 00:10:48.597 Metadata size: 8 bytes 00:10:48.597 Vector count 1 00:10:48.597 Module: software 00:10:48.597 Queue depth: 32 00:10:48.597 Allocate depth: 32 00:10:48.597 # threads/core: 1 00:10:48.597 Run time: 1 seconds 00:10:48.597 Verify: No 00:10:48.597 00:10:48.597 Running for 1 seconds... 00:10:48.597 00:10:48.597 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.597 ------------------------------------------------------------------------------------ 00:10:48.597 0,0 94080/s 373 MiB/s 0 0 00:10:48.597 ==================================================================================== 00:10:48.597 Total 94080/s 367 MiB/s 0 0' 00:10:48.597 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.597 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.597 11:21:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:48.597 11:21:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:48.597 11:21:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.597 11:21:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.597 11:21:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.597 11:21:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.598 11:21:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.598 11:21:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.598 11:21:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.598 11:21:06 -- accel/accel.sh@42 -- # jq -r . 00:10:48.598 [2024-11-26 11:21:06.621156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:48.598 [2024-11-26 11:21:06.621317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75261 ] 00:10:48.598 [2024-11-26 11:21:06.786646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.598 [2024-11-26 11:21:06.819672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.857 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.857 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.857 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.857 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.857 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.857 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val=0x1 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val=dif_verify 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val=software 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@23 -- # accel_module=software 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 
-- # val=32 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val=32 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val=1 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val=No 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.858 11:21:06 -- accel/accel.sh@21 -- # val= 00:10:48.858 11:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.858 11:21:06 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@21 -- # val= 00:10:49.795 11:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # IFS=: 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@21 -- # val= 00:10:49.795 11:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # IFS=: 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@21 -- # val= 00:10:49.795 11:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # IFS=: 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@21 -- # val= 00:10:49.795 11:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # IFS=: 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@21 -- # val= 00:10:49.795 11:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # IFS=: 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@21 -- # val= 00:10:49.795 11:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # IFS=: 00:10:49.795 11:21:07 -- accel/accel.sh@20 -- # read -r var val 00:10:49.795 11:21:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:49.795 11:21:07 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:49.795 11:21:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.795 00:10:49.795 real 0m2.780s 00:10:49.795 user 0m2.353s 00:10:49.795 sys 0m0.243s 00:10:49.795 11:21:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:49.795 11:21:07 -- common/autotest_common.sh@10 -- # set +x 00:10:49.795 ************************************ 00:10:49.795 END TEST 
accel_dif_verify 00:10:49.795 ************************************ 00:10:49.795 11:21:08 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:49.795 11:21:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:50.054 11:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.054 11:21:08 -- common/autotest_common.sh@10 -- # set +x 00:10:50.054 ************************************ 00:10:50.054 START TEST accel_dif_generate 00:10:50.054 ************************************ 00:10:50.054 11:21:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:10:50.054 11:21:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.054 11:21:08 -- accel/accel.sh@17 -- # local accel_module 00:10:50.054 11:21:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:50.055 11:21:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:50.055 11:21:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.055 11:21:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.055 11:21:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.055 11:21:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.055 11:21:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.055 11:21:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.055 11:21:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.055 11:21:08 -- accel/accel.sh@42 -- # jq -r . 00:10:50.055 [2024-11-26 11:21:08.077515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:50.055 [2024-11-26 11:21:08.077696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75295 ] 00:10:50.055 [2024-11-26 11:21:08.243241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.055 [2024-11-26 11:21:08.275637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.433 11:21:09 -- accel/accel.sh@18 -- # out=' 00:10:51.433 SPDK Configuration: 00:10:51.433 Core mask: 0x1 00:10:51.433 00:10:51.433 Accel Perf Configuration: 00:10:51.433 Workload Type: dif_generate 00:10:51.433 Vector size: 4096 bytes 00:10:51.433 Transfer size: 4096 bytes 00:10:51.433 Block size: 512 bytes 00:10:51.433 Metadata size: 8 bytes 00:10:51.433 Vector count 1 00:10:51.433 Module: software 00:10:51.433 Queue depth: 32 00:10:51.433 Allocate depth: 32 00:10:51.433 # threads/core: 1 00:10:51.433 Run time: 1 seconds 00:10:51.434 Verify: No 00:10:51.434 00:10:51.434 Running for 1 seconds... 
00:10:51.434 00:10:51.434 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:51.434 ------------------------------------------------------------------------------------ 00:10:51.434 0,0 116640/s 462 MiB/s 0 0 00:10:51.434 ==================================================================================== 00:10:51.434 Total 116640/s 455 MiB/s 0 0' 00:10:51.434 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.434 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.434 11:21:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:51.434 11:21:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:51.434 11:21:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.434 11:21:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.434 11:21:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.434 11:21:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.434 11:21:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.434 11:21:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.434 11:21:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.434 11:21:09 -- accel/accel.sh@42 -- # jq -r . 00:10:51.434 [2024-11-26 11:21:09.470998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:51.434 [2024-11-26 11:21:09.471178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75316 ] 00:10:51.434 [2024-11-26 11:21:09.634807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.434 [2024-11-26 11:21:09.667873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=0x1 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=dif_generate 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 
00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=software 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@23 -- # accel_module=software 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=32 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=32 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=1 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val=No 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:51.694 11:21:09 -- accel/accel.sh@21 -- # val= 00:10:51.694 11:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # IFS=: 00:10:51.694 11:21:09 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@21 -- # val= 00:10:52.631 11:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # IFS=: 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@21 -- # val= 00:10:52.631 11:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # IFS=: 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@21 -- # val= 00:10:52.631 11:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.631 11:21:10 -- 
accel/accel.sh@20 -- # IFS=: 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@21 -- # val= 00:10:52.631 11:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # IFS=: 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@21 -- # val= 00:10:52.631 11:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # IFS=: 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@21 -- # val= 00:10:52.631 11:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # IFS=: 00:10:52.631 11:21:10 -- accel/accel.sh@20 -- # read -r var val 00:10:52.631 11:21:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:52.631 11:21:10 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:52.631 11:21:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.631 00:10:52.631 real 0m2.795s 00:10:52.631 user 0m2.376s 00:10:52.631 sys 0m0.235s 00:10:52.631 11:21:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:52.631 ************************************ 00:10:52.631 END TEST accel_dif_generate 00:10:52.631 ************************************ 00:10:52.631 11:21:10 -- common/autotest_common.sh@10 -- # set +x 00:10:52.891 11:21:10 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:52.891 11:21:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:52.891 11:21:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.891 11:21:10 -- common/autotest_common.sh@10 -- # set +x 00:10:52.891 ************************************ 00:10:52.891 START TEST accel_dif_generate_copy 00:10:52.891 ************************************ 00:10:52.891 11:21:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:10:52.891 11:21:10 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.891 11:21:10 -- accel/accel.sh@17 -- # local accel_module 00:10:52.891 11:21:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:52.891 11:21:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:52.891 11:21:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.891 11:21:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.891 11:21:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.891 11:21:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.891 11:21:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.891 11:21:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.891 11:21:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.891 11:21:10 -- accel/accel.sh@42 -- # jq -r . 00:10:52.891 [2024-11-26 11:21:10.918138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:52.891 [2024-11-26 11:21:10.918343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75346 ] 00:10:52.891 [2024-11-26 11:21:11.084080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.891 [2024-11-26 11:21:11.119162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.268 11:21:12 -- accel/accel.sh@18 -- # out=' 00:10:54.268 SPDK Configuration: 00:10:54.268 Core mask: 0x1 00:10:54.268 00:10:54.268 Accel Perf Configuration: 00:10:54.268 Workload Type: dif_generate_copy 00:10:54.268 Vector size: 4096 bytes 00:10:54.268 Transfer size: 4096 bytes 00:10:54.268 Vector count 1 00:10:54.268 Module: software 00:10:54.268 Queue depth: 32 00:10:54.268 Allocate depth: 32 00:10:54.269 # threads/core: 1 00:10:54.269 Run time: 1 seconds 00:10:54.269 Verify: No 00:10:54.269 00:10:54.269 Running for 1 seconds... 00:10:54.269 00:10:54.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:54.269 ------------------------------------------------------------------------------------ 00:10:54.269 0,0 79200/s 314 MiB/s 0 0 00:10:54.269 ==================================================================================== 00:10:54.269 Total 79200/s 309 MiB/s 0 0' 00:10:54.269 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.269 11:21:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:54.269 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.269 11:21:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:54.269 11:21:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.269 11:21:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.269 11:21:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.269 11:21:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.269 11:21:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.269 11:21:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.269 11:21:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.269 11:21:12 -- accel/accel.sh@42 -- # jq -r . 00:10:54.269 [2024-11-26 11:21:12.325511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:54.269 [2024-11-26 11:21:12.325689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75372 ] 00:10:54.269 [2024-11-26 11:21:12.491113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.529 [2024-11-26 11:21:12.527041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val=0x1 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val=software 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val=32 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val=32 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 
-- # val=1 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val=No 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:54.529 11:21:12 -- accel/accel.sh@21 -- # val= 00:10:54.529 11:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # IFS=: 00:10:54.529 11:21:12 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@21 -- # val= 00:10:55.468 11:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@21 -- # val= 00:10:55.468 11:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@21 -- # val= 00:10:55.468 11:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@21 -- # val= 00:10:55.468 11:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@21 -- # val= 00:10:55.468 11:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@21 -- # val= 00:10:55.468 11:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 11:21:13 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 11:21:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:55.468 11:21:13 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:55.468 11:21:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.468 00:10:55.468 real 0m2.815s 00:10:55.468 user 0m2.380s 00:10:55.468 sys 0m0.250s 00:10:55.468 ************************************ 00:10:55.468 END TEST accel_dif_generate_copy 00:10:55.468 ************************************ 00:10:55.468 11:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:55.468 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:55.727 11:21:13 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:55.727 11:21:13 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.727 11:21:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:55.727 11:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.727 11:21:13 -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.727 ************************************ 00:10:55.727 START TEST accel_comp 00:10:55.727 ************************************ 00:10:55.727 11:21:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.727 11:21:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:55.727 11:21:13 -- accel/accel.sh@17 -- # local accel_module 00:10:55.727 11:21:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.727 11:21:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.727 11:21:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.727 11:21:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.727 11:21:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.727 11:21:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.727 11:21:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.727 11:21:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.727 11:21:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.727 11:21:13 -- accel/accel.sh@42 -- # jq -r . 00:10:55.727 [2024-11-26 11:21:13.783270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:55.727 [2024-11-26 11:21:13.783438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75402 ] 00:10:55.727 [2024-11-26 11:21:13.940242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.987 [2024-11-26 11:21:13.977619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.924 11:21:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:56.924 00:10:56.924 SPDK Configuration: 00:10:56.924 Core mask: 0x1 00:10:56.924 00:10:56.924 Accel Perf Configuration: 00:10:56.924 Workload Type: compress 00:10:56.924 Transfer size: 4096 bytes 00:10:56.924 Vector count 1 00:10:56.924 Module: software 00:10:56.924 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:56.924 Queue depth: 32 00:10:56.924 Allocate depth: 32 00:10:56.924 # threads/core: 1 00:10:56.924 Run time: 1 seconds 00:10:56.924 Verify: No 00:10:56.924 00:10:56.924 Running for 1 seconds... 
00:10:56.924 00:10:56.924 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:56.924 ------------------------------------------------------------------------------------ 00:10:56.924 0,0 44640/s 186 MiB/s 0 0 00:10:56.924 ==================================================================================== 00:10:56.924 Total 44640/s 174 MiB/s 0 0' 00:10:56.924 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:56.924 11:21:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:56.924 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.184 11:21:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.184 11:21:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.184 11:21:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.184 11:21:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.184 11:21:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.184 11:21:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.184 11:21:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.184 11:21:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.184 11:21:15 -- accel/accel.sh@42 -- # jq -r . 00:10:57.184 [2024-11-26 11:21:15.194971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:57.184 [2024-11-26 11:21:15.195158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75417 ] 00:10:57.184 [2024-11-26 11:21:15.374051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.184 [2024-11-26 11:21:15.410192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val=0x1 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.444 11:21:15 -- accel/accel.sh@21 -- # val=compress 00:10:57.444 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.444 11:21:15 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:57.444 11:21:15 -- accel/accel.sh@20 -- # IFS=: 
00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val=software 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val=32 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val=32 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val=1 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val=No 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.445 11:21:15 -- accel/accel.sh@21 -- # val= 00:10:57.445 11:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.445 11:21:15 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@21 -- # val= 00:10:58.381 11:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # IFS=: 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@21 -- # val= 00:10:58.381 11:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # IFS=: 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@21 -- # val= 00:10:58.381 11:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # IFS=: 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@21 -- # val= 
00:10:58.381 11:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # IFS=: 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@21 -- # val= 00:10:58.381 11:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # IFS=: 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@21 -- # val= 00:10:58.381 11:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # IFS=: 00:10:58.381 11:21:16 -- accel/accel.sh@20 -- # read -r var val 00:10:58.381 11:21:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:58.381 11:21:16 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:58.381 ************************************ 00:10:58.381 END TEST accel_comp 00:10:58.381 ************************************ 00:10:58.381 11:21:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.381 00:10:58.381 real 0m2.835s 00:10:58.381 user 0m2.398s 00:10:58.381 sys 0m0.253s 00:10:58.381 11:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:58.381 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:10:58.640 11:21:16 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:58.640 11:21:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:58.640 11:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:58.640 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:10:58.640 ************************************ 00:10:58.640 START TEST accel_decomp 00:10:58.640 ************************************ 00:10:58.640 11:21:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:58.640 11:21:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:58.640 11:21:16 -- accel/accel.sh@17 -- # local accel_module 00:10:58.640 11:21:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:58.640 11:21:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:58.640 11:21:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.640 11:21:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.640 11:21:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.640 11:21:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.640 11:21:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.640 11:21:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.640 11:21:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.640 11:21:16 -- accel/accel.sh@42 -- # jq -r . 00:10:58.640 [2024-11-26 11:21:16.672404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:58.640 [2024-11-26 11:21:16.672571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 00:10:58.640 [2024-11-26 11:21:16.844160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.901 [2024-11-26 11:21:16.879506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.885 11:21:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:59.885 00:10:59.885 SPDK Configuration: 00:10:59.885 Core mask: 0x1 00:10:59.885 00:10:59.885 Accel Perf Configuration: 00:10:59.885 Workload Type: decompress 00:10:59.885 Transfer size: 4096 bytes 00:10:59.885 Vector count 1 00:10:59.885 Module: software 00:10:59.885 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:59.885 Queue depth: 32 00:10:59.885 Allocate depth: 32 00:10:59.885 # threads/core: 1 00:10:59.885 Run time: 1 seconds 00:10:59.885 Verify: Yes 00:10:59.885 00:10:59.885 Running for 1 seconds... 00:10:59.885 00:10:59.885 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:59.885 ------------------------------------------------------------------------------------ 00:10:59.885 0,0 57888/s 106 MiB/s 0 0 00:10:59.885 ==================================================================================== 00:10:59.885 Total 57888/s 226 MiB/s 0 0' 00:10:59.885 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.885 11:21:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:59.885 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.885 11:21:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:59.885 11:21:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.885 11:21:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.885 11:21:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.885 11:21:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.885 11:21:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.885 11:21:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.885 11:21:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.885 11:21:18 -- accel/accel.sh@42 -- # jq -r . 00:10:59.885 [2024-11-26 11:21:18.085792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
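The Total rows in these tables are consistent with transfers/s multiplied by the 4096-byte transfer size. As a quick sanity check using the figures above (44640 compress and 57888 decompress transfers per second), plain shell arithmetic reproduces the reported aggregate bandwidth:

  echo $(( 44640 * 4096 / 1048576 ))   # 174 -> matches the compress Total of 174 MiB/s
  echo $(( 57888 * 4096 / 1048576 ))   # 226 -> matches the decompress Total of 226 MiB/s

Both values come out exact under integer division, since 1 MiB = 1048576 bytes.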
00:10:59.885 [2024-11-26 11:21:18.086033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75473 ] 00:11:00.143 [2024-11-26 11:21:18.251140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.143 [2024-11-26 11:21:18.286308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.143 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.143 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.143 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.143 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.143 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.143 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.143 11:21:18 -- accel/accel.sh@21 -- # val=0x1 00:11:00.143 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.143 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val=decompress 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val=software 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@23 -- # accel_module=software 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val=32 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- 
accel/accel.sh@21 -- # val=32 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val=1 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val=Yes 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:00.144 11:21:18 -- accel/accel.sh@21 -- # val= 00:11:00.144 11:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # IFS=: 00:11:00.144 11:21:18 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@21 -- # val= 00:11:01.519 11:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # IFS=: 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@21 -- # val= 00:11:01.519 11:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # IFS=: 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@21 -- # val= 00:11:01.519 11:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # IFS=: 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@21 -- # val= 00:11:01.519 11:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # IFS=: 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@21 -- # val= 00:11:01.519 11:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # IFS=: 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@21 -- # val= 00:11:01.519 11:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # IFS=: 00:11:01.519 11:21:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.519 11:21:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:01.519 11:21:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:01.519 11:21:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:01.519 00:11:01.519 real 0m2.822s 00:11:01.519 user 0m2.384s 00:11:01.519 sys 0m0.254s 00:11:01.519 11:21:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:01.519 11:21:19 -- common/autotest_common.sh@10 -- # set +x 00:11:01.519 ************************************ 00:11:01.519 END TEST accel_decomp 00:11:01.519 ************************************ 00:11:01.519 11:21:19 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
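Under the xtrace noise, each of these suites is just accel_perf run against the bib test file; the harness additionally feeds a JSON config via -c /dev/fd/62. Judging from the configuration dumps, -y is what flips Verify from No to Yes, and -o 0 makes the run below use the bib file's full 111250-byte chunks instead of the 4096-byte transfers seen in the earlier runs. A rough standalone reproduction, assuming the same workspace layout as this job, would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0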
00:11:01.519 11:21:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:01.519 11:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:01.519 11:21:19 -- common/autotest_common.sh@10 -- # set +x 00:11:01.519 ************************************ 00:11:01.519 START TEST accel_decmop_full 00:11:01.519 ************************************ 00:11:01.519 11:21:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:01.519 11:21:19 -- accel/accel.sh@16 -- # local accel_opc 00:11:01.519 11:21:19 -- accel/accel.sh@17 -- # local accel_module 00:11:01.519 11:21:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:01.519 11:21:19 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.519 11:21:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:01.519 11:21:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.519 11:21:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.519 11:21:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.519 11:21:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.519 11:21:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.519 11:21:19 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.519 11:21:19 -- accel/accel.sh@42 -- # jq -r . 00:11:01.519 [2024-11-26 11:21:19.538548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:01.519 [2024-11-26 11:21:19.538707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75509 ] 00:11:01.519 [2024-11-26 11:21:19.704215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.519 [2024-11-26 11:21:19.745018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.896 11:21:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:02.896 00:11:02.896 SPDK Configuration: 00:11:02.896 Core mask: 0x1 00:11:02.896 00:11:02.896 Accel Perf Configuration: 00:11:02.896 Workload Type: decompress 00:11:02.896 Transfer size: 111250 bytes 00:11:02.896 Vector count 1 00:11:02.896 Module: software 00:11:02.896 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:02.896 Queue depth: 32 00:11:02.896 Allocate depth: 32 00:11:02.896 # threads/core: 1 00:11:02.896 Run time: 1 seconds 00:11:02.896 Verify: Yes 00:11:02.896 00:11:02.896 Running for 1 seconds... 
00:11:02.896 00:11:02.896 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:02.896 ------------------------------------------------------------------------------------ 00:11:02.896 0,0 4352/s 179 MiB/s 0 0 00:11:02.896 ==================================================================================== 00:11:02.896 Total 4352/s 461 MiB/s 0 0' 00:11:02.896 11:21:20 -- accel/accel.sh@20 -- # IFS=: 00:11:02.896 11:21:20 -- accel/accel.sh@20 -- # read -r var val 00:11:02.896 11:21:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:02.896 11:21:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:02.896 11:21:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.896 11:21:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.896 11:21:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.896 11:21:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.896 11:21:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.896 11:21:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.896 11:21:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.896 11:21:20 -- accel/accel.sh@42 -- # jq -r . 00:11:02.896 [2024-11-26 11:21:20.969580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:02.896 [2024-11-26 11:21:20.969773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75529 ] 00:11:03.156 [2024-11-26 11:21:21.135694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.156 [2024-11-26 11:21:21.172196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=0x1 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=decompress 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:03.156 11:21:21 -- accel/accel.sh@20 
-- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=software 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@23 -- # accel_module=software 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=32 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=32 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=1 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val=Yes 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:03.156 11:21:21 -- accel/accel.sh@21 -- # val= 00:11:03.156 11:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # IFS=: 00:11:03.156 11:21:21 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 11:21:22 -- accel/accel.sh@21 -- # val= 00:11:04.531 11:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 11:21:22 -- accel/accel.sh@21 -- # val= 00:11:04.531 11:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 11:21:22 -- accel/accel.sh@21 -- # val= 00:11:04.531 11:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 11:21:22 -- accel/accel.sh@21 -- # 
val= 00:11:04.531 11:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 11:21:22 -- accel/accel.sh@21 -- # val= 00:11:04.531 11:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 11:21:22 -- accel/accel.sh@21 -- # val= 00:11:04.531 11:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.531 11:21:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.531 ************************************ 00:11:04.531 END TEST accel_decmop_full 00:11:04.531 ************************************ 00:11:04.531 11:21:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:04.531 11:21:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:04.531 11:21:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:04.531 00:11:04.531 real 0m2.844s 00:11:04.531 user 0m2.416s 00:11:04.531 sys 0m0.242s 00:11:04.531 11:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.531 11:21:22 -- common/autotest_common.sh@10 -- # set +x 00:11:04.531 11:21:22 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:04.531 11:21:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:04.531 11:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.532 11:21:22 -- common/autotest_common.sh@10 -- # set +x 00:11:04.532 ************************************ 00:11:04.532 START TEST accel_decomp_mcore 00:11:04.532 ************************************ 00:11:04.532 11:21:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:04.532 11:21:22 -- accel/accel.sh@16 -- # local accel_opc 00:11:04.532 11:21:22 -- accel/accel.sh@17 -- # local accel_module 00:11:04.532 11:21:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:04.532 11:21:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:04.532 11:21:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:04.532 11:21:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:04.532 11:21:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.532 11:21:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.532 11:21:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:04.532 11:21:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:04.532 11:21:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:04.532 11:21:22 -- accel/accel.sh@42 -- # jq -r . 00:11:04.532 [2024-11-26 11:21:22.432496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
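The -m 0xf argument passed to accel_perf here is a core mask, echoed back as 'Core mask: 0xf' in the configuration dump: each set bit pins one reactor, which is why four 'Reactor started on core N' notices follow, one for each of cores 0 through 3. Expanding the mask makes this explicit:

  echo "obase=2; $(( 0xf ))" | bc   # 1111 -> one reactor each on cores 0, 1, 2 and 3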
00:11:04.532 [2024-11-26 11:21:22.432697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75564 ] 00:11:04.532 [2024-11-26 11:21:22.600061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.532 [2024-11-26 11:21:22.635173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.532 [2024-11-26 11:21:22.635337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.532 [2024-11-26 11:21:22.635362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.532 [2024-11-26 11:21:22.635469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.906 11:21:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:05.906 00:11:05.906 SPDK Configuration: 00:11:05.906 Core mask: 0xf 00:11:05.906 00:11:05.906 Accel Perf Configuration: 00:11:05.906 Workload Type: decompress 00:11:05.906 Transfer size: 4096 bytes 00:11:05.906 Vector count 1 00:11:05.906 Module: software 00:11:05.906 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:05.906 Queue depth: 32 00:11:05.906 Allocate depth: 32 00:11:05.906 # threads/core: 1 00:11:05.906 Run time: 1 seconds 00:11:05.906 Verify: Yes 00:11:05.906 00:11:05.906 Running for 1 seconds... 00:11:05.906 00:11:05.906 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:05.906 ------------------------------------------------------------------------------------ 00:11:05.906 0,0 53728/s 99 MiB/s 0 0 00:11:05.906 3,0 49696/s 91 MiB/s 0 0 00:11:05.906 2,0 51360/s 94 MiB/s 0 0 00:11:05.906 1,0 50272/s 92 MiB/s 0 0 00:11:05.906 ==================================================================================== 00:11:05.906 Total 205056/s 801 MiB/s 0 0' 00:11:05.906 11:21:23 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:05.906 11:21:23 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:05.906 11:21:23 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.906 11:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.906 11:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.906 11:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.906 11:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.906 11:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.906 11:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.906 11:21:23 -- accel/accel.sh@42 -- # jq -r . 00:11:05.906 [2024-11-26 11:21:23.854018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
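The per-core rows of a multi-core run add up to the Total row, so the aggregate figures can be checked the same way as the single-core ones:

  echo $(( 53728 + 49696 + 51360 + 50272 ))   # 205056, the Total transfers/s
  echo $(( 205056 * 4096 / 1048576 ))         # 801 -> the Total of 801 MiB/s

Against the single-core decompress run above (57888 transfers/s), four reactors deliver roughly a 3.5x speedup, somewhat short of linear scaling.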
00:11:05.906 [2024-11-26 11:21:23.854168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75583 ] 00:11:05.906 [2024-11-26 11:21:24.013577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.906 [2024-11-26 11:21:24.053256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.906 [2024-11-26 11:21:24.053367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.906 [2024-11-26 11:21:24.053483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.906 [2024-11-26 11:21:24.053417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val=0xf 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val=decompress 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.906 11:21:24 -- accel/accel.sh@21 -- # val=software 00:11:05.906 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.906 11:21:24 -- accel/accel.sh@23 -- # accel_module=software 00:11:05.906 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 
00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val=32 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val=32 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val=1 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val=Yes 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:05.907 11:21:24 -- accel/accel.sh@21 -- # val= 00:11:05.907 11:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # IFS=: 00:11:05.907 11:21:24 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- 
accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@21 -- # val= 00:11:07.284 11:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.284 11:21:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.284 11:21:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:07.284 11:21:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:07.284 ************************************ 00:11:07.284 END TEST accel_decomp_mcore 00:11:07.284 ************************************ 00:11:07.284 11:21:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:07.284 00:11:07.284 real 0m2.835s 00:11:07.284 user 0m4.491s 00:11:07.284 sys 0m0.153s 00:11:07.284 11:21:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.284 11:21:25 -- common/autotest_common.sh@10 -- # set +x 00:11:07.284 11:21:25 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.284 11:21:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:07.284 11:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.284 11:21:25 -- common/autotest_common.sh@10 -- # set +x 00:11:07.284 ************************************ 00:11:07.284 START TEST accel_decomp_full_mcore 00:11:07.284 ************************************ 00:11:07.284 11:21:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.284 11:21:25 -- accel/accel.sh@16 -- # local accel_opc 00:11:07.284 11:21:25 -- accel/accel.sh@17 -- # local accel_module 00:11:07.284 11:21:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.284 11:21:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:07.284 11:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.284 11:21:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.284 11:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.284 11:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.284 11:21:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.284 11:21:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.284 11:21:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.284 11:21:25 -- accel/accel.sh@42 -- # jq -r . 00:11:07.284 [2024-11-26 11:21:25.317811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:07.284 [2024-11-26 11:21:25.318037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75621 ] 00:11:07.284 [2024-11-26 11:21:25.483821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.542 [2024-11-26 11:21:25.520253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.542 [2024-11-26 11:21:25.520390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.542 [2024-11-26 11:21:25.520492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.542 [2024-11-26 11:21:25.520551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.918 11:21:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:08.919 00:11:08.919 SPDK Configuration: 00:11:08.919 Core mask: 0xf 00:11:08.919 00:11:08.919 Accel Perf Configuration: 00:11:08.919 Workload Type: decompress 00:11:08.919 Transfer size: 111250 bytes 00:11:08.919 Vector count 1 00:11:08.919 Module: software 00:11:08.919 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.919 Queue depth: 32 00:11:08.919 Allocate depth: 32 00:11:08.919 # threads/core: 1 00:11:08.919 Run time: 1 seconds 00:11:08.919 Verify: Yes 00:11:08.919 00:11:08.919 Running for 1 seconds... 00:11:08.919 00:11:08.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.919 ------------------------------------------------------------------------------------ 00:11:08.919 0,0 4288/s 177 MiB/s 0 0 00:11:08.919 3,0 4288/s 177 MiB/s 0 0 00:11:08.919 2,0 4256/s 175 MiB/s 0 0 00:11:08.919 1,0 4288/s 177 MiB/s 0 0 00:11:08.919 ==================================================================================== 00:11:08.919 Total 17120/s 1816 MiB/s 0 0' 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.919 11:21:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.919 11:21:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.919 11:21:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.919 11:21:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.919 11:21:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.919 11:21:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.919 11:21:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.919 11:21:26 -- accel/accel.sh@42 -- # jq -r . 00:11:08.919 [2024-11-26 11:21:26.747385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
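Compared with the 4096-byte multi-core run (205056 transfers/s for 801 MiB/s), the full 111250-byte transfers complete far fewer operations per second but more than double the aggregate bandwidth, since each operation moves about 27 times as much data:

  echo $(( 17120 * 111250 / 1048576 ))   # 1816 -> the Total of 1816 MiB/s
  echo $(( 111250 / 4096 ))              # 27   -> data per operation, relative to the 4096-byte runs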
00:11:08.919 [2024-11-26 11:21:26.747828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75645 ] 00:11:08.919 [2024-11-26 11:21:26.902486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.919 [2024-11-26 11:21:26.937488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.919 [2024-11-26 11:21:26.937602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.919 [2024-11-26 11:21:26.937671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.919 [2024-11-26 11:21:26.937722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=0xf 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=decompress 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=software 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 
00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=32 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=32 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=1 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val=Yes 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.919 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.919 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:08.919 11:21:26 -- accel/accel.sh@21 -- # val= 00:11:08.920 11:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.920 11:21:26 -- accel/accel.sh@20 -- # IFS=: 00:11:08.920 11:21:26 -- accel/accel.sh@20 -- # read -r var val 00:11:10.298 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.298 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- 
accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@21 -- # val= 00:11:10.299 11:21:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # IFS=: 00:11:10.299 11:21:28 -- accel/accel.sh@20 -- # read -r var val 00:11:10.299 11:21:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.299 11:21:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:10.299 11:21:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.299 00:11:10.299 real 0m2.840s 00:11:10.299 user 0m9.036s 00:11:10.299 sys 0m0.281s 00:11:10.299 11:21:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.299 ************************************ 00:11:10.299 END TEST accel_decomp_full_mcore 00:11:10.299 ************************************ 00:11:10.299 11:21:28 -- common/autotest_common.sh@10 -- # set +x 00:11:10.299 11:21:28 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.299 11:21:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:10.299 11:21:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.299 11:21:28 -- common/autotest_common.sh@10 -- # set +x 00:11:10.299 ************************************ 00:11:10.299 START TEST accel_decomp_mthread 00:11:10.299 ************************************ 00:11:10.299 11:21:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.299 11:21:28 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.299 11:21:28 -- accel/accel.sh@17 -- # local accel_module 00:11:10.299 11:21:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.299 11:21:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.299 11:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.299 11:21:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.299 11:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.299 11:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.299 11:21:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.299 11:21:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.299 11:21:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.299 11:21:28 -- accel/accel.sh@42 -- # jq -r . 00:11:10.299 [2024-11-26 11:21:28.205199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:10.299 [2024-11-26 11:21:28.205349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75678 ] 00:11:10.299 [2024-11-26 11:21:28.356962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.299 [2024-11-26 11:21:28.392419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.678 11:21:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:11.678 00:11:11.678 SPDK Configuration: 00:11:11.678 Core mask: 0x1 00:11:11.678 00:11:11.678 Accel Perf Configuration: 00:11:11.678 Workload Type: decompress 00:11:11.678 Transfer size: 4096 bytes 00:11:11.678 Vector count 1 00:11:11.678 Module: software 00:11:11.678 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:11.678 Queue depth: 32 00:11:11.678 Allocate depth: 32 00:11:11.678 # threads/core: 2 00:11:11.678 Run time: 1 seconds 00:11:11.678 Verify: Yes 00:11:11.678 00:11:11.678 Running for 1 seconds... 00:11:11.678 00:11:11.678 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:11.678 ------------------------------------------------------------------------------------ 00:11:11.678 0,1 33952/s 62 MiB/s 0 0 00:11:11.678 0,0 33856/s 62 MiB/s 0 0 00:11:11.678 ==================================================================================== 00:11:11.678 Total 67808/s 264 MiB/s 0 0' 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:11.678 11:21:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.678 11:21:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.678 11:21:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.678 11:21:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.678 11:21:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.678 11:21:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.678 11:21:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.678 11:21:29 -- accel/accel.sh@42 -- # jq -r . 00:11:11.678 [2024-11-26 11:21:29.602357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:11.678 [2024-11-26 11:21:29.602540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75704 ] 00:11:11.678 [2024-11-26 11:21:29.768861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.678 [2024-11-26 11:21:29.800866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=0x1 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=decompress 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=software 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@23 -- # accel_module=software 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=32 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- 
accel/accel.sh@21 -- # val=32 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=2 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val=Yes 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:11.678 11:21:29 -- accel/accel.sh@21 -- # val= 00:11:11.678 11:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # IFS=: 00:11:11.678 11:21:29 -- accel/accel.sh@20 -- # read -r var val 00:11:13.080 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.080 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.080 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.080 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.080 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.081 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.081 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.081 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.081 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.081 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.081 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.081 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.081 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.081 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.081 11:21:30 -- accel/accel.sh@21 -- # val= 00:11:13.081 11:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # IFS=: 00:11:13.081 11:21:30 -- accel/accel.sh@20 -- # read -r var val 00:11:13.081 ************************************ 00:11:13.081 END TEST accel_decomp_mthread 00:11:13.081 ************************************ 00:11:13.081 11:21:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:13.081 11:21:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:13.081 11:21:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:13.081 00:11:13.081 real 0m2.793s 00:11:13.081 user 0m2.367s 00:11:13.081 sys 0m0.243s 00:11:13.081 11:21:30 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:11:13.081 11:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.081 11:21:31 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:13.081 11:21:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:13.081 11:21:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.081 11:21:31 -- common/autotest_common.sh@10 -- # set +x 00:11:13.081 ************************************ 00:11:13.081 START TEST accel_deomp_full_mthread 00:11:13.081 ************************************ 00:11:13.081 11:21:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:13.081 11:21:31 -- accel/accel.sh@16 -- # local accel_opc 00:11:13.081 11:21:31 -- accel/accel.sh@17 -- # local accel_module 00:11:13.081 11:21:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:13.081 11:21:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:13.081 11:21:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.081 11:21:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.081 11:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.081 11:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.081 11:21:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.081 11:21:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.081 11:21:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.081 11:21:31 -- accel/accel.sh@42 -- # jq -r . 00:11:13.081 [2024-11-26 11:21:31.045143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:13.081 [2024-11-26 11:21:31.045292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75734 ] 00:11:13.081 [2024-11-26 11:21:31.200576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.081 [2024-11-26 11:21:31.233178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.460 11:21:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:14.460 00:11:14.460 SPDK Configuration: 00:11:14.460 Core mask: 0x1 00:11:14.460 00:11:14.460 Accel Perf Configuration: 00:11:14.460 Workload Type: decompress 00:11:14.460 Transfer size: 111250 bytes 00:11:14.460 Vector count 1 00:11:14.460 Module: software 00:11:14.460 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:14.460 Queue depth: 32 00:11:14.460 Allocate depth: 32 00:11:14.460 # threads/core: 2 00:11:14.460 Run time: 1 seconds 00:11:14.460 Verify: Yes 00:11:14.460 00:11:14.460 Running for 1 seconds... 
00:11:14.460 00:11:14.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:14.460 ------------------------------------------------------------------------------------ 00:11:14.460 0,1 2496/s 103 MiB/s 0 0 00:11:14.460 0,0 2496/s 103 MiB/s 0 0 00:11:14.460 ==================================================================================== 00:11:14.460 Total 4992/s 529 MiB/s 0 0' 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:14.460 11:21:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:14.460 11:21:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.460 11:21:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.460 11:21:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.460 11:21:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.460 11:21:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.460 11:21:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.460 11:21:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.460 11:21:32 -- accel/accel.sh@42 -- # jq -r . 00:11:14.460 [2024-11-26 11:21:32.449247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:14.460 [2024-11-26 11:21:32.449403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75749 ] 00:11:14.460 [2024-11-26 11:21:32.601483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.460 [2024-11-26 11:21:32.633280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val=0x1 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val=decompress 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val=software 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@23 -- # accel_module=software 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.460 11:21:32 -- accel/accel.sh@21 -- # val=32 00:11:14.460 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.460 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.461 11:21:32 -- accel/accel.sh@21 -- # val=32 00:11:14.461 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.461 11:21:32 -- accel/accel.sh@21 -- # val=2 00:11:14.461 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.461 11:21:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:14.461 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.461 11:21:32 -- accel/accel.sh@21 -- # val=Yes 00:11:14.461 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.461 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.461 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.461 11:21:32 -- accel/accel.sh@21 -- # val= 00:11:14.461 11:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.461 11:21:32 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # 
read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@21 -- # val= 00:11:15.840 11:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # IFS=: 00:11:15.840 11:21:33 -- accel/accel.sh@20 -- # read -r var val 00:11:15.840 11:21:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.840 11:21:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:15.840 11:21:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.840 00:11:15.840 real 0m2.810s 00:11:15.840 user 0m2.406s 00:11:15.840 sys 0m0.219s 00:11:15.840 ************************************ 00:11:15.840 END TEST accel_deomp_full_mthread 00:11:15.840 ************************************ 00:11:15.840 11:21:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:15.840 11:21:33 -- common/autotest_common.sh@10 -- # set +x 00:11:15.840 11:21:33 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:15.840 11:21:33 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:15.840 11:21:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:15.840 11:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.840 11:21:33 -- common/autotest_common.sh@10 -- # set +x 00:11:15.840 11:21:33 -- accel/accel.sh@129 -- # build_accel_config 00:11:15.840 11:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.840 11:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.840 11:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.840 11:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.840 11:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.840 11:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.840 11:21:33 -- accel/accel.sh@42 -- # jq -r . 00:11:15.840 ************************************ 00:11:15.840 START TEST accel_dif_functional_tests 00:11:15.840 ************************************ 00:11:15.840 11:21:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:15.840 [2024-11-26 11:21:33.933444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:15.840 [2024-11-26 11:21:33.933628] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75791 ] 00:11:16.100 [2024-11-26 11:21:34.098501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.100 [2024-11-26 11:21:34.131372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.100 [2024-11-26 11:21:34.131448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.100 [2024-11-26 11:21:34.131528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.100 00:11:16.100 00:11:16.100 CUnit - A unit testing framework for C - Version 2.1-3 00:11:16.100 http://cunit.sourceforge.net/ 00:11:16.100 00:11:16.100 00:11:16.100 Suite: accel_dif 00:11:16.100 Test: verify: DIF generated, GUARD check ...passed 00:11:16.100 Test: verify: DIF generated, APPTAG check ...passed 00:11:16.100 Test: verify: DIF generated, REFTAG check ...passed 00:11:16.100 Test: verify: DIF not generated, GUARD check ...passed 00:11:16.100 Test: verify: DIF not generated, APPTAG check ...[2024-11-26 11:21:34.182206] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:16.100 [2024-11-26 11:21:34.182458] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:16.100 [2024-11-26 11:21:34.182536] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:16.100 passed 00:11:16.100 Test: verify: DIF not generated, REFTAG check ...[2024-11-26 11:21:34.182669] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:16.100 [2024-11-26 11:21:34.182719] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:16.100 passed 00:11:16.100 Test: verify: APPTAG correct, APPTAG check ...[2024-11-26 11:21:34.182774] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:16.100 passed 00:11:16.100 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:11:16.100 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:16.100 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-11-26 11:21:34.183110] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:16.100 passed 00:11:16.100 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:16.100 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-26 11:21:34.183454] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:16.100 passed 00:11:16.100 Test: generate copy: DIF generated, GUARD check ...passed 00:11:16.100 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:16.100 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:16.100 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:16.100 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:16.100 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:16.100 Test: generate copy: iovecs-len validate ...[2024-11-26 11:21:34.184411] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:16.100 passed 00:11:16.100 Test: generate copy: buffer alignment validate ...passed 00:11:16.100 00:11:16.100 Run Summary: Type Total Ran Passed Failed Inactive 00:11:16.100 suites 1 1 n/a 0 0 00:11:16.100 tests 20 20 20 0 0 00:11:16.100 asserts 204 204 204 0 n/a 00:11:16.100 00:11:16.100 Elapsed time = 0.007 seconds 00:11:16.360 00:11:16.360 real 0m0.478s 00:11:16.360 user 0m0.480s 00:11:16.360 sys 0m0.147s 00:11:16.360 11:21:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.360 ************************************ 00:11:16.360 END TEST accel_dif_functional_tests 00:11:16.360 ************************************ 00:11:16.360 11:21:34 -- common/autotest_common.sh@10 -- # set +x 00:11:16.360 00:11:16.360 real 1m0.415s 00:11:16.360 user 1m3.625s 00:11:16.360 sys 0m6.707s 00:11:16.360 11:21:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.360 ************************************ 00:11:16.360 END TEST accel 00:11:16.360 ************************************ 00:11:16.360 11:21:34 -- common/autotest_common.sh@10 -- # set +x 00:11:16.360 11:21:34 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:16.360 11:21:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.360 11:21:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.360 11:21:34 -- common/autotest_common.sh@10 -- # set +x 00:11:16.360 ************************************ 00:11:16.360 START TEST accel_rpc 00:11:16.360 ************************************ 00:11:16.360 11:21:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:16.360 * Looking for test storage... 00:11:16.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:16.360 11:21:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:16.360 11:21:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:16.360 11:21:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:16.620 11:21:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:16.620 11:21:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:16.620 11:21:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:16.620 11:21:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:16.620 11:21:34 -- scripts/common.sh@335 -- # IFS=.-: 00:11:16.620 11:21:34 -- scripts/common.sh@335 -- # read -ra ver1 00:11:16.620 11:21:34 -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.620 11:21:34 -- scripts/common.sh@336 -- # read -ra ver2 00:11:16.620 11:21:34 -- scripts/common.sh@337 -- # local 'op=<' 00:11:16.620 11:21:34 -- scripts/common.sh@339 -- # ver1_l=2 00:11:16.620 11:21:34 -- scripts/common.sh@340 -- # ver2_l=1 00:11:16.620 11:21:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:16.620 11:21:34 -- scripts/common.sh@343 -- # case "$op" in 00:11:16.620 11:21:34 -- scripts/common.sh@344 -- # : 1 00:11:16.620 11:21:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:16.620 11:21:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.620 11:21:34 -- scripts/common.sh@364 -- # decimal 1 00:11:16.620 11:21:34 -- scripts/common.sh@352 -- # local d=1 00:11:16.620 11:21:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.620 11:21:34 -- scripts/common.sh@354 -- # echo 1 00:11:16.620 11:21:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:16.620 11:21:34 -- scripts/common.sh@365 -- # decimal 2 00:11:16.620 11:21:34 -- scripts/common.sh@352 -- # local d=2 00:11:16.620 11:21:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.620 11:21:34 -- scripts/common.sh@354 -- # echo 2 00:11:16.620 11:21:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:16.620 11:21:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:16.620 11:21:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:16.620 11:21:34 -- scripts/common.sh@367 -- # return 0 00:11:16.620 11:21:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.620 11:21:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.620 --rc genhtml_branch_coverage=1 00:11:16.620 --rc genhtml_function_coverage=1 00:11:16.620 --rc genhtml_legend=1 00:11:16.620 --rc geninfo_all_blocks=1 00:11:16.620 --rc geninfo_unexecuted_blocks=1 00:11:16.620 00:11:16.620 ' 00:11:16.620 11:21:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.620 --rc genhtml_branch_coverage=1 00:11:16.620 --rc genhtml_function_coverage=1 00:11:16.620 --rc genhtml_legend=1 00:11:16.620 --rc geninfo_all_blocks=1 00:11:16.620 --rc geninfo_unexecuted_blocks=1 00:11:16.620 00:11:16.620 ' 00:11:16.620 11:21:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.620 --rc genhtml_branch_coverage=1 00:11:16.620 --rc genhtml_function_coverage=1 00:11:16.620 --rc genhtml_legend=1 00:11:16.620 --rc geninfo_all_blocks=1 00:11:16.620 --rc geninfo_unexecuted_blocks=1 00:11:16.620 00:11:16.620 ' 00:11:16.620 11:21:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.620 --rc genhtml_branch_coverage=1 00:11:16.620 --rc genhtml_function_coverage=1 00:11:16.620 --rc genhtml_legend=1 00:11:16.620 --rc geninfo_all_blocks=1 00:11:16.620 --rc geninfo_unexecuted_blocks=1 00:11:16.620 00:11:16.620 ' 00:11:16.620 11:21:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:16.620 11:21:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=75863 00:11:16.620 11:21:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 75863 00:11:16.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.620 11:21:34 -- common/autotest_common.sh@829 -- # '[' -z 75863 ']' 00:11:16.620 11:21:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.620 11:21:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.620 11:21:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:16.620 11:21:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:16.620 11:21:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.620 11:21:34 -- common/autotest_common.sh@10 -- # set +x 00:11:16.620 [2024-11-26 11:21:34.692732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:16.620 [2024-11-26 11:21:34.692945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75863 ] 00:11:16.879 [2024-11-26 11:21:34.859590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.879 [2024-11-26 11:21:34.894141] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.879 [2024-11-26 11:21:34.894554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.447 11:21:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.447 11:21:35 -- common/autotest_common.sh@862 -- # return 0 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:17.447 11:21:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.447 11:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.447 11:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:17.447 ************************************ 00:11:17.447 START TEST accel_assign_opcode 00:11:17.447 ************************************ 00:11:17.447 11:21:35 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:17.447 11:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.447 11:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:17.447 [2024-11-26 11:21:35.627388] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:17.447 11:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:17.447 11:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.447 11:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:17.447 [2024-11-26 11:21:35.635374] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:17.447 11:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.447 11:21:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:17.447 11:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.447 11:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:17.707 11:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.707 11:21:35 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:17.707 11:21:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.707 11:21:35 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:17.707 11:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:17.707 11:21:35 -- accel/accel_rpc.sh@42 -- # grep software 00:11:17.707 11:21:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.707 software 00:11:17.707 00:11:17.707 
real 0m0.170s 00:11:17.707 user 0m0.016s 00:11:17.707 sys 0m0.011s 00:11:17.707 11:21:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.707 11:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:17.707 ************************************ 00:11:17.707 END TEST accel_assign_opcode 00:11:17.707 ************************************ 00:11:17.707 11:21:35 -- accel/accel_rpc.sh@55 -- # killprocess 75863 00:11:17.707 11:21:35 -- common/autotest_common.sh@936 -- # '[' -z 75863 ']' 00:11:17.707 11:21:35 -- common/autotest_common.sh@940 -- # kill -0 75863 00:11:17.707 11:21:35 -- common/autotest_common.sh@941 -- # uname 00:11:17.707 11:21:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:17.707 11:21:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75863 00:11:17.707 11:21:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:17.707 11:21:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:17.707 killing process with pid 75863 00:11:17.707 11:21:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75863' 00:11:17.707 11:21:35 -- common/autotest_common.sh@955 -- # kill 75863 00:11:17.707 11:21:35 -- common/autotest_common.sh@960 -- # wait 75863 00:11:17.966 ************************************ 00:11:17.966 END TEST accel_rpc 00:11:17.966 ************************************ 00:11:17.966 00:11:17.966 real 0m1.702s 00:11:17.966 user 0m1.759s 00:11:17.966 sys 0m0.424s 00:11:17.966 11:21:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.966 11:21:36 -- common/autotest_common.sh@10 -- # set +x 00:11:17.966 11:21:36 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:17.966 11:21:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.966 11:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.966 11:21:36 -- common/autotest_common.sh@10 -- # set +x 00:11:18.226 ************************************ 00:11:18.226 START TEST app_cmdline 00:11:18.226 ************************************ 00:11:18.226 11:21:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:18.226 * Looking for test storage... 
00:11:18.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:18.226 11:21:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:18.226 11:21:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:18.226 11:21:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:18.226 11:21:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:18.226 11:21:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:18.226 11:21:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:18.226 11:21:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:18.226 11:21:36 -- scripts/common.sh@335 -- # IFS=.-: 00:11:18.226 11:21:36 -- scripts/common.sh@335 -- # read -ra ver1 00:11:18.226 11:21:36 -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.226 11:21:36 -- scripts/common.sh@336 -- # read -ra ver2 00:11:18.226 11:21:36 -- scripts/common.sh@337 -- # local 'op=<' 00:11:18.226 11:21:36 -- scripts/common.sh@339 -- # ver1_l=2 00:11:18.226 11:21:36 -- scripts/common.sh@340 -- # ver2_l=1 00:11:18.226 11:21:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:18.226 11:21:36 -- scripts/common.sh@343 -- # case "$op" in 00:11:18.226 11:21:36 -- scripts/common.sh@344 -- # : 1 00:11:18.226 11:21:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:18.226 11:21:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.226 11:21:36 -- scripts/common.sh@364 -- # decimal 1 00:11:18.226 11:21:36 -- scripts/common.sh@352 -- # local d=1 00:11:18.226 11:21:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.226 11:21:36 -- scripts/common.sh@354 -- # echo 1 00:11:18.226 11:21:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:18.226 11:21:36 -- scripts/common.sh@365 -- # decimal 2 00:11:18.226 11:21:36 -- scripts/common.sh@352 -- # local d=2 00:11:18.227 11:21:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.227 11:21:36 -- scripts/common.sh@354 -- # echo 2 00:11:18.227 11:21:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:18.227 11:21:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:18.227 11:21:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:18.227 11:21:36 -- scripts/common.sh@367 -- # return 0 00:11:18.227 11:21:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.227 11:21:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:18.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.227 --rc genhtml_branch_coverage=1 00:11:18.227 --rc genhtml_function_coverage=1 00:11:18.227 --rc genhtml_legend=1 00:11:18.227 --rc geninfo_all_blocks=1 00:11:18.227 --rc geninfo_unexecuted_blocks=1 00:11:18.227 00:11:18.227 ' 00:11:18.227 11:21:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:18.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.227 --rc genhtml_branch_coverage=1 00:11:18.227 --rc genhtml_function_coverage=1 00:11:18.227 --rc genhtml_legend=1 00:11:18.227 --rc geninfo_all_blocks=1 00:11:18.227 --rc geninfo_unexecuted_blocks=1 00:11:18.227 00:11:18.227 ' 00:11:18.227 11:21:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:18.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.227 --rc genhtml_branch_coverage=1 00:11:18.227 --rc genhtml_function_coverage=1 00:11:18.227 --rc genhtml_legend=1 00:11:18.227 --rc geninfo_all_blocks=1 00:11:18.227 --rc geninfo_unexecuted_blocks=1 00:11:18.227 00:11:18.227 ' 00:11:18.227 11:21:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:18.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.227 --rc genhtml_branch_coverage=1 00:11:18.227 --rc genhtml_function_coverage=1 00:11:18.227 --rc genhtml_legend=1 00:11:18.227 --rc geninfo_all_blocks=1 00:11:18.227 --rc geninfo_unexecuted_blocks=1 00:11:18.227 00:11:18.227 ' 00:11:18.227 11:21:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:18.227 11:21:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=75960 00:11:18.227 11:21:36 -- app/cmdline.sh@18 -- # waitforlisten 75960 00:11:18.227 11:21:36 -- common/autotest_common.sh@829 -- # '[' -z 75960 ']' 00:11:18.227 11:21:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.227 11:21:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.227 11:21:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.227 11:21:36 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:18.227 11:21:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.227 11:21:36 -- common/autotest_common.sh@10 -- # set +x 00:11:18.227 [2024-11-26 11:21:36.422733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:18.227 [2024-11-26 11:21:36.423536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75960 ] 00:11:18.486 [2024-11-26 11:21:36.593865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.486 [2024-11-26 11:21:36.630878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:18.486 [2024-11-26 11:21:36.631159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.436 11:21:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.436 11:21:37 -- common/autotest_common.sh@862 -- # return 0 00:11:19.436 11:21:37 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:19.436 { 00:11:19.437 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:11:19.437 "fields": { 00:11:19.437 "major": 24, 00:11:19.437 "minor": 1, 00:11:19.437 "patch": 1, 00:11:19.437 "suffix": "-pre", 00:11:19.437 "commit": "c13c99a5e" 00:11:19.437 } 00:11:19.437 } 00:11:19.437 11:21:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:19.437 11:21:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:19.437 11:21:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:19.437 11:21:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:19.437 11:21:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:19.437 11:21:37 -- app/cmdline.sh@26 -- # sort 00:11:19.437 11:21:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:19.437 11:21:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.437 11:21:37 -- common/autotest_common.sh@10 -- # set +x 00:11:19.437 11:21:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.437 11:21:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:19.437 11:21:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:19.437 11:21:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:19.437 11:21:37 -- common/autotest_common.sh@650 -- # local es=0 00:11:19.437 11:21:37 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:19.437 11:21:37 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.437 11:21:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:19.437 11:21:37 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.437 11:21:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:19.437 11:21:37 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.437 11:21:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:19.437 11:21:37 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.437 11:21:37 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:19.437 11:21:37 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:19.698 request: 00:11:19.698 { 00:11:19.698 "method": "env_dpdk_get_mem_stats", 00:11:19.698 "req_id": 1 00:11:19.698 } 00:11:19.698 Got JSON-RPC error response 00:11:19.698 response: 00:11:19.698 { 00:11:19.698 "code": -32601, 00:11:19.698 "message": "Method not found" 00:11:19.698 } 00:11:19.698 11:21:37 -- common/autotest_common.sh@653 -- # es=1 00:11:19.698 11:21:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:19.698 11:21:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:19.698 11:21:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:19.698 11:21:37 -- app/cmdline.sh@1 -- # killprocess 75960 00:11:19.698 11:21:37 -- common/autotest_common.sh@936 -- # '[' -z 75960 ']' 00:11:19.698 11:21:37 -- common/autotest_common.sh@940 -- # kill -0 75960 00:11:19.698 11:21:37 -- common/autotest_common.sh@941 -- # uname 00:11:19.698 11:21:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:19.698 11:21:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75960 00:11:19.698 11:21:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:19.698 11:21:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:19.698 killing process with pid 75960 00:11:19.698 11:21:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75960' 00:11:19.698 11:21:37 -- common/autotest_common.sh@955 -- # kill 75960 00:11:19.698 11:21:37 -- common/autotest_common.sh@960 -- # wait 75960 00:11:19.956 ************************************ 00:11:19.956 END TEST app_cmdline 00:11:19.956 ************************************ 00:11:19.956 00:11:19.956 real 0m1.940s 00:11:19.956 user 0m2.382s 00:11:19.956 sys 0m0.464s 00:11:19.956 11:21:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:19.956 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:11:19.956 11:21:38 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:19.956 11:21:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.956 11:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.956 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:11:20.215 
************************************ 00:11:20.215 START TEST version 00:11:20.215 ************************************ 00:11:20.215 11:21:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:20.215 * Looking for test storage... 00:11:20.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:20.215 11:21:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:20.215 11:21:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:20.215 11:21:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:20.215 11:21:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:20.215 11:21:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:20.215 11:21:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:20.215 11:21:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:20.215 11:21:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:20.215 11:21:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:20.215 11:21:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.215 11:21:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:20.215 11:21:38 -- scripts/common.sh@337 -- # local 'op=<' 00:11:20.215 11:21:38 -- scripts/common.sh@339 -- # ver1_l=2 00:11:20.215 11:21:38 -- scripts/common.sh@340 -- # ver2_l=1 00:11:20.215 11:21:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:20.215 11:21:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:20.215 11:21:38 -- scripts/common.sh@344 -- # : 1 00:11:20.215 11:21:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:20.215 11:21:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.215 11:21:38 -- scripts/common.sh@364 -- # decimal 1 00:11:20.215 11:21:38 -- scripts/common.sh@352 -- # local d=1 00:11:20.215 11:21:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.215 11:21:38 -- scripts/common.sh@354 -- # echo 1 00:11:20.215 11:21:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:20.215 11:21:38 -- scripts/common.sh@365 -- # decimal 2 00:11:20.215 11:21:38 -- scripts/common.sh@352 -- # local d=2 00:11:20.215 11:21:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.215 11:21:38 -- scripts/common.sh@354 -- # echo 2 00:11:20.215 11:21:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:20.215 11:21:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:20.215 11:21:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:20.215 11:21:38 -- scripts/common.sh@367 -- # return 0 00:11:20.215 11:21:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.215 11:21:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:20.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.215 --rc genhtml_branch_coverage=1 00:11:20.215 --rc genhtml_function_coverage=1 00:11:20.215 --rc genhtml_legend=1 00:11:20.215 --rc geninfo_all_blocks=1 00:11:20.215 --rc geninfo_unexecuted_blocks=1 00:11:20.215 00:11:20.215 ' 00:11:20.215 11:21:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:20.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.216 --rc genhtml_branch_coverage=1 00:11:20.216 --rc genhtml_function_coverage=1 00:11:20.216 --rc genhtml_legend=1 00:11:20.216 --rc geninfo_all_blocks=1 00:11:20.216 --rc geninfo_unexecuted_blocks=1 00:11:20.216 00:11:20.216 ' 00:11:20.216 11:21:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:20.216 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:20.216 --rc genhtml_branch_coverage=1 00:11:20.216 --rc genhtml_function_coverage=1 00:11:20.216 --rc genhtml_legend=1 00:11:20.216 --rc geninfo_all_blocks=1 00:11:20.216 --rc geninfo_unexecuted_blocks=1 00:11:20.216 00:11:20.216 ' 00:11:20.216 11:21:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:20.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.216 --rc genhtml_branch_coverage=1 00:11:20.216 --rc genhtml_function_coverage=1 00:11:20.216 --rc genhtml_legend=1 00:11:20.216 --rc geninfo_all_blocks=1 00:11:20.216 --rc geninfo_unexecuted_blocks=1 00:11:20.216 00:11:20.216 ' 00:11:20.216 11:21:38 -- app/version.sh@17 -- # get_header_version major 00:11:20.216 11:21:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:20.216 11:21:38 -- app/version.sh@14 -- # cut -f2 00:11:20.216 11:21:38 -- app/version.sh@14 -- # tr -d '"' 00:11:20.216 11:21:38 -- app/version.sh@17 -- # major=24 00:11:20.216 11:21:38 -- app/version.sh@18 -- # get_header_version minor 00:11:20.216 11:21:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:20.216 11:21:38 -- app/version.sh@14 -- # cut -f2 00:11:20.216 11:21:38 -- app/version.sh@14 -- # tr -d '"' 00:11:20.216 11:21:38 -- app/version.sh@18 -- # minor=1 00:11:20.216 11:21:38 -- app/version.sh@19 -- # get_header_version patch 00:11:20.216 11:21:38 -- app/version.sh@14 -- # cut -f2 00:11:20.216 11:21:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:20.216 11:21:38 -- app/version.sh@14 -- # tr -d '"' 00:11:20.216 11:21:38 -- app/version.sh@19 -- # patch=1 00:11:20.216 11:21:38 -- app/version.sh@20 -- # get_header_version suffix 00:11:20.216 11:21:38 -- app/version.sh@14 -- # cut -f2 00:11:20.216 11:21:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:20.216 11:21:38 -- app/version.sh@14 -- # tr -d '"' 00:11:20.216 11:21:38 -- app/version.sh@20 -- # suffix=-pre 00:11:20.216 11:21:38 -- app/version.sh@22 -- # version=24.1 00:11:20.216 11:21:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:20.216 11:21:38 -- app/version.sh@25 -- # version=24.1.1 00:11:20.216 11:21:38 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:20.216 11:21:38 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:20.216 11:21:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:20.216 11:21:38 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:20.216 11:21:38 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:20.216 00:11:20.216 real 0m0.241s 00:11:20.216 user 0m0.153s 00:11:20.216 sys 0m0.129s 00:11:20.216 11:21:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:20.216 ************************************ 00:11:20.216 END TEST version 00:11:20.216 ************************************ 00:11:20.216 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:11:20.475 11:21:38 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:11:20.475 11:21:38 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
00:11:20.475 11:21:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:20.475 11:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.475 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:11:20.475 ************************************ 00:11:20.475 START TEST blockdev_general 00:11:20.475 ************************************ 00:11:20.475 11:21:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:20.475 * Looking for test storage... 00:11:20.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:20.475 11:21:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:20.475 11:21:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:20.475 11:21:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:20.475 11:21:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:20.475 11:21:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:20.475 11:21:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:20.475 11:21:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:20.475 11:21:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:20.475 11:21:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:20.475 11:21:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.475 11:21:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:20.475 11:21:38 -- scripts/common.sh@337 -- # local 'op=<' 00:11:20.475 11:21:38 -- scripts/common.sh@339 -- # ver1_l=2 00:11:20.475 11:21:38 -- scripts/common.sh@340 -- # ver2_l=1 00:11:20.475 11:21:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:20.475 11:21:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:20.475 11:21:38 -- scripts/common.sh@344 -- # : 1 00:11:20.475 11:21:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:20.475 11:21:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.475 11:21:38 -- scripts/common.sh@364 -- # decimal 1 00:11:20.475 11:21:38 -- scripts/common.sh@352 -- # local d=1 00:11:20.475 11:21:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.475 11:21:38 -- scripts/common.sh@354 -- # echo 1 00:11:20.475 11:21:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:20.475 11:21:38 -- scripts/common.sh@365 -- # decimal 2 00:11:20.475 11:21:38 -- scripts/common.sh@352 -- # local d=2 00:11:20.475 11:21:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.475 11:21:38 -- scripts/common.sh@354 -- # echo 2 00:11:20.475 11:21:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:20.475 11:21:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:20.475 11:21:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:20.475 11:21:38 -- scripts/common.sh@367 -- # return 0 00:11:20.475 11:21:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.475 11:21:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.475 --rc genhtml_branch_coverage=1 00:11:20.475 --rc genhtml_function_coverage=1 00:11:20.475 --rc genhtml_legend=1 00:11:20.475 --rc geninfo_all_blocks=1 00:11:20.475 --rc geninfo_unexecuted_blocks=1 00:11:20.475 00:11:20.475 ' 00:11:20.475 11:21:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.475 --rc genhtml_branch_coverage=1 00:11:20.475 --rc genhtml_function_coverage=1 00:11:20.475 --rc genhtml_legend=1 00:11:20.475 --rc geninfo_all_blocks=1 00:11:20.475 --rc geninfo_unexecuted_blocks=1 00:11:20.475 00:11:20.475 ' 00:11:20.475 11:21:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.475 --rc genhtml_branch_coverage=1 00:11:20.475 --rc genhtml_function_coverage=1 00:11:20.475 --rc genhtml_legend=1 00:11:20.475 --rc geninfo_all_blocks=1 00:11:20.475 --rc geninfo_unexecuted_blocks=1 00:11:20.475 00:11:20.475 ' 00:11:20.475 11:21:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.475 --rc genhtml_branch_coverage=1 00:11:20.475 --rc genhtml_function_coverage=1 00:11:20.475 --rc genhtml_legend=1 00:11:20.475 --rc geninfo_all_blocks=1 00:11:20.475 --rc geninfo_unexecuted_blocks=1 00:11:20.475 00:11:20.475 ' 00:11:20.475 11:21:38 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:20.475 11:21:38 -- bdev/nbd_common.sh@6 -- # set -e 00:11:20.475 11:21:38 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:20.475 11:21:38 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:20.475 11:21:38 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:20.475 11:21:38 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:20.475 11:21:38 -- bdev/blockdev.sh@18 -- # : 00:11:20.475 11:21:38 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:20.475 11:21:38 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:20.475 11:21:38 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:20.475 11:21:38 -- bdev/blockdev.sh@672 -- # uname -s 00:11:20.475 11:21:38 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:20.475 11:21:38 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:20.475 11:21:38 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:20.475 11:21:38 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:20.475 11:21:38 -- bdev/blockdev.sh@682 -- # dek= 00:11:20.476 11:21:38 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:20.476 11:21:38 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:20.476 11:21:38 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:20.476 11:21:38 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:20.476 11:21:38 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:20.476 11:21:38 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:20.476 11:21:38 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=76117 00:11:20.476 11:21:38 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:20.476 11:21:38 -- bdev/blockdev.sh@47 -- # waitforlisten 76117 00:11:20.476 11:21:38 -- common/autotest_common.sh@829 -- # '[' -z 76117 ']' 00:11:20.476 11:21:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.476 11:21:38 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:20.476 11:21:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.476 11:21:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.476 11:21:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.476 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:11:20.735 [2024-11-26 11:21:38.720279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:20.735 [2024-11-26 11:21:38.720951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76117 ] 00:11:20.735 [2024-11-26 11:21:38.880517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.735 [2024-11-26 11:21:38.912050] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:20.735 [2024-11-26 11:21:38.912273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.672 11:21:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.672 11:21:39 -- common/autotest_common.sh@862 -- # return 0 00:11:21.672 11:21:39 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:21.672 11:21:39 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:21.672 11:21:39 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:21.672 11:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.672 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:11:21.672 [2024-11-26 11:21:39.749847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:21.672 [2024-11-26 11:21:39.749977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:21.672 00:11:21.672 [2024-11-26 11:21:39.757813] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:21.672 [2024-11-26 11:21:39.757920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:21.672 00:11:21.672 Malloc0 00:11:21.672 Malloc1 00:11:21.672 Malloc2 00:11:21.672 Malloc3 00:11:21.672 Malloc4 00:11:21.672 
Malloc5 00:11:21.672 Malloc6 00:11:21.672 Malloc7 00:11:21.672 Malloc8 00:11:21.672 Malloc9 00:11:21.672 [2024-11-26 11:21:39.893372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:21.672 [2024-11-26 11:21:39.893466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.672 [2024-11-26 11:21:39.893497] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:11:21.672 [2024-11-26 11:21:39.893512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.672 [2024-11-26 11:21:39.895870] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.672 [2024-11-26 11:21:39.895971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:21.672 TestPT 00:11:21.931 11:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.931 11:21:39 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:21.931 5000+0 records in 00:11:21.931 5000+0 records out 00:11:21.931 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0212682 s, 481 MB/s 00:11:21.931 11:21:39 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:21.931 11:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 AIO0 00:11:21.931 11:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.931 11:21:39 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:21.931 11:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 11:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.931 11:21:39 -- bdev/blockdev.sh@738 -- # cat 00:11:21.931 11:21:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:21.931 11:21:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 11:21:40 -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 11:21:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.931 11:21:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:21.931 11:21:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 11:21:40 -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 11:21:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.931 11:21:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:21.931 11:21:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 11:21:40 -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 11:21:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.931 11:21:40 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:21.931 11:21:40 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:21.931 11:21:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 11:21:40 -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 11:21:40 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:22.191 11:21:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.191 11:21:40 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:22.191 11:21:40 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:22.192 11:21:40 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "698f39e3-8c9d-489c-a33d-76556f218d11"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "698f39e3-8c9d-489c-a33d-76556f218d11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f65310d7-4c7e-5773-8c4c-818569d85775"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f65310d7-4c7e-5773-8c4c-818569d85775",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ee03f5e8-7dd8-51a5-abf7-9c199352e96b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ee03f5e8-7dd8-51a5-abf7-9c199352e96b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b0bb2fdd-842b-5357-b5e7-ebfada01a24c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b0bb2fdd-842b-5357-b5e7-ebfada01a24c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ba8f77a7-b711-59fd-8d1e-0247d5a776d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ba8f77a7-b711-59fd-8d1e-0247d5a776d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c25b7ed7-1f61-5e07-8b18-b262574c841e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c25b7ed7-1f61-5e07-8b18-b262574c841e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "81e2bc52-5c0e-5dc4-b818-4def4147a44d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81e2bc52-5c0e-5dc4-b818-4def4147a44d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f3bb61e1-cc9c-53ce-a16c-0c508a9123e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f3bb61e1-cc9c-53ce-a16c-0c508a9123e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "bc7a501b-7e02-520e-a1be-c3f78c08b116"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc7a501b-7e02-520e-a1be-c3f78c08b116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ccfadf2d-05f0-51e6-8101-85edaf5d6539"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ccfadf2d-05f0-51e6-8101-85edaf5d6539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "cb93f81f-b78e-57e7-86a6-06ac30ed1124"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cb93f81f-b78e-57e7-86a6-06ac30ed1124",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "69458d7f-179c-5546-b57d-84c3f7654c90"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "69458d7f-179c-5546-b57d-84c3f7654c90",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "bd80c3f6-c441-40d8-bd1d-553fb6950290"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bd80c3f6-c441-40d8-bd1d-553fb6950290",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bd80c3f6-c441-40d8-bd1d-553fb6950290",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "fab4c046-871b-4594-887d-68fdabfdc4c8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b55fac00-afc8-45c9-832a-e6fbcbc1ceac",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "d01fb062-e4c2-455f-9287-2928c1056004"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d01fb062-e4c2-455f-9287-2928c1056004",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d01fb062-e4c2-455f-9287-2928c1056004",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "75561354-896f-4cc6-8fd7-61d4c81ca491",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "caddba2b-094c-4f18-bb67-eb761d400309",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b3b18259-0a4e-4001-8f63-ea5f346cea0b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b3b18259-0a4e-4001-8f63-ea5f346cea0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b3b18259-0a4e-4001-8f63-ea5f346cea0b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "70c1a753-efac-4c8f-afa4-1be39a87fc17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3211baa6-5777-43f0-889e-0d6ea241dc8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "3c669485-0d29-408e-b380-c0b8d28a9b96"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "3c669485-0d29-408e-b380-c0b8d28a9b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:22.192 11:21:40 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:22.192 11:21:40 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:22.192 11:21:40 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:22.192 11:21:40 -- bdev/blockdev.sh@752 -- # killprocess 76117 00:11:22.192 11:21:40 -- common/autotest_common.sh@936 -- # '[' -z 76117 ']' 00:11:22.192 11:21:40 -- common/autotest_common.sh@940 -- # kill -0 76117 00:11:22.192 11:21:40 -- common/autotest_common.sh@941 -- # uname 00:11:22.192 11:21:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.192 11:21:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76117 00:11:22.192 11:21:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:22.192 11:21:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:22.192 killing process with pid 76117 00:11:22.192 11:21:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76117' 00:11:22.192 11:21:40 -- common/autotest_common.sh@955 -- # kill 76117 00:11:22.192 11:21:40 -- common/autotest_common.sh@960 -- # wait 76117 00:11:22.452 11:21:40 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:22.452 11:21:40 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:22.452 11:21:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:22.452 11:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.452 11:21:40 -- common/autotest_common.sh@10 -- # set +x 00:11:22.452 ************************************ 00:11:22.452 START TEST bdev_hello_world 00:11:22.452 ************************************ 00:11:22.452 11:21:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:22.711 [2024-11-26 11:21:40.720322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:22.711 [2024-11-26 11:21:40.720519] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76163 ] 00:11:22.711 [2024-11-26 11:21:40.886107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.711 [2024-11-26 11:21:40.917709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.970 [2024-11-26 11:21:41.018259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:22.970 [2024-11-26 11:21:41.018378] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:22.970 [2024-11-26 11:21:41.026215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:22.970 [2024-11-26 11:21:41.026311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:22.970 [2024-11-26 11:21:41.034235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:22.970 [2024-11-26 11:21:41.034327] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:22.970 [2024-11-26 11:21:41.034343] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:22.970 [2024-11-26 11:21:41.105337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:22.971 [2024-11-26 11:21:41.105443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.971 [2024-11-26 11:21:41.105485] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:22.971 [2024-11-26 11:21:41.105498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.971 [2024-11-26 11:21:41.108258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.971 [2024-11-26 11:21:41.108313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:23.327 [2024-11-26 11:21:41.235998] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:23.327 [2024-11-26 11:21:41.236085] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:23.327 [2024-11-26 11:21:41.236133] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:23.327 [2024-11-26 11:21:41.236196] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:23.327 [2024-11-26 11:21:41.236253] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:23.327 [2024-11-26 11:21:41.236271] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:23.327 [2024-11-26 11:21:41.236309] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:11:23.327 00:11:23.327 [2024-11-26 11:21:41.236338] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:23.327 00:11:23.327 real 0m0.837s 00:11:23.327 user 0m0.475s 00:11:23.327 sys 0m0.235s 00:11:23.327 11:21:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.327 11:21:41 -- common/autotest_common.sh@10 -- # set +x 00:11:23.327 ************************************ 00:11:23.327 END TEST bdev_hello_world 00:11:23.327 ************************************ 00:11:23.327 11:21:41 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:23.327 11:21:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:23.327 11:21:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.327 11:21:41 -- common/autotest_common.sh@10 -- # set +x 00:11:23.327 ************************************ 00:11:23.327 START TEST bdev_bounds 00:11:23.327 ************************************ 00:11:23.327 11:21:41 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:23.327 11:21:41 -- bdev/blockdev.sh@288 -- # bdevio_pid=76194 00:11:23.327 11:21:41 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:23.327 Process bdevio pid: 76194 00:11:23.327 11:21:41 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 76194' 00:11:23.327 11:21:41 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:23.327 11:21:41 -- bdev/blockdev.sh@291 -- # waitforlisten 76194 00:11:23.327 11:21:41 -- common/autotest_common.sh@829 -- # '[' -z 76194 ']' 00:11:23.327 11:21:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.327 11:21:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.327 11:21:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.327 11:21:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.327 11:21:41 -- common/autotest_common.sh@10 -- # set +x 00:11:23.605 [2024-11-26 11:21:41.611809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:23.605 [2024-11-26 11:21:41.612045] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76194 ] 00:11:23.605 [2024-11-26 11:21:41.775847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.605 [2024-11-26 11:21:41.812729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.605 [2024-11-26 11:21:41.812799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.605 [2024-11-26 11:21:41.812866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.864 [2024-11-26 11:21:41.915278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:23.864 [2024-11-26 11:21:41.915410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:23.864 [2024-11-26 11:21:41.923230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:23.864 [2024-11-26 11:21:41.923344] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:23.864 [2024-11-26 11:21:41.931266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:23.864 [2024-11-26 11:21:41.931379] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:23.864 [2024-11-26 11:21:41.931421] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:23.864 [2024-11-26 11:21:42.007371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:23.864 [2024-11-26 11:21:42.007550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.864 [2024-11-26 11:21:42.007582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:23.864 [2024-11-26 11:21:42.007618] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.864 [2024-11-26 11:21:42.010319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.864 [2024-11-26 11:21:42.010376] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:24.431 11:21:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.431 11:21:42 -- common/autotest_common.sh@862 -- # return 0 00:11:24.431 11:21:42 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:24.690 I/O targets: 00:11:24.690 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:24.690 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:24.690 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:24.690 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:24.690 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:24.690 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:24.690 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:24.690 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:24.691 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:24.691 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:24.691 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:24.691 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:24.691 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:24.691 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:24.691 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:24.691 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
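Every size in the I/O target list above is just block_size x num_blocks, and the composite targets follow from the bdev_get_bdevs dump earlier: the Split Disk entries carve Malloc1 into two 32768-block halves and Malloc2 into eight 8192-block pieces, raid0/concat0 combine two 65536-block bases, raid1 mirrors two of them, TestPT passes through Malloc3, and AIO0 sits on the 5000 x 2048-byte file written with dd. A quick sanity check of that arithmetic (a sketch for illustration, not part of the suite):

  # block_size * num_blocks, cross-checked against the I/O targets above
  echo $(( 65536 * 512 ))   # Malloc0/TestPT/raid1:        33554432 B = 32 MiB
  echo $(( 32768 * 512 ))   # Malloc1p0/p1 split halves:   16777216 B = 16 MiB
  echo $((  8192 * 512 ))   # Malloc2p0..p7 split eighths:  4194304 B =  4 MiB
  echo $(( 131072 * 512 ))  # raid0/concat0, two bases:    67108864 B = 64 MiB
  echo $((  5000 * 2048 ))  # AIO0 backing file from dd:   10240000 B ~ 9.8 MiB

Each of the 16 targets is then run through the same 23-test bdevio checklist, which is where the closing Run Summary's 368 tests (16 suites x 23) and matching pass count come from.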
00:11:24.691 00:11:24.691 00:11:24.691 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.691 http://cunit.sourceforge.net/ 00:11:24.691 00:11:24.691 00:11:24.691 Suite: bdevio tests on: AIO0 00:11:24.691 Test: blockdev write read block ...passed 00:11:24.691 Test: blockdev write zeroes read block ...passed 00:11:24.691 Test: blockdev write zeroes read no split ...passed 00:11:24.691 Test: blockdev write zeroes read split ...passed 00:11:24.691 Test: blockdev write zeroes read split partial ...passed 00:11:24.691 Test: blockdev reset ...passed 00:11:24.691 Test: blockdev write read 8 blocks ...passed 00:11:24.691 Test: blockdev write read size > 128k ...passed 00:11:24.691 Test: blockdev write read invalid size ...passed 00:11:24.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.691 Test: blockdev write read max offset ...passed 00:11:24.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.691 Test: blockdev writev readv 8 blocks ...passed 00:11:24.691 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.691 Test: blockdev writev readv block ...passed 00:11:24.691 Test: blockdev writev readv size > 128k ...passed 00:11:24.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.691 Test: blockdev comparev and writev ...passed 00:11:24.691 Test: blockdev nvme passthru rw ...passed 00:11:24.691 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.691 Test: blockdev nvme admin passthru ...passed 00:11:24.691 Test: blockdev copy ...passed 00:11:24.691 Suite: bdevio tests on: raid1 00:11:24.691 Test: blockdev write read block ...passed 00:11:24.691 Test: blockdev write zeroes read block ...passed 00:11:24.691 Test: blockdev write zeroes read no split ...passed 00:11:24.691 Test: blockdev write zeroes read split ...passed 00:11:24.691 Test: blockdev write zeroes read split partial ...passed 00:11:24.691 Test: blockdev reset ...passed 00:11:24.691 Test: blockdev write read 8 blocks ...passed 00:11:24.691 Test: blockdev write read size > 128k ...passed 00:11:24.691 Test: blockdev write read invalid size ...passed 00:11:24.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.691 Test: blockdev write read max offset ...passed 00:11:24.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.691 Test: blockdev writev readv 8 blocks ...passed 00:11:24.691 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.691 Test: blockdev writev readv block ...passed 00:11:24.691 Test: blockdev writev readv size > 128k ...passed 00:11:24.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.691 Test: blockdev comparev and writev ...passed 00:11:24.691 Test: blockdev nvme passthru rw ...passed 00:11:24.691 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.691 Test: blockdev nvme admin passthru ...passed 00:11:24.691 Test: blockdev copy ...passed 00:11:24.691 Suite: bdevio tests on: concat0 00:11:24.691 Test: blockdev write read block ...passed 00:11:24.691 Test: blockdev write zeroes read block ...passed 00:11:24.691 Test: blockdev write zeroes read no split ...passed 00:11:24.691 Test: blockdev write zeroes read split ...passed 00:11:24.691 Test: blockdev write zeroes read split partial ...passed 00:11:24.691 Test: blockdev reset 
...passed 00:11:24.691 Test: blockdev write read 8 blocks ...passed 00:11:24.691 Test: blockdev write read size > 128k ...passed 00:11:24.691 Test: blockdev write read invalid size ...passed 00:11:24.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.691 Test: blockdev write read max offset ...passed 00:11:24.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.691 Test: blockdev writev readv 8 blocks ...passed 00:11:24.691 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.691 Test: blockdev writev readv block ...passed 00:11:24.691 Test: blockdev writev readv size > 128k ...passed 00:11:24.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.691 Test: blockdev comparev and writev ...passed 00:11:24.691 Test: blockdev nvme passthru rw ...passed 00:11:24.691 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.691 Test: blockdev nvme admin passthru ...passed 00:11:24.691 Test: blockdev copy ...passed 00:11:24.691 Suite: bdevio tests on: raid0 00:11:24.691 Test: blockdev write read block ...passed 00:11:24.691 Test: blockdev write zeroes read block ...passed 00:11:24.691 Test: blockdev write zeroes read no split ...passed 00:11:24.691 Test: blockdev write zeroes read split ...passed 00:11:24.691 Test: blockdev write zeroes read split partial ...passed 00:11:24.691 Test: blockdev reset ...passed 00:11:24.691 Test: blockdev write read 8 blocks ...passed 00:11:24.691 Test: blockdev write read size > 128k ...passed 00:11:24.691 Test: blockdev write read invalid size ...passed 00:11:24.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.691 Test: blockdev write read max offset ...passed 00:11:24.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.691 Test: blockdev writev readv 8 blocks ...passed 00:11:24.691 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.691 Test: blockdev writev readv block ...passed 00:11:24.691 Test: blockdev writev readv size > 128k ...passed 00:11:24.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.691 Test: blockdev comparev and writev ...passed 00:11:24.691 Test: blockdev nvme passthru rw ...passed 00:11:24.691 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.691 Test: blockdev nvme admin passthru ...passed 00:11:24.691 Test: blockdev copy ...passed 00:11:24.691 Suite: bdevio tests on: TestPT 00:11:24.691 Test: blockdev write read block ...passed 00:11:24.691 Test: blockdev write zeroes read block ...passed 00:11:24.691 Test: blockdev write zeroes read no split ...passed 00:11:24.691 Test: blockdev write zeroes read split ...passed 00:11:24.691 Test: blockdev write zeroes read split partial ...passed 00:11:24.691 Test: blockdev reset ...passed 00:11:24.691 Test: blockdev write read 8 blocks ...passed 00:11:24.691 Test: blockdev write read size > 128k ...passed 00:11:24.691 Test: blockdev write read invalid size ...passed 00:11:24.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.691 Test: blockdev write read max offset ...passed 00:11:24.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.691 Test: blockdev writev readv 8 blocks 
...passed 00:11:24.691 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.691 Test: blockdev writev readv block ...passed 00:11:24.691 Test: blockdev writev readv size > 128k ...passed 00:11:24.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.691 Test: blockdev comparev and writev ...passed 00:11:24.691 Test: blockdev nvme passthru rw ...passed 00:11:24.691 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.692 Test: blockdev nvme admin passthru ...passed 00:11:24.692 Test: blockdev copy ...passed 00:11:24.692 Suite: bdevio tests on: Malloc2p7 00:11:24.692 Test: blockdev write read block ...passed 00:11:24.692 Test: blockdev write zeroes read block ...passed 00:11:24.692 Test: blockdev write zeroes read no split ...passed 00:11:24.692 Test: blockdev write zeroes read split ...passed 00:11:24.692 Test: blockdev write zeroes read split partial ...passed 00:11:24.692 Test: blockdev reset ...passed 00:11:24.692 Test: blockdev write read 8 blocks ...passed 00:11:24.692 Test: blockdev write read size > 128k ...passed 00:11:24.692 Test: blockdev write read invalid size ...passed 00:11:24.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.692 Test: blockdev write read max offset ...passed 00:11:24.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.692 Test: blockdev writev readv 8 blocks ...passed 00:11:24.692 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.692 Test: blockdev writev readv block ...passed 00:11:24.692 Test: blockdev writev readv size > 128k ...passed 00:11:24.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.692 Test: blockdev comparev and writev ...passed 00:11:24.692 Test: blockdev nvme passthru rw ...passed 00:11:24.692 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.692 Test: blockdev nvme admin passthru ...passed 00:11:24.692 Test: blockdev copy ...passed 00:11:24.692 Suite: bdevio tests on: Malloc2p6 00:11:24.692 Test: blockdev write read block ...passed 00:11:24.692 Test: blockdev write zeroes read block ...passed 00:11:24.692 Test: blockdev write zeroes read no split ...passed 00:11:24.692 Test: blockdev write zeroes read split ...passed 00:11:24.692 Test: blockdev write zeroes read split partial ...passed 00:11:24.692 Test: blockdev reset ...passed 00:11:24.692 Test: blockdev write read 8 blocks ...passed 00:11:24.692 Test: blockdev write read size > 128k ...passed 00:11:24.692 Test: blockdev write read invalid size ...passed 00:11:24.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.692 Test: blockdev write read max offset ...passed 00:11:24.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.692 Test: blockdev writev readv 8 blocks ...passed 00:11:24.692 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.692 Test: blockdev writev readv block ...passed 00:11:24.692 Test: blockdev writev readv size > 128k ...passed 00:11:24.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.692 Test: blockdev comparev and writev ...passed 00:11:24.692 Test: blockdev nvme passthru rw ...passed 00:11:24.692 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.692 Test: blockdev nvme admin passthru ...passed 00:11:24.692 Test: blockdev copy ...passed 
00:11:24.692 Suite: bdevio tests on: Malloc2p5 00:11:24.692 Test: blockdev write read block ...passed 00:11:24.692 Test: blockdev write zeroes read block ...passed 00:11:24.692 Test: blockdev write zeroes read no split ...passed 00:11:24.692 Test: blockdev write zeroes read split ...passed 00:11:24.692 Test: blockdev write zeroes read split partial ...passed 00:11:24.692 Test: blockdev reset ...passed 00:11:24.692 Test: blockdev write read 8 blocks ...passed 00:11:24.692 Test: blockdev write read size > 128k ...passed 00:11:24.692 Test: blockdev write read invalid size ...passed 00:11:24.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.692 Test: blockdev write read max offset ...passed 00:11:24.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.692 Test: blockdev writev readv 8 blocks ...passed 00:11:24.692 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.692 Test: blockdev writev readv block ...passed 00:11:24.692 Test: blockdev writev readv size > 128k ...passed 00:11:24.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.692 Test: blockdev comparev and writev ...passed 00:11:24.692 Test: blockdev nvme passthru rw ...passed 00:11:24.692 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.692 Test: blockdev nvme admin passthru ...passed 00:11:24.692 Test: blockdev copy ...passed 00:11:24.692 Suite: bdevio tests on: Malloc2p4 00:11:24.692 Test: blockdev write read block ...passed 00:11:24.692 Test: blockdev write zeroes read block ...passed 00:11:24.692 Test: blockdev write zeroes read no split ...passed 00:11:24.692 Test: blockdev write zeroes read split ...passed 00:11:24.692 Test: blockdev write zeroes read split partial ...passed 00:11:24.692 Test: blockdev reset ...passed 00:11:24.692 Test: blockdev write read 8 blocks ...passed 00:11:24.692 Test: blockdev write read size > 128k ...passed 00:11:24.692 Test: blockdev write read invalid size ...passed 00:11:24.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.692 Test: blockdev write read max offset ...passed 00:11:24.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.692 Test: blockdev writev readv 8 blocks ...passed 00:11:24.951 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.952 Test: blockdev writev readv block ...passed 00:11:24.952 Test: blockdev writev readv size > 128k ...passed 00:11:24.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.952 Test: blockdev comparev and writev ...passed 00:11:24.952 Test: blockdev nvme passthru rw ...passed 00:11:24.952 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.952 Test: blockdev nvme admin passthru ...passed 00:11:24.952 Test: blockdev copy ...passed 00:11:24.952 Suite: bdevio tests on: Malloc2p3 00:11:24.952 Test: blockdev write read block ...passed 00:11:24.952 Test: blockdev write zeroes read block ...passed 00:11:24.952 Test: blockdev write zeroes read no split ...passed 00:11:24.952 Test: blockdev write zeroes read split ...passed 00:11:24.952 Test: blockdev write zeroes read split partial ...passed 00:11:24.952 Test: blockdev reset ...passed 00:11:24.952 Test: blockdev write read 8 blocks ...passed 00:11:24.952 Test: blockdev write read size > 128k ...passed 00:11:24.952 Test: 
blockdev write read invalid size ...passed 00:11:24.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.952 Test: blockdev write read max offset ...passed 00:11:24.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.952 Test: blockdev writev readv 8 blocks ...passed 00:11:24.952 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.952 Test: blockdev writev readv block ...passed 00:11:24.952 Test: blockdev writev readv size > 128k ...passed 00:11:24.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.952 Test: blockdev comparev and writev ...passed 00:11:24.952 Test: blockdev nvme passthru rw ...passed 00:11:24.952 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.952 Test: blockdev nvme admin passthru ...passed 00:11:24.952 Test: blockdev copy ...passed 00:11:24.952 Suite: bdevio tests on: Malloc2p2 00:11:24.952 Test: blockdev write read block ...passed 00:11:24.952 Test: blockdev write zeroes read block ...passed 00:11:24.952 Test: blockdev write zeroes read no split ...passed 00:11:24.952 Test: blockdev write zeroes read split ...passed 00:11:24.952 Test: blockdev write zeroes read split partial ...passed 00:11:24.952 Test: blockdev reset ...passed 00:11:24.952 Test: blockdev write read 8 blocks ...passed 00:11:24.952 Test: blockdev write read size > 128k ...passed 00:11:24.952 Test: blockdev write read invalid size ...passed 00:11:24.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.952 Test: blockdev write read max offset ...passed 00:11:24.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.952 Test: blockdev writev readv 8 blocks ...passed 00:11:24.952 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.952 Test: blockdev writev readv block ...passed 00:11:24.952 Test: blockdev writev readv size > 128k ...passed 00:11:24.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.952 Test: blockdev comparev and writev ...passed 00:11:24.952 Test: blockdev nvme passthru rw ...passed 00:11:24.952 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.952 Test: blockdev nvme admin passthru ...passed 00:11:24.952 Test: blockdev copy ...passed 00:11:24.952 Suite: bdevio tests on: Malloc2p1 00:11:24.952 Test: blockdev write read block ...passed 00:11:24.952 Test: blockdev write zeroes read block ...passed 00:11:24.952 Test: blockdev write zeroes read no split ...passed 00:11:24.952 Test: blockdev write zeroes read split ...passed 00:11:24.952 Test: blockdev write zeroes read split partial ...passed 00:11:24.952 Test: blockdev reset ...passed 00:11:24.952 Test: blockdev write read 8 blocks ...passed 00:11:24.952 Test: blockdev write read size > 128k ...passed 00:11:24.952 Test: blockdev write read invalid size ...passed 00:11:24.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.952 Test: blockdev write read max offset ...passed 00:11:24.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.952 Test: blockdev writev readv 8 blocks ...passed 00:11:24.952 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.952 Test: blockdev writev readv block ...passed 
00:11:24.952 Test: blockdev writev readv size > 128k ...passed
00:11:24.952 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.952 Test: blockdev comparev and writev ...passed
00:11:24.952 Test: blockdev nvme passthru rw ...passed
00:11:24.952 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.952 Test: blockdev nvme admin passthru ...passed
00:11:24.952 Test: blockdev copy ...passed
00:11:24.952 Suite: bdevio tests on: Malloc2p0
00:11:24.952 Test: blockdev write read block ...passed
00:11:24.952 Test: blockdev write zeroes read block ...passed
00:11:24.952 Test: blockdev write zeroes read no split ...passed
00:11:24.952 Test: blockdev write zeroes read split ...passed
00:11:24.952 Test: blockdev write zeroes read split partial ...passed
00:11:24.952 Test: blockdev reset ...passed
00:11:24.952 Test: blockdev write read 8 blocks ...passed
00:11:24.952 Test: blockdev write read size > 128k ...passed
00:11:24.952 Test: blockdev write read invalid size ...passed
00:11:24.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.952 Test: blockdev write read max offset ...passed
00:11:24.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.952 Test: blockdev writev readv 8 blocks ...passed
00:11:24.952 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.952 Test: blockdev writev readv block ...passed
00:11:24.952 Test: blockdev writev readv size > 128k ...passed
00:11:24.952 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.952 Test: blockdev comparev and writev ...passed
00:11:24.952 Test: blockdev nvme passthru rw ...passed
00:11:24.952 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.952 Test: blockdev nvme admin passthru ...passed
00:11:24.952 Test: blockdev copy ...passed
00:11:24.952 Suite: bdevio tests on: Malloc1p1
00:11:24.952 Test: blockdev write read block ...passed
00:11:24.952 Test: blockdev write zeroes read block ...passed
00:11:24.952 Test: blockdev write zeroes read no split ...passed
00:11:24.952 Test: blockdev write zeroes read split ...passed
00:11:24.952 Test: blockdev write zeroes read split partial ...passed
00:11:24.952 Test: blockdev reset ...passed
00:11:24.952 Test: blockdev write read 8 blocks ...passed
00:11:24.952 Test: blockdev write read size > 128k ...passed
00:11:24.952 Test: blockdev write read invalid size ...passed
00:11:24.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.952 Test: blockdev write read max offset ...passed
00:11:24.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.952 Test: blockdev writev readv 8 blocks ...passed
00:11:24.952 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.952 Test: blockdev writev readv block ...passed
00:11:24.952 Test: blockdev writev readv size > 128k ...passed
00:11:24.952 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.952 Test: blockdev comparev and writev ...passed
00:11:24.952 Test: blockdev nvme passthru rw ...passed
00:11:24.952 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.952 Test: blockdev nvme admin passthru ...passed
00:11:24.952 Test: blockdev copy ...passed
00:11:24.952 Suite: bdevio tests on: Malloc1p0
00:11:24.953 Test: blockdev write read block ...passed
00:11:24.953 Test: blockdev write zeroes read block ...passed
00:11:24.953 Test: blockdev write zeroes read no split ...passed
00:11:24.953 Test: blockdev write zeroes read split ...passed
00:11:24.953 Test: blockdev write zeroes read split partial ...passed
00:11:24.953 Test: blockdev reset ...passed
00:11:24.953 Test: blockdev write read 8 blocks ...passed
00:11:24.953 Test: blockdev write read size > 128k ...passed
00:11:24.953 Test: blockdev write read invalid size ...passed
00:11:24.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.953 Test: blockdev write read max offset ...passed
00:11:24.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.953 Test: blockdev writev readv 8 blocks ...passed
00:11:24.953 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.953 Test: blockdev writev readv block ...passed
00:11:24.953 Test: blockdev writev readv size > 128k ...passed
00:11:24.953 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.953 Test: blockdev comparev and writev ...passed
00:11:24.953 Test: blockdev nvme passthru rw ...passed
00:11:24.953 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.953 Test: blockdev nvme admin passthru ...passed
00:11:24.953 Test: blockdev copy ...passed
00:11:24.953 Suite: bdevio tests on: Malloc0
00:11:24.953 Test: blockdev write read block ...passed
00:11:24.953 Test: blockdev write zeroes read block ...passed
00:11:24.953 Test: blockdev write zeroes read no split ...passed
00:11:24.953 Test: blockdev write zeroes read split ...passed
00:11:24.953 Test: blockdev write zeroes read split partial ...passed
00:11:24.953 Test: blockdev reset ...passed
00:11:24.953 Test: blockdev write read 8 blocks ...passed
00:11:24.953 Test: blockdev write read size > 128k ...passed
00:11:24.953 Test: blockdev write read invalid size ...passed
00:11:24.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.953 Test: blockdev write read max offset ...passed
00:11:24.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.953 Test: blockdev writev readv 8 blocks ...passed
00:11:24.953 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.953 Test: blockdev writev readv block ...passed
00:11:24.953 Test: blockdev writev readv size > 128k ...passed
00:11:24.953 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.953 Test: blockdev comparev and writev ...passed
00:11:24.953 Test: blockdev nvme passthru rw ...passed
00:11:24.953 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.953 Test: blockdev nvme admin passthru ...passed
00:11:24.953 Test: blockdev copy ...passed
00:11:24.953 
00:11:24.953 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:24.953               suites     16     16    n/a      0        0
00:11:24.953                tests    368    368    368      0        0
00:11:24.953              asserts   2224   2224   2224      0      n/a
00:11:24.953 
00:11:24.953 Elapsed time = 0.780 seconds
00:11:24.953 0
00:11:24.953 11:21:43 -- bdev/blockdev.sh@293 -- # killprocess 76194
00:11:24.953 11:21:43 -- common/autotest_common.sh@936 -- # '[' -z 76194 ']'
00:11:24.953 11:21:43 -- common/autotest_common.sh@940 -- # kill -0 76194
00:11:24.953 11:21:43 -- common/autotest_common.sh@941 -- # uname
00:11:24.953 11:21:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
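The @936-@960 lines above are the killprocess helper from autotest_common.sh tearing down the bdevio app. Reassembled from the xtrace, the flow is roughly the sketch below; treat it as a reconstruction, not the verbatim helper (the empty-pid and sudo branches are only hinted at in the trace, and the "already dead" fallback is an assumption):

    # killprocess as suggested by the trace (hypothetical reconstruction)
    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                             # @936: refuse an empty pid
        kill -0 "$pid" || return 0                            # @940: probe liveness (fallback assumed)
        local process_name
        if [[ $(uname) == Linux ]]; then                      # @941
            process_name=$(ps --no-headers -o comm= "$pid")   # @942
        fi
        # @946: a sudo-owned process would need different signalling (elided here)
        echo "killing process with pid $pid"                  # @954
        kill "$pid"                                           # @955
        wait "$pid"                                           # @960: reap it and collect the exit status
    }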
00:11:24.953 11:21:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76194
00:11:24.953 11:21:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:24.953 11:21:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:24.953 killing process with pid 76194 11:21:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76194'
00:11:24.953 11:21:43 -- common/autotest_common.sh@955 -- # kill 76194
00:11:24.953 11:21:43 -- common/autotest_common.sh@960 -- # wait 76194
00:11:25.212 11:21:43 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:11:25.212 
00:11:25.212 real 0m1.777s
00:11:25.212 user 0m4.433s
00:11:25.212 sys 0m0.464s
00:11:25.212 11:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:25.212 ************************************
00:11:25.212 11:21:43 -- common/autotest_common.sh@10 -- # set +x
00:11:25.212 END TEST bdev_bounds
00:11:25.212 ************************************
00:11:25.212 11:21:43 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:25.212 11:21:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:11:25.212 11:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:25.212 11:21:43 -- common/autotest_common.sh@10 -- # set +x
00:11:25.212 ************************************
00:11:25.212 START TEST bdev_nbd
00:11:25.212 ************************************
00:11:25.212 11:21:43 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:25.212 11:21:43 -- bdev/blockdev.sh@298 -- # uname -s
00:11:25.212 11:21:43 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:11:25.212 11:21:43 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:25.212 11:21:43 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:25.212 11:21:43 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:25.212 11:21:43 -- bdev/blockdev.sh@302 -- # local bdev_all
00:11:25.212 11:21:43 -- bdev/blockdev.sh@303 -- # local bdev_num=16
00:11:25.212 11:21:43 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:11:25.212 11:21:43 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:25.212 11:21:43 -- bdev/blockdev.sh@309 -- # local nbd_all
00:11:25.212 11:21:43 -- bdev/blockdev.sh@310 -- # bdev_num=16
00:11:25.212 11:21:43 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:25.212 11:21:43 -- bdev/blockdev.sh@312 -- # local nbd_list
00:11:25.212 11:21:43 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:25.212 11:21:43 -- bdev/blockdev.sh@313 -- # local bdev_list
00:11:25.212 11:21:43 -- bdev/blockdev.sh@316 -- # nbd_pid=76244
00:11:25.212 11:21:43 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:25.212 11:21:43 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:25.212 11:21:43 -- bdev/blockdev.sh@318 -- # waitforlisten 76244 /var/tmp/spdk-nbd.sock
00:11:25.212 11:21:43 -- common/autotest_common.sh@829 -- # '[' -z 76244 ']'
00:11:25.212 11:21:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:25.212 11:21:43 -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:25.212 11:21:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:25.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 11:21:43 -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:25.212 11:21:43 -- common/autotest_common.sh@10 -- # set +x
00:11:25.471 [2024-11-26 11:21:43.431330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:25.471 [2024-11-26 11:21:43.431528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:25.472 [2024-11-26 11:21:43.587729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.472 [2024-11-26 11:21:43.622723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.731 [2024-11-26 11:21:43.725941] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:25.731 [2024-11-26 11:21:43.726040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:25.731 [2024-11-26 11:21:43.733876] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:25.731 [2024-11-26 11:21:43.733952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:25.731 [2024-11-26 11:21:43.741901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:25.731 [2024-11-26 11:21:43.741972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:25.731 [2024-11-26 11:21:43.741989] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:25.731 [2024-11-26 11:21:43.819725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:25.731 [2024-11-26 11:21:43.819839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.731 [2024-11-26 11:21:43.819910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980
00:11:25.731 [2024-11-26 11:21:43.819935] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.731 [2024-11-26 11:21:43.822522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.731 [2024-11-26 11:21:43.822582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:26.300 11:21:44 -- common/autotest_common.sh@858 -- # (( i == 0 ))
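What happened here: nbd_function_test forked bdev_svc with the JSON config, installed a cleanup trap, and waitforlisten (max_retries=100 per the trace) polled until the UNIX-domain RPC socket answered; only then do the bdev/passthru notices above appear. A minimal sketch of that startup handshake, assuming spdk_get_version as the liveness probe (the trace does not show which RPC the helper actually issues) and an assumed 0.1 s back-off:

    rpc_server=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    for ((retry = 0; retry < 100; retry++)); do
        # the RPC only answers once the app has created the socket and started its reactor
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" spdk_get_version &> /dev/null; then
            break
        fi
        sleep 0.1
    done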
00:11:26.300 11:21:44 -- common/autotest_common.sh@862 -- # return 0
00:11:26.300 11:21:44 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@24 -- # local i
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.300 11:21:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:11:26.560 11:21:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:26.560 11:21:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:26.560 11:21:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:26.560 11:21:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:26.560 11:21:44 -- common/autotest_common.sh@867 -- # local i
00:11:26.560 11:21:44 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.560 11:21:44 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.560 11:21:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:26.560 11:21:44 -- common/autotest_common.sh@871 -- # break
00:11:26.560 11:21:44 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.560 11:21:44 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.560 11:21:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.560 1+0 records in
00:11:26.560 1+0 records out
00:11:26.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276525 s, 14.8 MB/s
00:11:26.560 11:21:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.560 11:21:44 -- common/autotest_common.sh@884 -- # size=4096
00:11:26.560 11:21:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.560 11:21:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.560 11:21:44 -- common/autotest_common.sh@887 -- # return 0
00:11:26.560 11:21:44 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.560 11:21:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.560 11:21:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:11:26.818 11:21:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:26.818 11:21:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:26.818 11:21:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:26.818 11:21:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:26.818 11:21:44 -- common/autotest_common.sh@867 -- # local i
00:11:26.818 11:21:44 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.818 11:21:44 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.818 11:21:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:26.818 11:21:44 -- common/autotest_common.sh@871 -- # break
00:11:26.818 11:21:44 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.818 11:21:44 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.818 11:21:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.818 1+0 records in
00:11:26.818 1+0 records out
00:11:26.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338693 s, 12.1 MB/s
00:11:26.818 11:21:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.818 11:21:44 -- common/autotest_common.sh@884 -- # size=4096
00:11:26.818 11:21:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.818 11:21:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.818 11:21:44 -- common/autotest_common.sh@887 -- # return 0
00:11:26.818 11:21:44 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.818 11:21:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.818 11:21:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:11:27.077 11:21:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:11:27.077 11:21:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:11:27.077 11:21:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:11:27.077 11:21:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:11:27.077 11:21:45 -- common/autotest_common.sh@867 -- # local i
00:11:27.077 11:21:45 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.077 11:21:45 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.077 11:21:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:11:27.077 11:21:45 -- common/autotest_common.sh@871 -- # break
00:11:27.077 11:21:45 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.077 11:21:45 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.078 11:21:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.078 1+0 records in
00:11:27.078 1+0 records out
00:11:27.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354993 s, 11.5 MB/s
00:11:27.078 11:21:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.078 11:21:45 -- common/autotest_common.sh@884 -- # size=4096
00:11:27.078 11:21:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.078 11:21:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.078 11:21:45 -- common/autotest_common.sh@887 -- # return 0
00:11:27.078 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.078 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
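Every nbd_start_disk above is followed by the same waitfornbd pattern: poll /proc/partitions (up to 20 tries) until the kernel registers the device, then prove it actually serves data with a single 4 KiB direct read and check that a non-zero size was copied. Pieced together from the @866-@887 lines, the helper looks approximately like this; the sleep interval and the /tmp scratch path are assumptions (the real runs use the repo's nbdtest file):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                          # @869
            grep -q -w "$nbd_name" /proc/partitions && break     # @870/@871: device registered
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do                          # @882
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break   # @883
        done
        size=$(stat -c %s /tmp/nbdtest)                          # @884
        rm -f /tmp/nbdtest                                       # @885
        [ "$size" != 0 ]                                         # @886/@887: fail on an empty read
    }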
00:11:27.078 11:21:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:11:27.337 11:21:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:11:27.337 11:21:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:11:27.337 11:21:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:11:27.337 11:21:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:11:27.337 11:21:45 -- common/autotest_common.sh@867 -- # local i
00:11:27.337 11:21:45 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.337 11:21:45 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.337 11:21:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:11:27.337 11:21:45 -- common/autotest_common.sh@871 -- # break
00:11:27.337 11:21:45 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.337 11:21:45 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.337 11:21:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.337 1+0 records in
00:11:27.337 1+0 records out
00:11:27.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508503 s, 8.1 MB/s
00:11:27.337 11:21:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.337 11:21:45 -- common/autotest_common.sh@884 -- # size=4096
00:11:27.337 11:21:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.337 11:21:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.337 11:21:45 -- common/autotest_common.sh@887 -- # return 0
00:11:27.337 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.337 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.337 11:21:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:11:27.596 11:21:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:11:27.596 11:21:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:11:27.596 11:21:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:11:27.596 11:21:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4
00:11:27.596 11:21:45 -- common/autotest_common.sh@867 -- # local i
00:11:27.596 11:21:45 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.596 11:21:45 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.596 11:21:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions
00:11:27.596 11:21:45 -- common/autotest_common.sh@871 -- # break
00:11:27.596 11:21:45 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.596 11:21:45 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.596 11:21:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.596 1+0 records in
00:11:27.596 1+0 records out
00:11:27.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034443 s, 11.9 MB/s
00:11:27.596 11:21:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.596 11:21:45 -- common/autotest_common.sh@884 -- # size=4096
00:11:27.596 11:21:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.596 11:21:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.596 11:21:45 -- common/autotest_common.sh@887 -- # return 0
00:11:27.596 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.596 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.596 11:21:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:11:27.855 11:21:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:11:27.855 11:21:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:11:27.855 11:21:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:11:27.855 11:21:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5
00:11:27.855 11:21:45 -- common/autotest_common.sh@867 -- # local i
00:11:27.855 11:21:45 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.855 11:21:45 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.855 11:21:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions
00:11:27.855 11:21:45 -- common/autotest_common.sh@871 -- # break
00:11:27.855 11:21:45 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.855 11:21:45 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.855 11:21:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.855 1+0 records in
00:11:27.855 1+0 records out
00:11:27.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369133 s, 11.1 MB/s
00:11:27.855 11:21:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.855 11:21:45 -- common/autotest_common.sh@884 -- # size=4096
00:11:27.855 11:21:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.855 11:21:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.855 11:21:45 -- common/autotest_common.sh@887 -- # return 0
00:11:27.855 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.855 11:21:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.855 11:21:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:11:28.115 11:21:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:11:28.115 11:21:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:11:28.115 11:21:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:11:28.115 11:21:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6
00:11:28.115 11:21:46 -- common/autotest_common.sh@867 -- # local i
00:11:28.115 11:21:46 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.115 11:21:46 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.115 11:21:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions
00:11:28.115 11:21:46 -- common/autotest_common.sh@871 -- # break
00:11:28.115 11:21:46 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.115 11:21:46 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.115 11:21:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.115 1+0 records in
00:11:28.115 1+0 records out
00:11:28.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565346 s, 7.2 MB/s
00:11:28.115 11:21:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.115 11:21:46 -- common/autotest_common.sh@884 -- # size=4096
00:11:28.115 11:21:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.115 11:21:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.115 11:21:46 -- common/autotest_common.sh@887 -- # return 0
00:11:28.115 11:21:46 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.115 11:21:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.115 11:21:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:11:28.374 11:21:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:11:28.374 11:21:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:11:28.374 11:21:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:11:28.374 11:21:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7
00:11:28.374 11:21:46 -- common/autotest_common.sh@867 -- # local i
00:11:28.374 11:21:46 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.374 11:21:46 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.374 11:21:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions
00:11:28.374 11:21:46 -- common/autotest_common.sh@871 -- # break
00:11:28.374 11:21:46 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.374 11:21:46 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.374 11:21:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.374 1+0 records in
00:11:28.374 1+0 records out
00:11:28.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335323 s, 12.2 MB/s
00:11:28.374 11:21:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.374 11:21:46 -- common/autotest_common.sh@884 -- # size=4096
00:11:28.374 11:21:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.374 11:21:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.374 11:21:46 -- common/autotest_common.sh@887 -- # return 0
00:11:28.374 11:21:46 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.374 11:21:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.374 11:21:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:11:28.634 11:21:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:11:28.634 11:21:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:11:28.634 11:21:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:11:28.634 11:21:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8
00:11:28.634 11:21:46 -- common/autotest_common.sh@867 -- # local i
00:11:28.634 11:21:46 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.634 11:21:46 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.634 11:21:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions
00:11:28.634 11:21:46 -- common/autotest_common.sh@871 -- # break
00:11:28.634 11:21:46 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.634 11:21:46 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.634 11:21:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.634 1+0 records in
00:11:28.634 1+0 records out
00:11:28.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547608 s, 7.5 MB/s
00:11:28.634 11:21:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.634 11:21:46 -- common/autotest_common.sh@884 -- # size=4096
00:11:28.634 11:21:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.634 11:21:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.634 11:21:46 -- common/autotest_common.sh@887 -- # return 0
00:11:28.634 11:21:46 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.634 11:21:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.634 11:21:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:11:28.893 11:21:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:11:28.893 11:21:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:11:28.893 11:21:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:11:28.893 11:21:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9
00:11:28.893 11:21:47 -- common/autotest_common.sh@867 -- # local i
00:11:28.893 11:21:47 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.893 11:21:47 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.893 11:21:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions
00:11:28.893 11:21:47 -- common/autotest_common.sh@871 -- # break
00:11:28.893 11:21:47 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.893 11:21:47 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.893 11:21:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.893 1+0 records in
00:11:28.893 1+0 records out
00:11:28.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501196 s, 8.2 MB/s
00:11:28.893 11:21:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.893 11:21:47 -- common/autotest_common.sh@884 -- # size=4096
00:11:28.893 11:21:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.893 11:21:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.893 11:21:47 -- common/autotest_common.sh@887 -- # return 0
00:11:28.893 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.893 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.893 11:21:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:11:29.152 11:21:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:11:29.152 11:21:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:11:29.152 11:21:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:11:29.152 11:21:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10
00:11:29.152 11:21:47 -- common/autotest_common.sh@867 -- # local i
00:11:29.152 11:21:47 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:29.152 11:21:47 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:29.152 11:21:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions
00:11:29.152 11:21:47 -- common/autotest_common.sh@871 -- # break
00:11:29.152 11:21:47 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:29.152 11:21:47 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:29.152 11:21:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:29.152 1+0 records in
00:11:29.152 1+0 records out
00:11:29.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628159 s, 6.5 MB/s
00:11:29.152 11:21:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.152 11:21:47 -- common/autotest_common.sh@884 -- # size=4096
00:11:29.152 11:21:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.152 11:21:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:29.152 11:21:47 -- common/autotest_common.sh@887 -- # return 0
00:11:29.152 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:29.152 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:29.152 11:21:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:11:29.412 11:21:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:11:29.412 11:21:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:11:29.412 11:21:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:11:29.412 11:21:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11
00:11:29.412 11:21:47 -- common/autotest_common.sh@867 -- # local i
00:11:29.412 11:21:47 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:29.412 11:21:47 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:29.412 11:21:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions
00:11:29.412 11:21:47 -- common/autotest_common.sh@871 -- # break
00:11:29.412 11:21:47 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:29.412 11:21:47 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:29.412 11:21:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:29.412 1+0 records in
00:11:29.412 1+0 records out
00:11:29.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651735 s, 6.3 MB/s
00:11:29.412 11:21:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.412 11:21:47 -- common/autotest_common.sh@884 -- # size=4096
00:11:29.412 11:21:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.412 11:21:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:29.412 11:21:47 -- common/autotest_common.sh@887 -- # return 0
00:11:29.412 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:29.412 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:29.671 11:21:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:11:29.929 11:21:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:11:29.929 11:21:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:11:29.929 11:21:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:11:29.929 11:21:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12
00:11:29.929 11:21:47 -- common/autotest_common.sh@867 -- # local i
00:11:29.929 11:21:47 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:29.929 11:21:47 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:29.930 11:21:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions
00:11:29.930 11:21:47 -- common/autotest_common.sh@871 -- # break
00:11:29.930 11:21:47 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:29.930 11:21:47 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:29.930 11:21:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:29.930 1+0 records in
00:11:29.930 1+0 records out
00:11:29.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848549 s, 4.8 MB/s
00:11:29.930 11:21:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.930 11:21:47 -- common/autotest_common.sh@884 -- # size=4096
00:11:29.930 11:21:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.930 11:21:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:29.930 11:21:47 -- common/autotest_common.sh@887 -- # return 0
00:11:29.930 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:29.930 11:21:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:29.930 11:21:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:11:30.189 11:21:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:11:30.189 11:21:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:11:30.189 11:21:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:11:30.189 11:21:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13
00:11:30.189 11:21:48 -- common/autotest_common.sh@867 -- # local i
00:11:30.189 11:21:48 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:30.189 11:21:48 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:30.189 11:21:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions
00:11:30.189 11:21:48 -- common/autotest_common.sh@871 -- # break
00:11:30.189 11:21:48 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:30.189 11:21:48 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:30.189 11:21:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:30.189 1+0 records in
00:11:30.189 1+0 records out
00:11:30.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712949 s, 5.7 MB/s
00:11:30.189 11:21:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:30.189 11:21:48 -- common/autotest_common.sh@884 -- # size=4096
00:11:30.189 11:21:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:30.189 11:21:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:30.189 11:21:48 -- common/autotest_common.sh@887 -- # return 0
00:11:30.189 11:21:48 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:30.189 11:21:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:30.189 11:21:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:11:30.448 11:21:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:11:30.448 11:21:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:11:30.448 11:21:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:11:30.448 11:21:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14
00:11:30.448 11:21:48 -- common/autotest_common.sh@867 -- # local i
00:11:30.448 11:21:48 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:30.448 11:21:48 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:30.448 11:21:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions
00:11:30.448 11:21:48 -- common/autotest_common.sh@871 -- # break
00:11:30.448 11:21:48 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:30.448 11:21:48 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:30.448 11:21:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:30.448 1+0 records in
00:11:30.448 1+0 records out
00:11:30.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741613 s, 5.5 MB/s
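A side note on the dd figures above: the rate is simply bytes divided by elapsed time, with dd's MB meaning 10^6 bytes, so the spread from 14.8 MB/s down to 4.8 MB/s on identical 4 KiB direct reads is just per-I/O latency jitter, not a throughput difference. The 0.000741613 s transfer, for instance, checks out as:

    $ awk 'BEGIN { printf "%.1f MB/s\n", 4096 / 0.000741613 / 1e6 }'
    5.5 MB/s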
00:11:30.448 11:21:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:30.448 11:21:48 -- common/autotest_common.sh@884 -- # size=4096
00:11:30.448 11:21:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:30.448 11:21:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:30.448 11:21:48 -- common/autotest_common.sh@887 -- # return 0
00:11:30.448 11:21:48 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:30.448 11:21:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:30.448 11:21:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:11:30.708 11:21:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:11:30.708 11:21:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:11:30.708 11:21:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:11:30.708 11:21:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15
00:11:30.708 11:21:48 -- common/autotest_common.sh@867 -- # local i
00:11:30.708 11:21:48 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:30.708 11:21:48 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:30.708 11:21:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions
00:11:30.708 11:21:48 -- common/autotest_common.sh@871 -- # break
00:11:30.708 11:21:48 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:30.708 11:21:48 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:30.708 11:21:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:30.708 1+0 records in
00:11:30.708 1+0 records out
00:11:30.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111801 s, 3.7 MB/s
00:11:30.708 11:21:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:30.708 11:21:48 -- common/autotest_common.sh@884 -- # size=4096
00:11:30.708 11:21:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:30.708 11:21:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:30.708 11:21:48 -- common/autotest_common.sh@887 -- # return 0
00:11:30.708 11:21:48 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:30.708 11:21:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:30.708 11:21:48 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:30.967 11:21:48 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd0",
00:11:30.967 "bdev_name": "Malloc0"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd1",
00:11:30.967 "bdev_name": "Malloc1p0"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd2",
00:11:30.967 "bdev_name": "Malloc1p1"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd3",
00:11:30.967 "bdev_name": "Malloc2p0"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd4",
00:11:30.967 "bdev_name": "Malloc2p1"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd5",
00:11:30.967 "bdev_name": "Malloc2p2"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd6",
00:11:30.967 "bdev_name": "Malloc2p3"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd7",
00:11:30.967 "bdev_name": "Malloc2p4"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd8",
00:11:30.967 "bdev_name": "Malloc2p5"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd9",
00:11:30.967 "bdev_name": "Malloc2p6"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd10",
00:11:30.967 "bdev_name": "Malloc2p7"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd11",
00:11:30.967 "bdev_name": "TestPT"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd12",
00:11:30.967 "bdev_name": "raid0"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd13",
00:11:30.967 "bdev_name": "concat0"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd14",
00:11:30.967 "bdev_name": "raid1"
00:11:30.967 },
00:11:30.967 {
00:11:30.967 "nbd_device": "/dev/nbd15",
00:11:30.967 "bdev_name": "AIO0"
00:11:30.967 }
00:11:30.967 ]'
00:11:30.967 11:21:48 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:11:30.967 11:21:49 -- bdev/nbd_common.sh@119 -- # echo '[
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd0",
00:11:30.968 "bdev_name": "Malloc0"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd1",
00:11:30.968 "bdev_name": "Malloc1p0"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd2",
00:11:30.968 "bdev_name": "Malloc1p1"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd3",
00:11:30.968 "bdev_name": "Malloc2p0"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd4",
00:11:30.968 "bdev_name": "Malloc2p1"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd5",
00:11:30.968 "bdev_name": "Malloc2p2"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd6",
00:11:30.968 "bdev_name": "Malloc2p3"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd7",
00:11:30.968 "bdev_name": "Malloc2p4"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd8",
00:11:30.968 "bdev_name": "Malloc2p5"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd9",
00:11:30.968 "bdev_name": "Malloc2p6"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd10",
00:11:30.968 "bdev_name": "Malloc2p7"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd11",
00:11:30.968 "bdev_name": "TestPT"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd12",
00:11:30.968 "bdev_name": "raid0"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd13",
00:11:30.968 "bdev_name": "concat0"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd14",
00:11:30.968 "bdev_name": "raid1"
00:11:30.968 },
00:11:30.968 {
00:11:30.968 "nbd_device": "/dev/nbd15",
00:11:30.968 "bdev_name": "AIO0"
00:11:30.968 }
00:11:30.968 ]'
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15')
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@51 -- # local i
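The JSON array above is the nbd_get_disks RPC result, and the trace shows nbd_stop_disks being handed a device list that was flattened out of it with the jq filter at @119. Condensed, the pattern the helper runs through is:

    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')   # one /dev/nbdN per line
    for dev in $nbd_disks_name; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    done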
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.968 11:21:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@41 -- # break
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@45 -- # return 0
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.226 11:21:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@41 -- # break
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@45 -- # return 0
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@41 -- # break
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@45 -- # return 0
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.484 11:21:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@41 -- # break
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@45 -- # return 0
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.743 11:21:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@41 -- # break
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@45 -- # return 0
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.001 11:21:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@41 -- # break
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@45 -- # return 0
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.259 11:21:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@41 -- # break
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@45 -- # return 0
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.517 11:21:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@41 -- # break
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@45 -- # return 0
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.776 11:21:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@41 -- # break
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@45 -- # return 0
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.033 11:21:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@41 -- # break
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@45 -- # return 0
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.291 11:21:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@41 -- # break
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@45 -- # return 0
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.550 11:21:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@41 -- # break
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@45 -- # return 0
00:11:33.807 11:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@41 -- # break
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@45 -- # return 0
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.807 11:21:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:34.064 11:21:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:34.064 11:21:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:34.064 11:21:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:34.064 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:34.064 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
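Each nbd_stop_disk above is paired with waitfornbd_exit, the mirror image of waitfornbd: poll /proc/partitions until the device name disappears, confirming the kernel actually released it. From the @35-@45 lines, the helper is approximately the following; the back-off sleep is an assumption, since the trace only shows the loop, the grep, and the break:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                          # @37
            if grep -q -w "$nbd_name" /proc/partitions; then     # @38: still registered
                sleep 0.1
            else
                break                                            # @41: device is gone
            fi
        done
        return 0                                                 # @45
    }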
00:11:34.064 11:21:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:34.065 11:21:52 -- bdev/nbd_common.sh@41 -- # break
00:11:34.065 11:21:52 -- bdev/nbd_common.sh@45 -- # return 0
00:11:34.065 11:21:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:34.065 11:21:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@41 -- # break
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@45 -- # return 0
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:34.322 11:21:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:11:34.580 11:21:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:11:34.580 11:21:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:11:34.580 11:21:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:11:34.580 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@41 -- # break
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@45 -- # return 0
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:34.581 11:21:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@65 -- # echo ''
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@65 -- # true
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@65 -- # count=0
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@66 -- # echo 0
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@122 -- # count=0
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@127 -- # return 0
00:11:34.839 11:21:52 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:34.839 11:21:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
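With all sixteen devices stopped, nbd_get_count (the @61-@66 lines above) re-queries nbd_get_disks and counts /dev/nbd entries; the lone `true` in the trace swallows grep's non-zero exit status on the empty list so the count can still be compared against 0 before nbd_rpc_data_verify, whose trace begins above, re-attaches the devices. Boiled down, the check is:

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)   # || true: empty list is not an error
    [ "$count" -eq 0 ]    # any leftover nbd device would fail the test here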
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@12 -- # local i 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:34.839 11:21:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:35.098 /dev/nbd0 00:11:35.098 11:21:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:35.098 11:21:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:35.098 11:21:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:35.098 11:21:53 -- common/autotest_common.sh@867 -- # local i 00:11:35.098 11:21:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:35.098 11:21:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:35.098 11:21:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:35.098 11:21:53 -- common/autotest_common.sh@871 -- # break 00:11:35.098 11:21:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:35.098 11:21:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:35.098 11:21:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.098 1+0 records in 00:11:35.098 1+0 records out 00:11:35.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275182 s, 14.9 MB/s 00:11:35.098 11:21:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.098 11:21:53 -- common/autotest_common.sh@884 -- # size=4096 00:11:35.098 11:21:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.098 11:21:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:35.098 11:21:53 -- common/autotest_common.sh@887 -- # return 0 00:11:35.098 11:21:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.098 11:21:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:35.098 11:21:53 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:35.357 /dev/nbd1 00:11:35.357 11:21:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:35.357 11:21:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:35.357 11:21:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:35.357 11:21:53 -- common/autotest_common.sh@867 -- # local i 00:11:35.358 11:21:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:35.358 11:21:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:35.358 11:21:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:35.358 11:21:53 -- common/autotest_common.sh@871 -- # break 00:11:35.358 11:21:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:35.358 11:21:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:35.358 11:21:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.358 1+0 records in 00:11:35.358 1+0 records out 00:11:35.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325209 s, 12.6 MB/s 00:11:35.358 11:21:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.358 11:21:53 -- common/autotest_common.sh@884 -- # size=4096 00:11:35.358 11:21:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.358 11:21:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:35.358 11:21:53 -- common/autotest_common.sh@887 -- # return 0 00:11:35.358 11:21:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.358 11:21:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:35.358 11:21:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:35.617 /dev/nbd10 00:11:35.617 11:21:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:35.617 11:21:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:35.617 11:21:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:35.617 11:21:53 -- common/autotest_common.sh@867 -- # local i 00:11:35.617 11:21:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:35.617 11:21:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:35.617 11:21:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:35.617 11:21:53 -- common/autotest_common.sh@871 -- # break 00:11:35.617 11:21:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:35.617 11:21:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:35.617 11:21:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.617 1+0 records in 00:11:35.617 1+0 records out 00:11:35.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442319 s, 9.3 MB/s 00:11:35.617 11:21:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.617 11:21:53 -- common/autotest_common.sh@884 -- # size=4096 00:11:35.617 11:21:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.617 11:21:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:35.617 11:21:53 -- common/autotest_common.sh@887 -- # return 0 00:11:35.617 11:21:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.617 11:21:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
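Each nbd_start_disk above is followed by the same readiness check: waitfornbd (the common/autotest_common.sh@866-887 entries in the trace) polls /proc/partitions until the new device node appears, then reads a single 4 KiB block with O_DIRECT and confirms the read actually returned data. A condensed sketch reconstructed from the xtrace output, not copied from the SPDK repo — the sleep interval is an assumption, and the real helper also retries the dd inside a second 20-iteration loop, which is elided here:

  waitfornbd() {
          local nbd_name=$1
          local i size
          for ((i = 1; i <= 20; i++)); do
                  if grep -q -w "$nbd_name" /proc/partitions; then
                          break
                  fi
                  sleep 0.1
          done
          # a single O_DIRECT read proves the device answers I/O,
          # not merely that its node exists
          dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
          size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
          rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
          [ "$size" != 0 ]
  }

The inverse helper, waitfornbd_exit (the bdev/nbd_common.sh@35-45 entries earlier in the log), is the same 20-iteration poll but waits for the name to disappear from /proc/partitions after nbd_stop_disk.
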
00:11:35.617 11:21:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:35.876 /dev/nbd11 00:11:35.876 11:21:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:35.876 11:21:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:35.876 11:21:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:35.876 11:21:53 -- common/autotest_common.sh@867 -- # local i 00:11:35.876 11:21:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:35.876 11:21:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:35.876 11:21:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:35.876 11:21:53 -- common/autotest_common.sh@871 -- # break 00:11:35.876 11:21:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:35.876 11:21:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:35.876 11:21:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.876 1+0 records in 00:11:35.876 1+0 records out 00:11:35.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410489 s, 10.0 MB/s 00:11:35.876 11:21:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.876 11:21:53 -- common/autotest_common.sh@884 -- # size=4096 00:11:35.876 11:21:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.876 11:21:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:35.876 11:21:54 -- common/autotest_common.sh@887 -- # return 0 00:11:35.876 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.876 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:35.877 11:21:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:36.137 /dev/nbd12 00:11:36.137 11:21:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:36.137 11:21:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:36.137 11:21:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:36.137 11:21:54 -- common/autotest_common.sh@867 -- # local i 00:11:36.137 11:21:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:36.137 11:21:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:36.137 11:21:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:36.137 11:21:54 -- common/autotest_common.sh@871 -- # break 00:11:36.137 11:21:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:36.137 11:21:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:36.137 11:21:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.137 1+0 records in 00:11:36.137 1+0 records out 00:11:36.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342025 s, 12.0 MB/s 00:11:36.137 11:21:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.137 11:21:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:36.137 11:21:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.137 11:21:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:36.137 11:21:54 -- common/autotest_common.sh@887 -- # return 0 00:11:36.137 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.137 11:21:54 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.137 11:21:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:36.397 /dev/nbd13 00:11:36.397 11:21:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:36.397 11:21:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:36.397 11:21:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:36.397 11:21:54 -- common/autotest_common.sh@867 -- # local i 00:11:36.397 11:21:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:36.397 11:21:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:36.397 11:21:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:36.397 11:21:54 -- common/autotest_common.sh@871 -- # break 00:11:36.397 11:21:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:36.397 11:21:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:36.397 11:21:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.397 1+0 records in 00:11:36.397 1+0 records out 00:11:36.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387638 s, 10.6 MB/s 00:11:36.397 11:21:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.397 11:21:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:36.397 11:21:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.397 11:21:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:36.397 11:21:54 -- common/autotest_common.sh@887 -- # return 0 00:11:36.397 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.397 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.397 11:21:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:36.657 /dev/nbd14 00:11:36.657 11:21:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:36.657 11:21:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:36.657 11:21:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:36.657 11:21:54 -- common/autotest_common.sh@867 -- # local i 00:11:36.657 11:21:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:36.657 11:21:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:36.657 11:21:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:36.657 11:21:54 -- common/autotest_common.sh@871 -- # break 00:11:36.657 11:21:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:36.657 11:21:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:36.657 11:21:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.657 1+0 records in 00:11:36.657 1+0 records out 00:11:36.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399882 s, 10.2 MB/s 00:11:36.657 11:21:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.657 11:21:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:36.657 11:21:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.657 11:21:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:36.657 11:21:54 -- common/autotest_common.sh@887 -- # return 0 00:11:36.657 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
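The (( i++ )) / (( i < 16 )) bookkeeping repeating through the trace belongs to the driver loop of nbd_start_disks (nbd_common.sh@14-15), which pairs the sixteen bdev names with the sixteen /dev/nbd* nodes positionally and attaches each over the RPC socket. A minimal sketch under the assumption that bdev_list and nbd_list are the two arrays shown in the trace; the rpc.py and socket paths are the ones this job uses:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  for ((i = 0; i < 16; i++)); do
          # export bdev_list[i] as the block device nbd_list[i]
          "$rpc_py" -s "$sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
          waitfornbd "$(basename "${nbd_list[$i]}")"
  done
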
00:11:36.657 11:21:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.657 11:21:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:36.916 /dev/nbd15 00:11:36.916 11:21:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:37.175 11:21:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:37.175 11:21:55 -- common/autotest_common.sh@867 -- # local i 00:11:37.175 11:21:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:37.175 11:21:55 -- common/autotest_common.sh@871 -- # break 00:11:37.175 11:21:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.175 1+0 records in 00:11:37.175 1+0 records out 00:11:37.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542123 s, 7.6 MB/s 00:11:37.175 11:21:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.175 11:21:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:37.175 11:21:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.175 11:21:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:37.175 11:21:55 -- common/autotest_common.sh@887 -- # return 0 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:37.175 /dev/nbd2 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:37.175 11:21:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:37.175 11:21:55 -- common/autotest_common.sh@867 -- # local i 00:11:37.175 11:21:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:37.175 11:21:55 -- common/autotest_common.sh@871 -- # break 00:11:37.175 11:21:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:37.175 11:21:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.175 1+0 records in 00:11:37.175 1+0 records out 00:11:37.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420041 s, 9.8 MB/s 00:11:37.175 11:21:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.175 11:21:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:37.175 11:21:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.175 11:21:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:37.175 11:21:55 -- common/autotest_common.sh@887 -- # return 0 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.175 11:21:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:37.434 /dev/nbd3 00:11:37.435 11:21:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:37.435 11:21:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:37.435 11:21:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:37.435 11:21:55 -- common/autotest_common.sh@867 -- # local i 00:11:37.435 11:21:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:37.435 11:21:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:37.435 11:21:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:37.435 11:21:55 -- common/autotest_common.sh@871 -- # break 00:11:37.435 11:21:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:37.435 11:21:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:37.435 11:21:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.435 1+0 records in 00:11:37.435 1+0 records out 00:11:37.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660959 s, 6.2 MB/s 00:11:37.435 11:21:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.435 11:21:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:37.435 11:21:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.435 11:21:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:37.435 11:21:55 -- common/autotest_common.sh@887 -- # return 0 00:11:37.435 11:21:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.435 11:21:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.435 11:21:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:37.693 /dev/nbd4 00:11:37.693 11:21:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:37.693 11:21:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:37.693 11:21:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:37.693 11:21:55 -- common/autotest_common.sh@867 -- # local i 00:11:37.693 11:21:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:37.693 11:21:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:37.693 11:21:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:37.693 11:21:55 -- common/autotest_common.sh@871 -- # break 00:11:37.693 11:21:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:37.693 11:21:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:37.693 11:21:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.693 1+0 records in 00:11:37.693 1+0 records out 00:11:37.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639815 s, 6.4 MB/s 00:11:37.693 11:21:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.951 11:21:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:37.951 11:21:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.951 11:21:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:37.951 11:21:55 -- common/autotest_common.sh@887 -- # return 0 00:11:37.951 11:21:55 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.951 11:21:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.951 11:21:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:38.210 /dev/nbd5 00:11:38.210 11:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:38.210 11:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:38.210 11:21:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:38.210 11:21:56 -- common/autotest_common.sh@867 -- # local i 00:11:38.210 11:21:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:38.210 11:21:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:38.210 11:21:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:38.210 11:21:56 -- common/autotest_common.sh@871 -- # break 00:11:38.210 11:21:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:38.210 11:21:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:38.210 11:21:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.210 1+0 records in 00:11:38.210 1+0 records out 00:11:38.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679739 s, 6.0 MB/s 00:11:38.210 11:21:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.210 11:21:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:38.210 11:21:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.210 11:21:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:38.210 11:21:56 -- common/autotest_common.sh@887 -- # return 0 00:11:38.210 11:21:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.210 11:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.210 11:21:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:38.470 /dev/nbd6 00:11:38.470 11:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:38.470 11:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:38.470 11:21:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:38.470 11:21:56 -- common/autotest_common.sh@867 -- # local i 00:11:38.470 11:21:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:38.470 11:21:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:38.470 11:21:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:38.470 11:21:56 -- common/autotest_common.sh@871 -- # break 00:11:38.470 11:21:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:38.470 11:21:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:38.470 11:21:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.470 1+0 records in 00:11:38.470 1+0 records out 00:11:38.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662301 s, 6.2 MB/s 00:11:38.470 11:21:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.470 11:21:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:38.470 11:21:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.470 11:21:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:38.470 11:21:56 -- common/autotest_common.sh@887 -- # return 0 00:11:38.470 11:21:56 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.470 11:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.470 11:21:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:38.729 /dev/nbd7 00:11:38.729 11:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:38.729 11:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:38.729 11:21:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:38.729 11:21:56 -- common/autotest_common.sh@867 -- # local i 00:11:38.729 11:21:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:38.729 11:21:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:38.729 11:21:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:38.729 11:21:56 -- common/autotest_common.sh@871 -- # break 00:11:38.729 11:21:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:38.729 11:21:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:38.729 11:21:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.729 1+0 records in 00:11:38.729 1+0 records out 00:11:38.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066782 s, 6.1 MB/s 00:11:38.729 11:21:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.729 11:21:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:38.729 11:21:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.729 11:21:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:38.729 11:21:56 -- common/autotest_common.sh@887 -- # return 0 00:11:38.729 11:21:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.729 11:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.729 11:21:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:38.988 /dev/nbd8 00:11:38.988 11:21:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:38.988 11:21:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:38.988 11:21:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:38.988 11:21:57 -- common/autotest_common.sh@867 -- # local i 00:11:38.988 11:21:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:38.988 11:21:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:38.988 11:21:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:38.988 11:21:57 -- common/autotest_common.sh@871 -- # break 00:11:38.988 11:21:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:38.988 11:21:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:38.988 11:21:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.988 1+0 records in 00:11:38.988 1+0 records out 00:11:38.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785125 s, 5.2 MB/s 00:11:38.988 11:21:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.988 11:21:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:38.988 11:21:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.988 11:21:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:38.988 11:21:57 -- common/autotest_common.sh@887 -- # return 0 00:11:38.988 11:21:57 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.988 11:21:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.988 11:21:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:39.247 /dev/nbd9 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:39.247 11:21:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:39.247 11:21:57 -- common/autotest_common.sh@867 -- # local i 00:11:39.247 11:21:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:39.247 11:21:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:39.247 11:21:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:39.247 11:21:57 -- common/autotest_common.sh@871 -- # break 00:11:39.247 11:21:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:39.247 11:21:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:39.247 11:21:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.247 1+0 records in 00:11:39.247 1+0 records out 00:11:39.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118323 s, 3.5 MB/s 00:11:39.247 11:21:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.247 11:21:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:39.247 11:21:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.247 11:21:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:39.247 11:21:57 -- common/autotest_common.sh@887 -- # return 0 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.247 11:21:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:39.507 11:21:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd0", 00:11:39.507 "bdev_name": "Malloc0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd1", 00:11:39.507 "bdev_name": "Malloc1p0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd10", 00:11:39.507 "bdev_name": "Malloc1p1" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd11", 00:11:39.507 "bdev_name": "Malloc2p0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd12", 00:11:39.507 "bdev_name": "Malloc2p1" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd13", 00:11:39.507 "bdev_name": "Malloc2p2" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd14", 00:11:39.507 "bdev_name": "Malloc2p3" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd15", 00:11:39.507 "bdev_name": "Malloc2p4" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd2", 00:11:39.507 "bdev_name": "Malloc2p5" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd3", 00:11:39.507 "bdev_name": "Malloc2p6" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd4", 00:11:39.507 "bdev_name": "Malloc2p7" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd5", 00:11:39.507 "bdev_name": 
"TestPT" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd6", 00:11:39.507 "bdev_name": "raid0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd7", 00:11:39.507 "bdev_name": "concat0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd8", 00:11:39.507 "bdev_name": "raid1" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd9", 00:11:39.507 "bdev_name": "AIO0" 00:11:39.507 } 00:11:39.507 ]' 00:11:39.507 11:21:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd0", 00:11:39.507 "bdev_name": "Malloc0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd1", 00:11:39.507 "bdev_name": "Malloc1p0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd10", 00:11:39.507 "bdev_name": "Malloc1p1" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd11", 00:11:39.507 "bdev_name": "Malloc2p0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd12", 00:11:39.507 "bdev_name": "Malloc2p1" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd13", 00:11:39.507 "bdev_name": "Malloc2p2" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd14", 00:11:39.507 "bdev_name": "Malloc2p3" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd15", 00:11:39.507 "bdev_name": "Malloc2p4" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd2", 00:11:39.507 "bdev_name": "Malloc2p5" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd3", 00:11:39.507 "bdev_name": "Malloc2p6" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd4", 00:11:39.507 "bdev_name": "Malloc2p7" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd5", 00:11:39.507 "bdev_name": "TestPT" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd6", 00:11:39.507 "bdev_name": "raid0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd7", 00:11:39.507 "bdev_name": "concat0" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd8", 00:11:39.507 "bdev_name": "raid1" 00:11:39.507 }, 00:11:39.507 { 00:11:39.507 "nbd_device": "/dev/nbd9", 00:11:39.507 "bdev_name": "AIO0" 00:11:39.507 } 00:11:39.507 ]' 00:11:39.507 11:21:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:39.507 11:21:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:39.507 /dev/nbd1 00:11:39.507 /dev/nbd10 00:11:39.507 /dev/nbd11 00:11:39.507 /dev/nbd12 00:11:39.507 /dev/nbd13 00:11:39.507 /dev/nbd14 00:11:39.507 /dev/nbd15 00:11:39.507 /dev/nbd2 00:11:39.507 /dev/nbd3 00:11:39.507 /dev/nbd4 00:11:39.507 /dev/nbd5 00:11:39.507 /dev/nbd6 00:11:39.507 /dev/nbd7 00:11:39.507 /dev/nbd8 00:11:39.507 /dev/nbd9' 00:11:39.507 11:21:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:39.507 /dev/nbd1 00:11:39.507 /dev/nbd10 00:11:39.507 /dev/nbd11 00:11:39.507 /dev/nbd12 00:11:39.507 /dev/nbd13 00:11:39.507 /dev/nbd14 00:11:39.507 /dev/nbd15 00:11:39.507 /dev/nbd2 00:11:39.508 /dev/nbd3 00:11:39.508 /dev/nbd4 00:11:39.508 /dev/nbd5 00:11:39.508 /dev/nbd6 00:11:39.508 /dev/nbd7 00:11:39.508 /dev/nbd8 00:11:39.508 /dev/nbd9' 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@65 -- # count=16 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@95 -- # count=16 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:39.508 11:21:57 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:39.508 256+0 records in 00:11:39.508 256+0 records out 00:11:39.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065411 s, 160 MB/s 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.508 11:21:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:39.767 256+0 records in 00:11:39.767 256+0 records out 00:11:39.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150022 s, 7.0 MB/s 00:11:39.767 11:21:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.767 11:21:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:39.767 256+0 records in 00:11:39.767 256+0 records out 00:11:39.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16211 s, 6.5 MB/s 00:11:39.767 11:21:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.767 11:21:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:40.026 256+0 records in 00:11:40.026 256+0 records out 00:11:40.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164052 s, 6.4 MB/s 00:11:40.026 11:21:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.026 11:21:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:40.285 256+0 records in 00:11:40.285 256+0 records out 00:11:40.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155372 s, 6.7 MB/s 00:11:40.285 11:21:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.285 11:21:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:40.285 256+0 records in 00:11:40.285 256+0 records out 00:11:40.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148068 s, 7.1 MB/s 00:11:40.285 11:21:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.285 11:21:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:40.545 256+0 records in 00:11:40.545 256+0 records out 00:11:40.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162109 s, 6.5 MB/s 00:11:40.545 11:21:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.545 11:21:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:40.545 256+0 records in 
00:11:40.545 256+0 records out 00:11:40.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148107 s, 7.1 MB/s 00:11:40.545 11:21:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.545 11:21:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:40.804 256+0 records in 00:11:40.804 256+0 records out 00:11:40.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150286 s, 7.0 MB/s 00:11:40.804 11:21:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.804 11:21:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:41.063 256+0 records in 00:11:41.063 256+0 records out 00:11:41.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16578 s, 6.3 MB/s 00:11:41.063 11:21:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.063 11:21:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:41.063 256+0 records in 00:11:41.063 256+0 records out 00:11:41.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155154 s, 6.8 MB/s 00:11:41.063 11:21:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.063 11:21:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:41.323 256+0 records in 00:11:41.323 256+0 records out 00:11:41.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143355 s, 7.3 MB/s 00:11:41.323 11:21:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.323 11:21:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:41.582 256+0 records in 00:11:41.582 256+0 records out 00:11:41.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153519 s, 6.8 MB/s 00:11:41.582 11:21:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.582 11:21:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:41.582 256+0 records in 00:11:41.582 256+0 records out 00:11:41.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153077 s, 6.8 MB/s 00:11:41.582 11:21:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.582 11:21:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:41.841 256+0 records in 00:11:41.841 256+0 records out 00:11:41.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159789 s, 6.6 MB/s 00:11:41.841 11:21:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.841 11:21:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:41.841 256+0 records in 00:11:41.841 256+0 records out 00:11:41.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163074 s, 6.4 MB/s 00:11:41.841 11:22:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.841 11:22:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:42.100 256+0 records in 00:11:42.100 256+0 records out 00:11:42.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.25691 s, 4.1 MB/s 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.100 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.359 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@51 -- # local i 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.360 11:22:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@41 -- # break 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.619 11:22:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@41 -- # break 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.879 11:22:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:43.138 
11:22:01 -- bdev/nbd_common.sh@41 -- # break 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.138 11:22:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@41 -- # break 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.397 11:22:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@41 -- # break 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.656 11:22:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:43.915 11:22:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:43.915 11:22:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:43.915 11:22:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@41 -- # break 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.916 11:22:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:44.174 11:22:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@41 -- # break 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.175 11:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:44.434 11:22:02 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@41 -- # break 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.434 11:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:44.693 11:22:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:44.693 11:22:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:44.693 11:22:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@41 -- # break 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@41 -- # break 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.694 11:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@41 -- # break 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.040 11:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@41 -- # break 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@45 -- # return 0 
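Taken together, the nbd_dd_data_verify write pass, the verify pass, and the teardown traced above reduce to four steps: fill a 1 MiB temp file from /dev/urandom once (the 160 MB/s figure, since that write never leaves the page cache), stream it to the head of each exported device with O_DIRECT (hence every device settling around 6-7 MB/s), compare each device byte-for-byte against the pattern with cmp, then detach everything and confirm the count drops back to zero. An outline reusing rpc_py, sock, nbd_list, and waitfornbd_exit from the sketches above; the `|| true` mirrors the `-- # true` in the trace, because grep -c exits non-zero when it counts nothing:

  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  # write pass: one 1 MiB random pattern, copied to every device
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for i in "${nbd_list[@]}"; do
          dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
  done
  # verify pass: first 1 MiB of each device must match the pattern exactly
  for i in "${nbd_list[@]}"; do
          cmp -b -n 1M "$tmp_file" "$i"
  done
  rm "$tmp_file"
  # teardown: detach each device, wait for it to leave /proc/partitions
  for i in "${nbd_list[@]}"; do
          "$rpc_py" -s "$sock" nbd_stop_disk "$i"
          waitfornbd_exit "$(basename "$i")"
  done
  # final assertion: nbd_get_disks must report an empty list again
  count=$("$rpc_py" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
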
00:11:45.319 11:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.319 11:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:45.578 11:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:45.578 11:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:45.578 11:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:45.578 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.578 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.578 11:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:45.579 11:22:03 -- bdev/nbd_common.sh@41 -- # break 00:11:45.579 11:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.579 11:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.579 11:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:45.579 11:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@41 -- # break 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.838 11:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@41 -- # break 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.838 11:22:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@41 -- # break 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.098 11:22:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:11:46.358 11:22:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@65 -- # true 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@65 -- # count=0 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@104 -- # count=0 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@109 -- # return 0 00:11:46.358 11:22:04 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:46.358 11:22:04 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:46.618 malloc_lvol_verify 00:11:46.618 11:22:04 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:46.878 cf7ca32b-e524-4d98-a43b-d47a8bc1de56 00:11:46.878 11:22:04 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:47.139 3cbb49b1-b708-4d18-bc80-b7632a7ccbbb 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:47.139 /dev/nbd0 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:47.139 mke2fs 1.47.0 (5-Feb-2023) 00:11:47.139 00:11:47.139 Filesystem too small for a journal 00:11:47.139 Discarding device blocks: 0/1024 done 00:11:47.139 Creating filesystem with 1024 4k blocks and 1024 inodes 00:11:47.139 00:11:47.139 Allocating group tables: 0/1 done 00:11:47.139 Writing inode tables: 0/1 done 00:11:47.139 Writing superblocks and filesystem accounting information: 0/1 done 00:11:47.139 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@51 -- # local i 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.139 11:22:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:47.398 11:22:05 -- 
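The nbd_with_lvol_verify sequence in the records above condenses to a short RPC script: create a 16 MiB malloc bdev, build an lvolstore on it, expose a 4 MiB lvol over /dev/nbd0, and confirm that mkfs.ext4 succeeds on it (the "Filesystem too small for a journal" notice is expected at this size). In outline, with $rpc_py again standing in for the full rpc.py invocation:

    $rpc_py bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc_py bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc_py bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
    $rpc_py nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    mkfs_ret=$?                                               # 0 in the run above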
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@41 -- # break 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:47.398 11:22:05 -- bdev/nbd_common.sh@147 -- # return 0 00:11:47.398 11:22:05 -- bdev/blockdev.sh@324 -- # killprocess 76244 00:11:47.398 11:22:05 -- common/autotest_common.sh@936 -- # '[' -z 76244 ']' 00:11:47.398 11:22:05 -- common/autotest_common.sh@940 -- # kill -0 76244 00:11:47.398 11:22:05 -- common/autotest_common.sh@941 -- # uname 00:11:47.398 11:22:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:47.398 11:22:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76244 00:11:47.398 killing process with pid 76244 00:11:47.398 11:22:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:47.398 11:22:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:47.398 11:22:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76244' 00:11:47.398 11:22:05 -- common/autotest_common.sh@955 -- # kill 76244 00:11:47.398 11:22:05 -- common/autotest_common.sh@960 -- # wait 76244 00:11:47.966 11:22:05 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:47.966 00:11:47.966 real 0m22.542s 00:11:47.966 user 0m31.842s 00:11:47.966 sys 0m8.852s 00:11:47.966 11:22:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:47.966 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:11:47.966 ************************************ 00:11:47.966 END TEST bdev_nbd 00:11:47.966 ************************************ 00:11:47.966 11:22:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:47.966 11:22:05 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:11:47.966 11:22:05 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:11:47.966 11:22:05 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:47.966 11:22:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:47.966 11:22:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.966 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:11:47.966 ************************************ 00:11:47.966 START TEST bdev_fio 00:11:47.966 ************************************ 00:11:47.966 11:22:05 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:11:47.966 11:22:05 -- bdev/blockdev.sh@329 -- # local env_context 00:11:47.966 11:22:05 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:47.966 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:47.966 11:22:05 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:47.966 11:22:05 -- bdev/blockdev.sh@337 -- # echo '' 00:11:47.966 11:22:05 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:47.966 11:22:05 -- bdev/blockdev.sh@337 -- # env_context= 00:11:47.966 11:22:05 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:47.966 11:22:05 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:47.966 11:22:05 -- common/autotest_common.sh@1270 -- # local workload=verify 00:11:47.966 11:22:05 -- common/autotest_common.sh@1271 -- # local 
bdev_type=AIO 00:11:47.966 11:22:05 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:47.966 11:22:05 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:47.966 11:22:05 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:47.966 11:22:05 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:11:47.966 11:22:05 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:47.966 11:22:05 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:47.966 11:22:05 -- common/autotest_common.sh@1290 -- # cat 00:11:47.967 11:22:05 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:11:47.967 11:22:05 -- common/autotest_common.sh@1303 -- # cat 00:11:47.967 11:22:05 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:11:47.967 11:22:05 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:11:47.967 11:22:06 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:47.967 11:22:06 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo 
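Each @339–@341 pair above (continuing just below through TestPT, raid0, concat0, raid1, and AIO0) emits one fio job section per bdev. Reconstructed as a loop, assuming the echoes are appended to the $fio_config file created by fio_config_gen earlier:

    for b in "${bdevs_name[@]}"; do
        {
            echo "[job_$b]"
            echo "filename=$b"
        } >> "$fio_config"   # /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    done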
'[job_Malloc2p7]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:11:47.967 11:22:06 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.967 11:22:06 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:11:47.967 11:22:06 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:47.967 11:22:06 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.967 11:22:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:47.967 11:22:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.967 11:22:06 -- common/autotest_common.sh@10 -- # set +x 00:11:47.967 ************************************ 00:11:47.967 START TEST bdev_fio_rw_verify 00:11:47.967 ************************************ 00:11:47.967 11:22:06 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.967 11:22:06 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.967 11:22:06 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:47.967 11:22:06 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:47.967 11:22:06 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:47.967 11:22:06 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:47.967 11:22:06 -- common/autotest_common.sh@1330 -- # shift 00:11:47.967 11:22:06 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:47.967 11:22:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:47.967 11:22:06 -- common/autotest_common.sh@1334 -- # 
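The @1326–@1341 sequence straddling this point decides which sanitizer runtime to preload before launching fio, so that the ASan-instrumented SPDK plugin can be loaded into the uninstrumented fio binary. In outline (sanitizer names from the @1328 record; in this run the probe resolves to /lib/x86_64-linux-gnu/libasan.so.8):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # the third ldd column is the resolved library path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # preload the runtime together with the plugin (abridged option list;
    # the full fio command line is echoed at @1341)
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev "$fio_config"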
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:47.967 11:22:06 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:47.967 11:22:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:47.967 11:22:06 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:11:47.967 11:22:06 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:11:47.967 11:22:06 -- common/autotest_common.sh@1336 -- # break 00:11:47.967 11:22:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:47.967 11:22:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:48.226 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:48.226 fio-3.35 00:11:48.226 Starting 16 threads 00:12:00.435 00:12:00.435 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=77343: Tue Nov 26 11:22:17 2024 00:12:00.435 read: IOPS=82.2k, BW=321MiB/s (337MB/s)(3214MiB/10003msec) 00:12:00.435 slat (usec): min=2, max=11317, avg=33.76, stdev=229.03 00:12:00.435 clat (usec): min=10, max=14725, avg=267.45, stdev=662.60 00:12:00.435 lat (usec): min=27, max=14747, avg=301.21, stdev=699.18 00:12:00.435 clat percentiles 
(usec): 00:12:00.435 | 50.000th=[ 159], 99.000th=[ 4228], 99.900th=[ 7177], 99.990th=[ 8356], 00:12:00.435 | 99.999th=[11600] 00:12:00.435 write: IOPS=132k, BW=514MiB/s (539MB/s)(5088MiB/9891msec); 0 zone resets 00:12:00.435 slat (usec): min=5, max=24097, avg=60.08, stdev=321.77 00:12:00.435 clat (usec): min=9, max=24342, avg=349.66, stdev=782.05 00:12:00.435 lat (usec): min=40, max=24369, avg=409.74, stdev=842.46 00:12:00.435 clat percentiles (usec): 00:12:00.435 | 50.000th=[ 210], 99.000th=[ 4293], 99.900th=[ 7373], 99.990th=[12649], 00:12:00.435 | 99.999th=[20055] 00:12:00.435 bw ( KiB/s): min=357988, max=806343, per=98.89%, avg=520931.89, stdev=7975.02, samples=304 00:12:00.435 iops : min=89496, max=201584, avg=130232.53, stdev=1993.75, samples=304 00:12:00.435 lat (usec) : 10=0.01%, 20=0.01%, 50=0.75%, 100=15.26%, 250=57.10% 00:12:00.435 lat (usec) : 500=22.60%, 750=1.05%, 1000=0.11% 00:12:00.435 lat (msec) : 2=0.11%, 4=1.12%, 10=1.86%, 20=0.03%, 50=0.01% 00:12:00.435 cpu : usr=58.09%, sys=2.28%, ctx=230962, majf=0, minf=117865 00:12:00.435 IO depths : 1=11.3%, 2=23.9%, 4=51.8%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.435 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.435 issued rwts: total=822686,1302536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:00.435 00:12:00.435 Run status group 0 (all jobs): 00:12:00.435 READ: bw=321MiB/s (337MB/s), 321MiB/s-321MiB/s (337MB/s-337MB/s), io=3214MiB (3370MB), run=10003-10003msec 00:12:00.435 WRITE: bw=514MiB/s (539MB/s), 514MiB/s-514MiB/s (539MB/s-539MB/s), io=5088MiB (5335MB), run=9891-9891msec 00:12:00.435 ----------------------------------------------------- 00:12:00.435 Suppressions used: 00:12:00.435 count bytes template 00:12:00.435 16 140 /usr/src/fio/parse.c 00:12:00.435 11853 1137888 /usr/src/fio/iolog.c 00:12:00.435 1 904 libcrypto.so 00:12:00.435 ----------------------------------------------------- 00:12:00.435 00:12:00.435 00:12:00.435 real 0m11.638s 00:12:00.435 user 1m35.660s 00:12:00.435 sys 0m4.373s 00:12:00.435 11:22:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:00.435 ************************************ 00:12:00.435 END TEST bdev_fio_rw_verify 00:12:00.435 ************************************ 00:12:00.435 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:12:00.435 11:22:17 -- bdev/blockdev.sh@348 -- # rm -f 00:12:00.435 11:22:17 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:00.435 11:22:17 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:00.435 11:22:17 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:00.435 11:22:17 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:00.435 11:22:17 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:00.435 11:22:17 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:00.435 11:22:17 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:00.435 11:22:17 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:00.435 11:22:17 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:00.435 11:22:17 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:00.435 11:22:17 -- common/autotest_common.sh@1288 -- # touch 
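As a sanity check on the verify numbers above: 822686 reads at bs=4k is 822686 × 4096 B ≈ 3370 MB, matching io=3214MiB (3370MB), and 1302536 writes ≈ 5335 MB, matching io=5088MiB (5335MB); dividing by the 10003 ms and 9891 ms runtimes reproduces the reported 321 MiB/s and 514 MiB/s.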
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:00.435 11:22:17 -- common/autotest_common.sh@1290 -- # cat 00:12:00.435 11:22:17 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:00.435 11:22:17 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:00.435 11:22:17 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:00.435 11:22:17 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:00.436 11:22:17 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "698f39e3-8c9d-489c-a33d-76556f218d11"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "698f39e3-8c9d-489c-a33d-76556f218d11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f65310d7-4c7e-5773-8c4c-818569d85775"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f65310d7-4c7e-5773-8c4c-818569d85775",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ee03f5e8-7dd8-51a5-abf7-9c199352e96b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ee03f5e8-7dd8-51a5-abf7-9c199352e96b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b0bb2fdd-842b-5357-b5e7-ebfada01a24c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b0bb2fdd-842b-5357-b5e7-ebfada01a24c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' 
' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ba8f77a7-b711-59fd-8d1e-0247d5a776d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ba8f77a7-b711-59fd-8d1e-0247d5a776d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c25b7ed7-1f61-5e07-8b18-b262574c841e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c25b7ed7-1f61-5e07-8b18-b262574c841e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "81e2bc52-5c0e-5dc4-b818-4def4147a44d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81e2bc52-5c0e-5dc4-b818-4def4147a44d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f3bb61e1-cc9c-53ce-a16c-0c508a9123e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f3bb61e1-cc9c-53ce-a16c-0c508a9123e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "bc7a501b-7e02-520e-a1be-c3f78c08b116"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc7a501b-7e02-520e-a1be-c3f78c08b116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ccfadf2d-05f0-51e6-8101-85edaf5d6539"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ccfadf2d-05f0-51e6-8101-85edaf5d6539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "cb93f81f-b78e-57e7-86a6-06ac30ed1124"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cb93f81f-b78e-57e7-86a6-06ac30ed1124",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "69458d7f-179c-5546-b57d-84c3f7654c90"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "69458d7f-179c-5546-b57d-84c3f7654c90",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "bd80c3f6-c441-40d8-bd1d-553fb6950290"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bd80c3f6-c441-40d8-bd1d-553fb6950290",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bd80c3f6-c441-40d8-bd1d-553fb6950290",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "fab4c046-871b-4594-887d-68fdabfdc4c8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b55fac00-afc8-45c9-832a-e6fbcbc1ceac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "d01fb062-e4c2-455f-9287-2928c1056004"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d01fb062-e4c2-455f-9287-2928c1056004",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d01fb062-e4c2-455f-9287-2928c1056004",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "75561354-896f-4cc6-8fd7-61d4c81ca491",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "caddba2b-094c-4f18-bb67-eb761d400309",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b3b18259-0a4e-4001-8f63-ea5f346cea0b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b3b18259-0a4e-4001-8f63-ea5f346cea0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b3b18259-0a4e-4001-8f63-ea5f346cea0b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "70c1a753-efac-4c8f-afa4-1be39a87fc17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3211baa6-5777-43f0-889e-0d6ea241dc8b",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "3c669485-0d29-408e-b380-c0b8d28a9b96"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "3c669485-0d29-408e-b380-c0b8d28a9b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:00.436 11:22:17 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:00.436 Malloc1p0 00:12:00.436 Malloc1p1 00:12:00.436 Malloc2p0 00:12:00.436 Malloc2p1 00:12:00.436 Malloc2p2 00:12:00.436 Malloc2p3 00:12:00.436 Malloc2p4 00:12:00.436 Malloc2p5 00:12:00.436 Malloc2p6 00:12:00.436 Malloc2p7 00:12:00.436 TestPT 00:12:00.436 raid0 00:12:00.436 concat0 ]] 00:12:00.436 11:22:17 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "698f39e3-8c9d-489c-a33d-76556f218d11"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "698f39e3-8c9d-489c-a33d-76556f218d11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f65310d7-4c7e-5773-8c4c-818569d85775"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f65310d7-4c7e-5773-8c4c-818569d85775",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ee03f5e8-7dd8-51a5-abf7-9c199352e96b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ee03f5e8-7dd8-51a5-abf7-9c199352e96b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b0bb2fdd-842b-5357-b5e7-ebfada01a24c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b0bb2fdd-842b-5357-b5e7-ebfada01a24c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ba8f77a7-b711-59fd-8d1e-0247d5a776d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ba8f77a7-b711-59fd-8d1e-0247d5a776d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c25b7ed7-1f61-5e07-8b18-b262574c841e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c25b7ed7-1f61-5e07-8b18-b262574c841e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "81e2bc52-5c0e-5dc4-b818-4def4147a44d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81e2bc52-5c0e-5dc4-b818-4def4147a44d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f3bb61e1-cc9c-53ce-a16c-0c508a9123e8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f3bb61e1-cc9c-53ce-a16c-0c508a9123e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "bc7a501b-7e02-520e-a1be-c3f78c08b116"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc7a501b-7e02-520e-a1be-c3f78c08b116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "ccfadf2d-05f0-51e6-8101-85edaf5d6539"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ccfadf2d-05f0-51e6-8101-85edaf5d6539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "cb93f81f-b78e-57e7-86a6-06ac30ed1124"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cb93f81f-b78e-57e7-86a6-06ac30ed1124",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "69458d7f-179c-5546-b57d-84c3f7654c90"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "69458d7f-179c-5546-b57d-84c3f7654c90",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "bd80c3f6-c441-40d8-bd1d-553fb6950290"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bd80c3f6-c441-40d8-bd1d-553fb6950290",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bd80c3f6-c441-40d8-bd1d-553fb6950290",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "fab4c046-871b-4594-887d-68fdabfdc4c8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b55fac00-afc8-45c9-832a-e6fbcbc1ceac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "d01fb062-e4c2-455f-9287-2928c1056004"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d01fb062-e4c2-455f-9287-2928c1056004",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d01fb062-e4c2-455f-9287-2928c1056004",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "75561354-896f-4cc6-8fd7-61d4c81ca491",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "caddba2b-094c-4f18-bb67-eb761d400309",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b3b18259-0a4e-4001-8f63-ea5f346cea0b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b3b18259-0a4e-4001-8f63-ea5f346cea0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b3b18259-0a4e-4001-8f63-ea5f346cea0b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "70c1a753-efac-4c8f-afa4-1be39a87fc17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3211baa6-5777-43f0-889e-0d6ea241dc8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "3c669485-0d29-408e-b380-c0b8d28a9b96"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "3c669485-0d29-408e-b380-c0b8d28a9b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo 
filename=Malloc2p2 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:00.437 11:22:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:00.437 11:22:17 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:00.437 11:22:17 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:00.437 11:22:17 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:00.437 11:22:17 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:00.437 11:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:00.437 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:12:00.437 ************************************ 00:12:00.437 START TEST bdev_fio_trim 00:12:00.438 ************************************ 00:12:00.438 11:22:17 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:00.438 11:22:17 -- common/autotest_common.sh@1345 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:00.438 11:22:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:00.438 11:22:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:00.438 11:22:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:00.438 11:22:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:00.438 11:22:17 -- common/autotest_common.sh@1330 -- # shift 00:12:00.438 11:22:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:00.438 11:22:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:00.438 11:22:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:00.438 11:22:17 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:00.438 11:22:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:00.438 11:22:17 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:12:00.438 11:22:17 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:12:00.438 11:22:17 -- common/autotest_common.sh@1336 -- # break 00:12:00.438 11:22:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:00.438 11:22:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:00.438 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:00.438 fio-3.35 00:12:00.438 Starting 14 threads 00:12:10.435 00:12:10.435 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=77520: Tue Nov 26 11:22:28 2024 00:12:10.435 write: IOPS=170k, BW=664MiB/s (696MB/s)(6636MiB/10001msec); 0 zone resets 00:12:10.435 slat (usec): min=2, max=13040, avg=29.84, stdev=188.46 00:12:10.435 clat (usec): min=25, max=13208, avg=210.77, stdev=500.21 00:12:10.435 lat (usec): min=36, max=13225, avg=240.61, stdev=533.16 00:12:10.435 clat percentiles (usec): 00:12:10.435 | 50.000th=[ 139], 99.000th=[ 4080], 99.900th=[ 6128], 99.990th=[ 7242], 00:12:10.435 | 99.999th=[ 7504] 00:12:10.435 bw ( KiB/s): min=492091, max=944672, per=100.00%, avg=679703.21, stdev=11090.42, samples=266 00:12:10.435 iops : min=123022, max=236168, avg=169925.42, stdev=2772.60, samples=266 00:12:10.435 trim: IOPS=170k, BW=664MiB/s (696MB/s)(6636MiB/10001msec); 0 zone resets 00:12:10.435 slat (usec): min=4, max=7404, avg=20.04, stdev=155.31 00:12:10.435 clat (usec): min=4, max=13226, avg=221.83, stdev=517.13 00:12:10.435 lat (usec): min=14, max=13235, avg=241.87, stdev=539.47 00:12:10.435 clat percentiles (usec): 00:12:10.435 | 50.000th=[ 155], 99.000th=[ 4113], 99.900th=[ 6194], 99.990th=[ 7242], 00:12:10.435 | 99.999th=[ 7570] 00:12:10.435 bw ( KiB/s): min=492091, max=944672, per=100.00%, avg=679703.21, stdev=11090.15, samples=266 00:12:10.435 iops : min=123022, max=236168, avg=169925.42, stdev=2772.54, samples=266 00:12:10.435 lat (usec) : 10=0.15%, 20=0.36%, 50=1.26%, 100=17.14%, 250=75.14% 00:12:10.435 lat (usec) : 500=4.10%, 750=0.19%, 1000=0.01% 00:12:10.435 lat (msec) : 2=0.02%, 4=0.55%, 10=1.07%, 20=0.01% 00:12:10.435 cpu : usr=68.99%, sys=0.65%, ctx=153946, majf=0, minf=24044 00:12:10.435 IO depths : 1=12.2%, 2=24.5%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.435 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.435 issued rwts: total=0,1698927,1698931,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.435 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:10.435 00:12:10.435 Run status group 0 (all jobs): 00:12:10.435 WRITE: bw=664MiB/s (696MB/s), 664MiB/s-664MiB/s (696MB/s-696MB/s), io=6636MiB (6959MB), run=10001-10001msec 00:12:10.435 TRIM: bw=664MiB/s (696MB/s), 664MiB/s-664MiB/s (696MB/s-696MB/s), io=6636MiB (6959MB), run=10001-10001msec 00:12:11.004 ----------------------------------------------------- 00:12:11.004 Suppressions used: 00:12:11.004 count bytes template 00:12:11.004 14 129 /usr/src/fio/parse.c 00:12:11.004 1 904 libcrypto.so 00:12:11.004 ----------------------------------------------------- 00:12:11.004 00:12:11.004 00:12:11.004 real 0m11.329s 00:12:11.004 user 1m38.745s 00:12:11.004 sys 0m1.746s 00:12:11.004 11:22:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.004 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:12:11.004 ************************************ 00:12:11.004 END TEST bdev_fio_trim 00:12:11.004 ************************************ 00:12:11.004 11:22:29 -- bdev/blockdev.sh@366 -- # rm -f 00:12:11.004 11:22:29 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:11.004 
/home/vagrant/spdk_repo/spdk 00:12:11.004 11:22:29 -- bdev/blockdev.sh@368 -- # popd 00:12:11.004 11:22:29 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:11.004 00:12:11.004 real 0m23.233s 00:12:11.004 user 3m14.503s 00:12:11.004 sys 0m6.253s 00:12:11.004 11:22:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.004 ************************************ 00:12:11.004 END TEST bdev_fio 00:12:11.004 ************************************ 00:12:11.004 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:12:11.263 11:22:29 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:11.263 11:22:29 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:11.263 11:22:29 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:11.263 11:22:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.263 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:12:11.263 ************************************ 00:12:11.263 START TEST bdev_verify 00:12:11.263 ************************************ 00:12:11.263 11:22:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:11.263 [2024-11-26 11:22:29.306990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:11.263 [2024-11-26 11:22:29.307154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77680 ] 00:12:11.263 [2024-11-26 11:22:29.461355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:11.263 [2024-11-26 11:22:29.494098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.263 [2024-11-26 11:22:29.494159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.522 [2024-11-26 11:22:29.595509] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:11.522 [2024-11-26 11:22:29.595632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:11.522 [2024-11-26 11:22:29.603463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:11.522 [2024-11-26 11:22:29.603542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:11.522 [2024-11-26 11:22:29.611491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:11.522 [2024-11-26 11:22:29.611545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:11.522 [2024-11-26 11:22:29.611580] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:11.522 [2024-11-26 11:22:29.684397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:11.522 [2024-11-26 11:22:29.684510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.522 [2024-11-26 11:22:29.684543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:11.522 [2024-11-26 11:22:29.684558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.522 [2024-11-26 
11:22:29.687232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.522 [2024-11-26 11:22:29.687334] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:11.781 Running I/O for 5 seconds... 00:12:17.051 00:12:17.051 Latency(us) 00:12:17.051 [2024-11-26T11:22:35.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x1000 00:12:17.051 Malloc0 : 5.15 1705.47 6.66 0.00 0.00 74222.22 2159.71 141081.13 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x1000 length 0x1000 00:12:17.051 Malloc0 : 5.16 1676.29 6.55 0.00 0.00 75833.35 2115.03 207808.70 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x800 00:12:17.051 Malloc1p0 : 5.18 1178.21 4.60 0.00 0.00 108025.82 4259.84 129642.12 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x800 length 0x800 00:12:17.051 Malloc1p0 : 5.17 1165.86 4.55 0.00 0.00 108974.26 4349.21 123922.62 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x800 00:12:17.051 Malloc1p1 : 5.18 1177.76 4.60 0.00 0.00 107865.09 3991.74 125829.12 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x800 length 0x800 00:12:17.051 Malloc1p1 : 5.17 1165.59 4.55 0.00 0.00 108834.39 4051.32 119632.99 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x200 00:12:17.051 Malloc2p0 : 5.18 1177.43 4.60 0.00 0.00 107711.64 4051.32 122016.12 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x200 length 0x200 00:12:17.051 Malloc2p0 : 5.17 1165.29 4.55 0.00 0.00 108652.31 4081.11 116296.61 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x200 00:12:17.051 Malloc2p1 : 5.18 1177.16 4.60 0.00 0.00 107582.96 3813.00 118203.11 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x200 length 0x200 00:12:17.051 Malloc2p1 : 5.17 1165.00 4.55 0.00 0.00 108531.28 3753.43 113436.86 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x200 00:12:17.051 Malloc2p2 : 5.19 1176.85 4.60 0.00 0.00 107452.82 3753.43 114866.73 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x200 length 0x200 00:12:17.051 Malloc2p2 : 5.17 1164.71 4.55 0.00 0.00 108381.69 3664.06 109623.85 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p3 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.051 Verification LBA range: start 0x0 length 0x200 00:12:17.051 Malloc2p3 : 5.19 1176.47 4.60 0.00 0.00 107317.57 3783.21 111053.73 00:12:17.051 [2024-11-26T11:22:35.281Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x200 length 0x200 00:12:17.052 Malloc2p3 : 5.17 1164.42 4.55 0.00 0.00 108250.36 3678.95 106764.10 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x200 00:12:17.052 Malloc2p4 : 5.19 1175.80 4.59 0.00 0.00 107189.00 3842.79 107717.35 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x200 length 0x200 00:12:17.052 Malloc2p4 : 5.17 1164.13 4.55 0.00 0.00 108128.76 3783.21 102951.10 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x200 00:12:17.052 Malloc2p5 : 5.19 1175.17 4.59 0.00 0.00 107070.19 3366.17 104380.97 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x200 length 0x200 00:12:17.052 Malloc2p5 : 5.17 1163.86 4.55 0.00 0.00 107965.13 3381.06 100091.35 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x200 00:12:17.052 Malloc2p6 : 5.19 1174.62 4.59 0.00 0.00 106963.17 3470.43 101521.22 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x200 length 0x200 00:12:17.052 Malloc2p6 : 5.18 1163.58 4.55 0.00 0.00 107873.70 3470.43 96754.97 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x200 00:12:17.052 Malloc2p7 : 5.20 1174.34 4.59 0.00 0.00 106822.46 3902.37 98661.47 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x200 length 0x200 00:12:17.052 Malloc2p7 : 5.18 1163.29 4.54 0.00 0.00 107730.80 3872.58 92941.96 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x1000 00:12:17.052 TestPT : 5.20 1161.41 4.54 0.00 0.00 107809.18 8102.63 101997.85 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x1000 length 0x1000 00:12:17.052 TestPT : 5.18 1132.25 4.42 0.00 0.00 110437.05 7864.32 162052.65 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x2000 00:12:17.052 raid0 : 5.20 1173.88 4.59 0.00 0.00 106450.57 3723.64 86745.83 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x2000 length 0x2000 00:12:17.052 raid0 : 5.18 1162.61 4.54 0.00 0.00 107380.69 3559.80 81502.95 00:12:17.052 
[2024-11-26T11:22:35.282Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x2000 00:12:17.052 concat0 : 5.20 1173.65 4.58 0.00 0.00 106303.67 3783.21 82932.83 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x2000 length 0x2000 00:12:17.052 concat0 : 5.19 1176.47 4.60 0.00 0.00 106308.46 3753.43 81502.95 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x1000 00:12:17.052 raid1 : 5.20 1173.39 4.58 0.00 0.00 106143.84 4527.94 81979.58 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x1000 length 0x1000 00:12:17.052 raid1 : 5.19 1175.79 4.59 0.00 0.00 106184.01 4587.52 81979.58 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x0 length 0x4e2 00:12:17.052 AIO0 : 5.20 1172.80 4.58 0.00 0.00 105996.04 4170.47 81979.58 00:12:17.052 [2024-11-26T11:22:35.282Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:17.052 Verification LBA range: start 0x4e2 length 0x4e2 00:12:17.052 AIO0 : 5.19 1174.43 4.59 0.00 0.00 106083.50 3991.74 82932.83 00:12:17.052 [2024-11-26T11:22:35.282Z] =================================================================================================================== 00:12:17.052 [2024-11-26T11:22:35.282Z] Total : 38467.97 150.27 0.00 0.00 104693.94 2115.03 207808.70 00:12:17.310 00:12:17.310 real 0m6.186s 00:12:17.310 user 0m11.309s 00:12:17.310 sys 0m0.512s 00:12:17.310 11:22:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:17.310 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:12:17.310 ************************************ 00:12:17.310 END TEST bdev_verify 00:12:17.310 ************************************ 00:12:17.311 11:22:35 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:17.311 11:22:35 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:17.311 11:22:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.311 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:12:17.311 ************************************ 00:12:17.311 START TEST bdev_verify_big_io 00:12:17.311 ************************************ 00:12:17.311 11:22:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:17.569 [2024-11-26 11:22:35.556484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
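[editor's note] The bdev_verify stage that just concluded boils down to one bdevperf invocation, traced near its START banner above. A minimal sketch of rerunning it by hand, assuming the same repo layout shown in this log; flag glosses cover only the options whose meaning the log itself confirms, and -C is passed through exactly as the harness does:

    # flags as traced above: -q queue depth, -o IO size in bytes, -w workload,
    # -t run time in seconds, -m reactor core mask (0x3 -> the two reactors
    # started on cores 0 and 1); -C is carried over from the harness unchanged
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The big-IO variant that starts next is the same invocation with -o 65536, which is what triggers the queue-depth clamp warnings below.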
00:12:17.569 [2024-11-26 11:22:35.556693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77768 ] 00:12:17.569 [2024-11-26 11:22:35.721878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:17.569 [2024-11-26 11:22:35.754982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.569 [2024-11-26 11:22:35.755056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.828 [2024-11-26 11:22:35.857078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:17.828 [2024-11-26 11:22:35.857208] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:17.828 [2024-11-26 11:22:35.865003] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:17.828 [2024-11-26 11:22:35.865099] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:17.828 [2024-11-26 11:22:35.873019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:17.828 [2024-11-26 11:22:35.873089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:17.828 [2024-11-26 11:22:35.873109] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:17.828 [2024-11-26 11:22:35.948784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:17.828 [2024-11-26 11:22:35.948943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.828 [2024-11-26 11:22:35.948979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:17.828 [2024-11-26 11:22:35.948995] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.828 [2024-11-26 11:22:35.951710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.828 [2024-11-26 11:22:35.951765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:18.087 [2024-11-26 11:22:36.085216] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.085989] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.087154] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.088331] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.089104] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.090154] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.090978] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.092068] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.092840] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.093994] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.094732] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.095824] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.096631] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.097727] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.098920] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.099668] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:18.087 [2024-11-26 11:22:36.115469] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:18.087 [2024-11-26 11:22:36.116957] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:18.087 Running I/O for 5 seconds... 00:12:24.660 00:12:24.660 Latency(us) 00:12:24.660 [2024-11-26T11:22:42.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x0 length 0x100 00:12:24.660 Malloc0 : 5.64 309.79 19.36 0.00 0.00 398536.23 28835.84 1067641.02 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x100 length 0x100 00:12:24.660 Malloc0 : 5.68 307.72 19.23 0.00 0.00 407348.06 25856.93 1227787.17 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x0 length 0x80 00:12:24.660 Malloc1p0 : 5.76 176.82 11.05 0.00 0.00 684327.05 53382.05 1304047.24 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x80 length 0x80 00:12:24.660 Malloc1p0 : 5.68 237.55 14.85 0.00 0.00 521802.66 50283.99 1105771.05 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x0 length 0x80 00:12:24.660 Malloc1p1 : 5.98 105.82 6.61 0.00 0.00 1105757.69 53620.36 2287802.18 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x80 length 0x80 00:12:24.660 Malloc1p1 : 5.96 112.06 7.00 0.00 0.00 1061917.59 50283.99 2303054.20 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x0 length 0x20 00:12:24.660 Malloc2p0 : 5.70 61.23 3.83 0.00 0.00 489381.50 9294.20 842673.80 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x20 length 0x20 00:12:24.660 Malloc2p0 : 5.69 61.39 3.84 0.00 0.00 486247.42 9234.62 720657.69 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x0 length 0x20 00:12:24.660 Malloc2p1 : 5.70 61.22 3.83 0.00 0.00 486847.12 9353.77 827421.79 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x20 length 0x20 00:12:24.660 Malloc2p1 : 5.69 61.37 3.84 0.00 0.00 483771.08 8877.15 705405.67 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x0 length 0x20 00:12:24.660 Malloc2p2 : 5.70 
61.20 3.83 0.00 0.00 484556.52 8162.21 808356.77 00:12:24.660 [2024-11-26T11:22:42.890Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.660 Verification LBA range: start 0x20 length 0x20 00:12:24.660 Malloc2p2 : 5.69 61.36 3.83 0.00 0.00 481549.62 8460.10 690153.66 00:12:24.660 [2024-11-26T11:22:42.891Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x20 00:12:24.661 Malloc2p3 : 5.70 61.19 3.82 0.00 0.00 482384.91 8698.41 796917.76 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x20 length 0x20 00:12:24.661 Malloc2p3 : 5.69 61.34 3.83 0.00 0.00 479367.80 7506.85 674901.64 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x20 00:12:24.661 Malloc2p4 : 5.70 61.18 3.82 0.00 0.00 480222.42 8996.31 781665.75 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x20 length 0x20 00:12:24.661 Malloc2p4 : 5.69 61.33 3.83 0.00 0.00 477368.56 7536.64 659649.63 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x20 00:12:24.661 Malloc2p5 : 5.71 61.17 3.82 0.00 0.00 477998.83 8757.99 766413.73 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x20 length 0x20 00:12:24.661 Malloc2p5 : 5.73 64.27 4.02 0.00 0.00 457485.88 7447.27 644397.61 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x20 00:12:24.661 Malloc2p6 : 5.71 61.15 3.82 0.00 0.00 475736.73 9115.46 747348.71 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x20 length 0x20 00:12:24.661 Malloc2p6 : 5.73 64.25 4.02 0.00 0.00 455430.11 7268.54 632958.60 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x20 00:12:24.661 Malloc2p7 : 5.71 61.14 3.82 0.00 0.00 473477.11 10307.03 732096.70 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x20 length 0x20 00:12:24.661 Malloc2p7 : 5.73 64.24 4.01 0.00 0.00 453443.15 7149.38 617706.59 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x100 00:12:24.661 TestPT : 6.05 110.45 6.90 0.00 0.00 1010647.71 53143.74 2257298.15 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x100 length 0x100 00:12:24.661 TestPT : 5.96 107.16 6.70 0.00 0.00 1056315.38 62437.93 2196290.09 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x200 00:12:24.661 raid0 : 
6.03 115.31 7.21 0.00 0.00 957349.89 44326.17 2242046.14 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x200 length 0x200 00:12:24.661 raid0 : 5.97 116.44 7.28 0.00 0.00 961527.56 49330.73 2257298.15 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x200 00:12:24.661 concat0 : 6.01 134.28 8.39 0.00 0.00 819182.87 43372.92 2242046.14 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x200 length 0x200 00:12:24.661 concat0 : 5.97 121.27 7.58 0.00 0.00 908261.98 29312.47 2257298.15 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x100 00:12:24.661 raid1 : 6.01 158.03 9.88 0.00 0.00 684079.51 19660.80 2242046.14 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x100 length 0x100 00:12:24.661 raid1 : 5.97 138.95 8.68 0.00 0.00 786812.32 30504.03 2242046.14 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x0 length 0x4e 00:12:24.661 AIO0 : 6.03 142.99 8.94 0.00 0.00 452866.65 1489.45 1273543.21 00:12:24.661 [2024-11-26T11:22:42.891Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:24.661 Verification LBA range: start 0x4e length 0x4e 00:12:24.661 AIO0 : 5.97 144.52 9.03 0.00 0.00 455662.82 2844.86 1281169.22 00:12:24.661 [2024-11-26T11:22:42.891Z] =================================================================================================================== 00:12:24.661 [2024-11-26T11:22:42.891Z] Total : 3528.18 220.51 0.00 0.00 632710.82 1489.45 2303054.20 00:12:24.661 00:12:24.661 real 0m7.029s 00:12:24.661 user 0m13.091s 00:12:24.661 sys 0m0.442s 00:12:24.661 11:22:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.661 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:12:24.661 ************************************ 00:12:24.661 END TEST bdev_verify_big_io 00:12:24.661 ************************************ 00:12:24.661 11:22:42 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:24.661 11:22:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:24.661 11:22:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.661 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:12:24.661 ************************************ 00:12:24.661 START TEST bdev_write_zeroes 00:12:24.661 ************************************ 00:12:24.661 11:22:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:24.661 [2024-11-26 11:22:42.632328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
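[editor's note] The block of bdevperf warnings at the start of the big-IO run above is expected rather than a failure: a verify job apparently needs each outstanding IO to land on a distinct offset, so the effective queue depth is capped at bdev_size / io_size instead of the requested -q 128. A back-of-envelope check, with the 2 MiB figure inferred from the warning text rather than printed anywhere in the log:

    io_size=65536                 # -o 65536 from the big-IO invocation above
    max_qd=32                     # clamp bdevperf reported for each Malloc2p* bdev
    echo $(( max_qd * io_size ))  # 2097152 bytes, i.e. ~2 MiB of distinct offsets

The AIO0 bdev gets a higher clamp (78) for the same reason: it is simply a larger device divided by the same 64 KiB IO size.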
00:12:24.661 [2024-11-26 11:22:42.632514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77861 ] 00:12:24.661 [2024-11-26 11:22:42.795998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.661 [2024-11-26 11:22:42.827365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.920 [2024-11-26 11:22:42.932154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.920 [2024-11-26 11:22:42.932219] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.920 [2024-11-26 11:22:42.940123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.920 [2024-11-26 11:22:42.940175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.920 [2024-11-26 11:22:42.948144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.920 [2024-11-26 11:22:42.948190] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:24.920 [2024-11-26 11:22:42.948211] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:24.920 [2024-11-26 11:22:43.021900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.920 [2024-11-26 11:22:43.021983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.920 [2024-11-26 11:22:43.022013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:24.920 [2024-11-26 11:22:43.022037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.920 [2024-11-26 11:22:43.024491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.920 [2024-11-26 11:22:43.024543] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:25.180 Running I/O for 1 seconds... 
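[editor's note] The vbdev_passthru notices just above show the TestPT bdev being stacked on Malloc3: creation is deferred until the base bdev arrives, then the base is opened, claimed, and pt_bdev is registered. A sketch of doing the equivalent against a live target over RPC, assuming the bdev_passthru_create method of this SPDK version behaves as named (the bdev.json config presumably encodes the same thing declaratively):

    # stack a passthru vbdev named TestPT on the existing Malloc3 bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_passthru_create -b Malloc3 -p TestPT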
00:12:26.117 00:12:26.117 Latency(us) 00:12:26.117 [2024-11-26T11:22:44.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc0 : 1.03 5739.74 22.42 0.00 0.00 22282.04 677.70 36461.85 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc1p0 : 1.03 5733.32 22.40 0.00 0.00 22278.32 688.87 35746.91 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc1p1 : 1.03 5726.92 22.37 0.00 0.00 22263.55 703.77 35031.97 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p0 : 1.03 5719.10 22.34 0.00 0.00 22247.98 700.04 34317.03 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p1 : 1.05 5733.36 22.40 0.00 0.00 22154.31 722.39 33602.09 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p2 : 1.05 5727.11 22.37 0.00 0.00 22144.36 700.04 32648.84 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p3 : 1.05 5720.93 22.35 0.00 0.00 22128.78 700.04 31933.91 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p4 : 1.05 5713.50 22.32 0.00 0.00 22121.37 685.15 31218.97 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p5 : 1.05 5707.15 22.29 0.00 0.00 22115.04 681.43 30742.34 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p6 : 1.06 5700.73 22.27 0.00 0.00 22096.18 763.35 29908.25 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 Malloc2p7 : 1.06 5693.95 22.24 0.00 0.00 22085.55 703.77 29193.31 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 TestPT : 1.06 5687.72 22.22 0.00 0.00 22076.73 733.56 28478.37 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 raid0 : 1.06 5680.42 22.19 0.00 0.00 22054.79 1325.61 27048.49 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 concat0 : 1.06 5672.98 22.16 0.00 0.00 22008.84 1362.85 26333.56 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 raid1 : 1.06 5663.96 22.12 0.00 0.00 21964.30 2234.18 25737.77 00:12:26.117 [2024-11-26T11:22:44.347Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:26.117 AIO0 : 1.06 5649.09 22.07 0.00 0.00 21922.23 1675.64 25856.93 00:12:26.117 [2024-11-26T11:22:44.347Z] =================================================================================================================== 00:12:26.117 [2024-11-26T11:22:44.347Z] Total : 91269.97 356.52 0.00 
0.00 22120.76 677.70 36461.85 00:12:26.377 00:12:26.377 real 0m1.963s 00:12:26.377 user 0m1.541s 00:12:26.377 sys 0m0.270s 00:12:26.377 11:22:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.377 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:12:26.377 ************************************ 00:12:26.377 END TEST bdev_write_zeroes 00:12:26.377 ************************************ 00:12:26.377 11:22:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:26.377 11:22:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:26.377 11:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.377 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:12:26.377 ************************************ 00:12:26.377 START TEST bdev_json_nonenclosed 00:12:26.377 ************************************ 00:12:26.377 11:22:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:26.636 [2024-11-26 11:22:44.655943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:26.636 [2024-11-26 11:22:44.656128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77903 ] 00:12:26.636 [2024-11-26 11:22:44.821171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.636 [2024-11-26 11:22:44.853017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.636 [2024-11-26 11:22:44.853236] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:26.636 [2024-11-26 11:22:44.853265] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:26.896 00:12:26.896 real 0m0.351s 00:12:26.896 user 0m0.158s 00:12:26.896 sys 0m0.093s 00:12:26.896 11:22:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.896 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:12:26.896 ************************************ 00:12:26.896 END TEST bdev_json_nonenclosed 00:12:26.896 ************************************ 00:12:26.896 11:22:44 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:26.896 11:22:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:26.896 11:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.896 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:12:26.896 ************************************ 00:12:26.896 START TEST bdev_json_nonarray 00:12:26.896 ************************************ 00:12:26.896 11:22:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:26.896 [2024-11-26 11:22:45.061735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
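[editor's note] The two bdev_json_* tests around this point are deliberate negative cases: bdevperf is pointed at malformed configs and the test passes when the app stops with a non-zero status, which is why the ERROR lines here are expected output. The fixture files themselves are not reproduced in the log; hypothetical shapes that would trip exactly these two messages, written as heredocs for illustration only:

    # 'not enclosed in {}': top-level JSON value is an array, not an object
    cat > nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF
    # ''subsystems' should be an array': an object where an array is required
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF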
00:12:26.896 [2024-11-26 11:22:45.061966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77923 ] 00:12:27.156 [2024-11-26 11:22:45.229146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.156 [2024-11-26 11:22:45.263433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.156 [2024-11-26 11:22:45.263703] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:27.156 [2024-11-26 11:22:45.263743] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:27.156 ************************************ 00:12:27.156 END TEST bdev_json_nonarray 00:12:27.156 ************************************ 00:12:27.156 00:12:27.156 real 0m0.348s 00:12:27.156 user 0m0.154s 00:12:27.156 sys 0m0.093s 00:12:27.156 11:22:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:27.156 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:12:27.415 11:22:45 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:12:27.415 11:22:45 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:12:27.415 11:22:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:27.415 11:22:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.415 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:12:27.415 ************************************ 00:12:27.415 START TEST bdev_qos 00:12:27.415 ************************************ 00:12:27.415 11:22:45 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:12:27.415 11:22:45 -- bdev/blockdev.sh@444 -- # QOS_PID=77954 00:12:27.415 Process qos testing pid: 77954 00:12:27.415 11:22:45 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 77954' 00:12:27.415 11:22:45 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:27.415 11:22:45 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:27.415 11:22:45 -- bdev/blockdev.sh@447 -- # waitforlisten 77954 00:12:27.415 11:22:45 -- common/autotest_common.sh@829 -- # '[' -z 77954 ']' 00:12:27.415 11:22:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.415 11:22:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.415 11:22:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.415 11:22:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.415 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:12:27.415 [2024-11-26 11:22:45.455728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
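[editor's note] For the QoS suite the harness starts bdevperf with -z, which makes the app sit idle until driven over RPC (the perform_tests call appears further down), and waitforlisten blocks until /var/tmp/spdk.sock accepts requests. A minimal stand-in for that wait, assuming only rpc.py and the standard rpc_get_methods call rather than the actual waitforlisten helper:

    # poll until the application answers on its RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done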
00:12:27.415 [2024-11-26 11:22:45.455917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77954 ] 00:12:27.415 [2024-11-26 11:22:45.610466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.675 [2024-11-26 11:22:45.651112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.245 11:22:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.245 11:22:46 -- common/autotest_common.sh@862 -- # return 0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:28.245 11:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.245 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 Malloc_0 00:12:28.245 11:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.245 11:22:46 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:12:28.245 11:22:46 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:12:28.245 11:22:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:28.245 11:22:46 -- common/autotest_common.sh@899 -- # local i 00:12:28.245 11:22:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:28.245 11:22:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:28.245 11:22:46 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:28.245 11:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.245 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 11:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.245 11:22:46 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:28.245 11:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.245 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 [ 00:12:28.245 { 00:12:28.245 "name": "Malloc_0", 00:12:28.245 "aliases": [ 00:12:28.245 "377317f3-bf94-41d5-85e2-bc63d97a06f2" 00:12:28.245 ], 00:12:28.245 "product_name": "Malloc disk", 00:12:28.245 "block_size": 512, 00:12:28.245 "num_blocks": 262144, 00:12:28.245 "uuid": "377317f3-bf94-41d5-85e2-bc63d97a06f2", 00:12:28.245 "assigned_rate_limits": { 00:12:28.245 "rw_ios_per_sec": 0, 00:12:28.245 "rw_mbytes_per_sec": 0, 00:12:28.245 "r_mbytes_per_sec": 0, 00:12:28.245 "w_mbytes_per_sec": 0 00:12:28.245 }, 00:12:28.245 "claimed": false, 00:12:28.245 "zoned": false, 00:12:28.245 "supported_io_types": { 00:12:28.245 "read": true, 00:12:28.245 "write": true, 00:12:28.245 "unmap": true, 00:12:28.245 "write_zeroes": true, 00:12:28.245 "flush": true, 00:12:28.245 "reset": true, 00:12:28.245 "compare": false, 00:12:28.245 "compare_and_write": false, 00:12:28.245 "abort": true, 00:12:28.245 "nvme_admin": false, 00:12:28.245 "nvme_io": false 00:12:28.245 }, 00:12:28.245 "memory_domains": [ 00:12:28.245 { 00:12:28.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.245 "dma_device_type": 2 00:12:28.245 } 00:12:28.245 ], 00:12:28.245 "driver_specific": {} 00:12:28.245 } 00:12:28.245 ] 00:12:28.245 11:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.245 11:22:46 -- common/autotest_common.sh@905 -- # return 0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:28.245 11:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.245 11:22:46 -- common/autotest_common.sh@10 -- # 
set +x 00:12:28.245 Null_1 00:12:28.245 11:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.245 11:22:46 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:12:28.245 11:22:46 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:12:28.245 11:22:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:28.245 11:22:46 -- common/autotest_common.sh@899 -- # local i 00:12:28.245 11:22:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:28.245 11:22:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:28.245 11:22:46 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:28.245 11:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.245 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 11:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.245 11:22:46 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:28.245 11:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.245 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:12:28.245 [ 00:12:28.245 { 00:12:28.245 "name": "Null_1", 00:12:28.245 "aliases": [ 00:12:28.245 "bb236561-64b0-411d-b4c8-c705515e6b5e" 00:12:28.245 ], 00:12:28.245 "product_name": "Null disk", 00:12:28.245 "block_size": 512, 00:12:28.245 "num_blocks": 262144, 00:12:28.245 "uuid": "bb236561-64b0-411d-b4c8-c705515e6b5e", 00:12:28.245 "assigned_rate_limits": { 00:12:28.245 "rw_ios_per_sec": 0, 00:12:28.245 "rw_mbytes_per_sec": 0, 00:12:28.245 "r_mbytes_per_sec": 0, 00:12:28.245 "w_mbytes_per_sec": 0 00:12:28.245 }, 00:12:28.245 "claimed": false, 00:12:28.245 "zoned": false, 00:12:28.245 "supported_io_types": { 00:12:28.245 "read": true, 00:12:28.245 "write": true, 00:12:28.245 "unmap": false, 00:12:28.245 "write_zeroes": true, 00:12:28.245 "flush": false, 00:12:28.245 "reset": true, 00:12:28.245 "compare": false, 00:12:28.245 "compare_and_write": false, 00:12:28.245 "abort": true, 00:12:28.245 "nvme_admin": false, 00:12:28.245 "nvme_io": false 00:12:28.245 }, 00:12:28.245 "driver_specific": {} 00:12:28.245 } 00:12:28.245 ] 00:12:28.245 11:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.245 11:22:46 -- common/autotest_common.sh@905 -- # return 0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@455 -- # qos_function_test 00:12:28.245 11:22:46 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.245 11:22:46 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:12:28.245 11:22:46 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:12:28.245 11:22:46 -- bdev/blockdev.sh@410 -- # local io_result=0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:28.245 11:22:46 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:28.245 11:22:46 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:28.245 11:22:46 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:28.245 11:22:46 -- bdev/blockdev.sh@376 -- # tail -1 00:12:28.505 Running I/O for 60 seconds... 
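[editor's note] qos_function_test first measures the device with no limit applied: get_io_result runs iostat.py with five one-second samples, greps the Malloc_0 row, and (per the awk trace that follows) takes column 2 as the IOPS figure; the limit is then a quarter of that free-run rate, floored to a multiple of 1000, which is exactly how the 73200 measured below becomes the 18000 IOPS cap. A sketch of that pipeline and derivation, reconstructed from the traced commands and the printed numbers rather than from the script source:

    result=$(/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 \
             | grep Malloc_0 | tail -1 | awk '{print $2}')  # e.g. 73200.83
    io_result=${result%.*}                                  # 73200
    iops_limit=$(( io_result / 4 / 1000 * 1000 ))           # -> 18000

The bandwidth stages below follow the same pattern (97280 KiB/s measured on Null_1, throttled to 9 MiB/s; the read-only stage uses a fixed 2 MiB/s cap), and each run_qos_test pass accepts a measured result within roughly +/-10% of the configured limit, hence the 16200-19800, 8294-10137, and 1843-2252 windows printed by the checks.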
00:12:33.803 11:22:51 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 73200.83 292803.33 0.00 0.00 295936.00 0.00 0.00 ' 00:12:33.803 11:22:51 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:33.803 11:22:51 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:33.803 11:22:51 -- bdev/blockdev.sh@378 -- # iostat_result=73200.83 00:12:33.803 11:22:51 -- bdev/blockdev.sh@383 -- # echo 73200 00:12:33.803 11:22:51 -- bdev/blockdev.sh@414 -- # io_result=73200 00:12:33.803 11:22:51 -- bdev/blockdev.sh@416 -- # iops_limit=18000 00:12:33.803 11:22:51 -- bdev/blockdev.sh@417 -- # '[' 18000 -gt 1000 ']' 00:12:33.803 11:22:51 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0 00:12:33.803 11:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.803 11:22:51 -- common/autotest_common.sh@10 -- # set +x 00:12:33.803 11:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.803 11:22:51 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 18000 IOPS Malloc_0 00:12:33.803 11:22:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:33.803 11:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.803 11:22:51 -- common/autotest_common.sh@10 -- # set +x 00:12:33.803 ************************************ 00:12:33.803 START TEST bdev_qos_iops 00:12:33.803 ************************************ 00:12:33.803 11:22:51 -- common/autotest_common.sh@1114 -- # run_qos_test 18000 IOPS Malloc_0 00:12:33.803 11:22:51 -- bdev/blockdev.sh@387 -- # local qos_limit=18000 00:12:33.803 11:22:51 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:33.803 11:22:51 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:12:33.803 11:22:51 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:33.803 11:22:51 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:33.803 11:22:51 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:33.803 11:22:51 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:33.803 11:22:51 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:33.803 11:22:51 -- bdev/blockdev.sh@376 -- # tail -1 00:12:39.072 11:22:56 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 17988.10 71952.39 0.00 0.00 73224.00 0.00 0.00 ' 00:12:39.072 11:22:56 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:39.072 11:22:56 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:39.072 11:22:56 -- bdev/blockdev.sh@378 -- # iostat_result=17988.10 00:12:39.072 11:22:56 -- bdev/blockdev.sh@383 -- # echo 17988 00:12:39.072 11:22:56 -- bdev/blockdev.sh@390 -- # qos_result=17988 00:12:39.072 11:22:56 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:12:39.072 11:22:56 -- bdev/blockdev.sh@394 -- # lower_limit=16200 00:12:39.072 11:22:56 -- bdev/blockdev.sh@395 -- # upper_limit=19800 00:12:39.072 11:22:56 -- bdev/blockdev.sh@398 -- # '[' 17988 -lt 16200 ']' 00:12:39.072 11:22:56 -- bdev/blockdev.sh@398 -- # '[' 17988 -gt 19800 ']' 00:12:39.072 00:12:39.072 real 0m5.229s 00:12:39.072 user 0m0.130s 00:12:39.072 sys 0m0.032s 00:12:39.072 11:22:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:39.072 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.072 ************************************ 00:12:39.072 END TEST bdev_qos_iops 00:12:39.072 ************************************ 00:12:39.072 11:22:56 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:12:39.072 11:22:56 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:39.072 11:22:56 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:39.072 11:22:56 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:39.072 11:22:56 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:39.072 11:22:56 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:39.072 11:22:56 -- bdev/blockdev.sh@376 -- # tail -1 00:12:44.339 11:23:02 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 23806.34 95225.35 0.00 0.00 97280.00 0.00 0.00 ' 00:12:44.339 11:23:02 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:44.339 11:23:02 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:44.339 11:23:02 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:44.339 11:23:02 -- bdev/blockdev.sh@380 -- # iostat_result=97280.00 00:12:44.339 11:23:02 -- bdev/blockdev.sh@383 -- # echo 97280 00:12:44.339 11:23:02 -- bdev/blockdev.sh@425 -- # bw_limit=97280 00:12:44.339 11:23:02 -- bdev/blockdev.sh@426 -- # bw_limit=9 00:12:44.339 11:23:02 -- bdev/blockdev.sh@427 -- # '[' 9 -lt 2 ']' 00:12:44.339 11:23:02 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:12:44.339 11:23:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.339 11:23:02 -- common/autotest_common.sh@10 -- # set +x 00:12:44.339 11:23:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.339 11:23:02 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:12:44.339 11:23:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:44.339 11:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.339 11:23:02 -- common/autotest_common.sh@10 -- # set +x 00:12:44.339 ************************************ 00:12:44.339 START TEST bdev_qos_bw 00:12:44.339 ************************************ 00:12:44.339 11:23:02 -- common/autotest_common.sh@1114 -- # run_qos_test 9 BANDWIDTH Null_1 00:12:44.339 11:23:02 -- bdev/blockdev.sh@387 -- # local qos_limit=9 00:12:44.339 11:23:02 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:44.339 11:23:02 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:12:44.339 11:23:02 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:44.339 11:23:02 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:44.339 11:23:02 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:44.339 11:23:02 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:44.339 11:23:02 -- bdev/blockdev.sh@376 -- # tail -1 00:12:44.339 11:23:02 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:49.609 11:23:07 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2301.69 9206.74 0.00 0.00 9464.00 0.00 0.00 ' 00:12:49.609 11:23:07 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:49.609 11:23:07 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:49.609 11:23:07 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:49.609 11:23:07 -- bdev/blockdev.sh@380 -- # iostat_result=9464.00 00:12:49.609 11:23:07 -- bdev/blockdev.sh@383 -- # echo 9464 00:12:49.609 11:23:07 -- bdev/blockdev.sh@390 -- # qos_result=9464 00:12:49.609 11:23:07 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:49.609 11:23:07 -- bdev/blockdev.sh@392 -- # qos_limit=9216 00:12:49.609 ************************************ 00:12:49.609 END TEST bdev_qos_bw 00:12:49.609 ************************************ 00:12:49.609 11:23:07 -- bdev/blockdev.sh@394 -- # lower_limit=8294 00:12:49.609 11:23:07 -- bdev/blockdev.sh@395 -- # upper_limit=10137 00:12:49.609 
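The lower_limit/upper_limit pairs traced here come from a fixed +/-10% tolerance band around the configured QoS cap: 18000 IOPS gives 16200/19800, and the 9216 KiB/s bandwidth cap gives 8294/10137. Below is a minimal bash sketch of that acceptance check; the function name check_qos_result and the truncating integer arithmetic are assumptions paraphrasing the traced run_qos_test logic rather than a verbatim quote of the SPDK script, though the traced values match the math exactly.

check_qos_result() {
    # qos_limit: configured cap; qos_result: rate measured via iostat.py
    local qos_limit=$1 qos_result=$2
    local lower_limit=$((qos_limit * 9 / 10))    # 9216 -> 8294 (bash truncates)
    local upper_limit=$((qos_limit * 11 / 10))   # 9216 -> 10137
    # succeed only when the measured rate stays inside the band
    [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ]
}
check_qos_result 18000 17988   # passes, matching the bdev_qos_iops run above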
11:23:07 -- bdev/blockdev.sh@398 -- # '[' 9464 -lt 8294 ']' 00:12:49.609 11:23:07 -- bdev/blockdev.sh@398 -- # '[' 9464 -gt 10137 ']' 00:12:49.609 00:12:49.609 real 0m5.274s 00:12:49.609 user 0m0.132s 00:12:49.609 sys 0m0.033s 00:12:49.609 11:23:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.609 11:23:07 -- common/autotest_common.sh@10 -- # set +x 00:12:49.609 11:23:07 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:49.609 11:23:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.609 11:23:07 -- common/autotest_common.sh@10 -- # set +x 00:12:49.609 11:23:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.610 11:23:07 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:49.610 11:23:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:49.610 11:23:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.610 11:23:07 -- common/autotest_common.sh@10 -- # set +x 00:12:49.610 ************************************ 00:12:49.610 START TEST bdev_qos_ro_bw 00:12:49.610 ************************************ 00:12:49.610 11:23:07 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:49.610 11:23:07 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:12:49.610 11:23:07 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:49.610 11:23:07 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:12:49.610 11:23:07 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:49.610 11:23:07 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:49.610 11:23:07 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:49.610 11:23:07 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:49.610 11:23:07 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:49.610 11:23:07 -- bdev/blockdev.sh@376 -- # tail -1 00:12:54.872 11:23:12 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.47 2045.87 0.00 0.00 2060.00 0.00 0.00 ' 00:12:54.872 11:23:12 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:54.872 11:23:12 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:54.872 11:23:12 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:54.872 11:23:12 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:12:54.872 11:23:12 -- bdev/blockdev.sh@383 -- # echo 2060 00:12:54.872 ************************************ 00:12:54.872 END TEST bdev_qos_ro_bw 00:12:54.872 ************************************ 00:12:54.872 11:23:12 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:12:54.872 11:23:12 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:54.872 11:23:12 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:12:54.872 11:23:12 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:12:54.872 11:23:12 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:12:54.872 11:23:12 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:12:54.872 11:23:12 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:12:54.872 00:12:54.872 real 0m5.188s 00:12:54.872 user 0m0.135s 00:12:54.872 sys 0m0.033s 00:12:54.872 11:23:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:54.872 11:23:12 -- common/autotest_common.sh@10 -- # set +x 00:12:54.872 11:23:12 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:54.872 11:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.872 11:23:12 -- common/autotest_common.sh@10 -- # set +x 00:12:55.441 11:23:13 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.441 11:23:13 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:12:55.441 11:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.441 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:12:55.441 00:12:55.441 Latency(us) 00:12:55.441 [2024-11-26T11:23:13.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.441 [2024-11-26T11:23:13.671Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:55.441 Malloc_0 : 26.77 24095.30 94.12 0.00 0.00 10524.57 2323.55 503316.48 00:12:55.441 [2024-11-26T11:23:13.671Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:55.441 Null_1 : 26.90 23396.18 91.39 0.00 0.00 10914.20 599.51 129642.12 00:12:55.441 [2024-11-26T11:23:13.671Z] =================================================================================================================== 00:12:55.441 [2024-11-26T11:23:13.671Z] Total : 47491.48 185.51 0.00 0.00 10716.98 599.51 503316.48 00:12:55.441 0 00:12:55.441 11:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.441 11:23:13 -- bdev/blockdev.sh@459 -- # killprocess 77954 00:12:55.441 11:23:13 -- common/autotest_common.sh@936 -- # '[' -z 77954 ']' 00:12:55.441 11:23:13 -- common/autotest_common.sh@940 -- # kill -0 77954 00:12:55.441 11:23:13 -- common/autotest_common.sh@941 -- # uname 00:12:55.441 11:23:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.441 11:23:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77954 00:12:55.441 killing process with pid 77954 00:12:55.441 Received shutdown signal, test time was about 26.942758 seconds 00:12:55.441 00:12:55.441 Latency(us) 00:12:55.441 [2024-11-26T11:23:13.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.441 [2024-11-26T11:23:13.671Z] =================================================================================================================== 00:12:55.441 [2024-11-26T11:23:13.671Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.441 11:23:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:55.441 11:23:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:55.441 11:23:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77954' 00:12:55.441 11:23:13 -- common/autotest_common.sh@955 -- # kill 77954 00:12:55.441 11:23:13 -- common/autotest_common.sh@960 -- # wait 77954 00:12:55.719 11:23:13 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:12:55.720 00:12:55.720 real 0m28.308s 00:12:55.720 user 0m29.217s 00:12:55.720 sys 0m0.583s 00:12:55.720 ************************************ 00:12:55.720 END TEST bdev_qos 00:12:55.720 ************************************ 00:12:55.720 11:23:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:55.720 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:12:55.720 11:23:13 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:55.720 11:23:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.720 11:23:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.720 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:12:55.720 ************************************ 00:12:55.720 START TEST bdev_qd_sampling 00:12:55.720 ************************************ 00:12:55.720 11:23:13 -- common/autotest_common.sh@1114 -- # qd_sampling_test_suite '' 00:12:55.720 11:23:13 -- 
bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:12:55.720 11:23:13 -- bdev/blockdev.sh@539 -- # QD_PID=78361 00:12:55.720 Process bdev QD sampling period testing pid: 78361 00:12:55.720 11:23:13 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 78361' 00:12:55.720 11:23:13 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:55.720 11:23:13 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:55.720 11:23:13 -- bdev/blockdev.sh@542 -- # waitforlisten 78361 00:12:55.720 11:23:13 -- common/autotest_common.sh@829 -- # '[' -z 78361 ']' 00:12:55.720 11:23:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.720 11:23:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.720 11:23:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.720 11:23:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.720 11:23:13 -- common/autotest_common.sh@10 -- # set +x 00:12:55.720 [2024-11-26 11:23:13.835979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:55.720 [2024-11-26 11:23:13.836177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78361 ] 00:12:55.992 [2024-11-26 11:23:14.004494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:55.992 [2024-11-26 11:23:14.047510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.992 [2024-11-26 11:23:14.047547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.560 11:23:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.560 11:23:14 -- common/autotest_common.sh@862 -- # return 0 00:12:56.560 11:23:14 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:56.560 11:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.560 11:23:14 -- common/autotest_common.sh@10 -- # set +x 00:12:56.560 Malloc_QD 00:12:56.560 11:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.560 11:23:14 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:12:56.560 11:23:14 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:12:56.560 11:23:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:56.560 11:23:14 -- common/autotest_common.sh@899 -- # local i 00:12:56.560 11:23:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:56.560 11:23:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:56.560 11:23:14 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:56.560 11:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.560 11:23:14 -- common/autotest_common.sh@10 -- # set +x 00:12:56.560 11:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.560 11:23:14 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:56.560 11:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.560 11:23:14 -- common/autotest_common.sh@10 -- # set +x 00:12:56.818 [ 00:12:56.818 { 00:12:56.818 "name": "Malloc_QD", 00:12:56.818 
"aliases": [ 00:12:56.818 "c13215f4-6ea9-44e7-a4e9-46f327882e52" 00:12:56.818 ], 00:12:56.818 "product_name": "Malloc disk", 00:12:56.818 "block_size": 512, 00:12:56.818 "num_blocks": 262144, 00:12:56.818 "uuid": "c13215f4-6ea9-44e7-a4e9-46f327882e52", 00:12:56.818 "assigned_rate_limits": { 00:12:56.818 "rw_ios_per_sec": 0, 00:12:56.818 "rw_mbytes_per_sec": 0, 00:12:56.818 "r_mbytes_per_sec": 0, 00:12:56.818 "w_mbytes_per_sec": 0 00:12:56.818 }, 00:12:56.818 "claimed": false, 00:12:56.818 "zoned": false, 00:12:56.818 "supported_io_types": { 00:12:56.818 "read": true, 00:12:56.818 "write": true, 00:12:56.818 "unmap": true, 00:12:56.818 "write_zeroes": true, 00:12:56.818 "flush": true, 00:12:56.818 "reset": true, 00:12:56.818 "compare": false, 00:12:56.818 "compare_and_write": false, 00:12:56.818 "abort": true, 00:12:56.818 "nvme_admin": false, 00:12:56.818 "nvme_io": false 00:12:56.818 }, 00:12:56.818 "memory_domains": [ 00:12:56.818 { 00:12:56.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.818 "dma_device_type": 2 00:12:56.818 } 00:12:56.818 ], 00:12:56.818 "driver_specific": {} 00:12:56.818 } 00:12:56.818 ] 00:12:56.818 11:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.818 11:23:14 -- common/autotest_common.sh@905 -- # return 0 00:12:56.818 11:23:14 -- bdev/blockdev.sh@548 -- # sleep 2 00:12:56.818 11:23:14 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:56.818 Running I/O for 5 seconds... 00:12:58.722 11:23:16 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:12:58.722 11:23:16 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:12:58.722 11:23:16 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:12:58.722 11:23:16 -- bdev/blockdev.sh@519 -- # local iostats 00:12:58.722 11:23:16 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:58.722 11:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.722 11:23:16 -- common/autotest_common.sh@10 -- # set +x 00:12:58.722 11:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.722 11:23:16 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:58.722 11:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.722 11:23:16 -- common/autotest_common.sh@10 -- # set +x 00:12:58.722 11:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.722 11:23:16 -- bdev/blockdev.sh@523 -- # iostats='{ 00:12:58.722 "tick_rate": 2200000000, 00:12:58.722 "ticks": 1559466756269, 00:12:58.722 "bdevs": [ 00:12:58.722 { 00:12:58.722 "name": "Malloc_QD", 00:12:58.722 "bytes_read": 873501184, 00:12:58.722 "num_read_ops": 213251, 00:12:58.722 "bytes_written": 0, 00:12:58.722 "num_write_ops": 0, 00:12:58.722 "bytes_unmapped": 0, 00:12:58.722 "num_unmap_ops": 0, 00:12:58.722 "bytes_copied": 0, 00:12:58.722 "num_copy_ops": 0, 00:12:58.722 "read_latency_ticks": 2163847608113, 00:12:58.722 "max_read_latency_ticks": 12547856, 00:12:58.722 "min_read_latency_ticks": 401604, 00:12:58.722 "write_latency_ticks": 0, 00:12:58.722 "max_write_latency_ticks": 0, 00:12:58.722 "min_write_latency_ticks": 0, 00:12:58.722 "unmap_latency_ticks": 0, 00:12:58.722 "max_unmap_latency_ticks": 0, 00:12:58.722 "min_unmap_latency_ticks": 0, 00:12:58.722 "copy_latency_ticks": 0, 00:12:58.722 "max_copy_latency_ticks": 0, 00:12:58.722 "min_copy_latency_ticks": 0, 00:12:58.722 "io_error": {}, 00:12:58.722 "queue_depth_polling_period": 10, 00:12:58.722 "queue_depth": 512, 00:12:58.722 
"io_time": 30, 00:12:58.722 "weighted_io_time": 15360 00:12:58.722 } 00:12:58.722 ] 00:12:58.722 }' 00:12:58.722 11:23:16 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:12:58.722 11:23:16 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:12:58.722 11:23:16 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:12:58.722 11:23:16 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:12:58.722 11:23:16 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:58.722 11:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.722 11:23:16 -- common/autotest_common.sh@10 -- # set +x 00:12:58.722 00:12:58.722 Latency(us) 00:12:58.722 [2024-11-26T11:23:16.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.722 [2024-11-26T11:23:16.952Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:58.722 Malloc_QD : 1.97 54937.33 214.60 0.00 0.00 4647.92 1288.38 5719.51 00:12:58.722 [2024-11-26T11:23:16.952Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:58.722 Malloc_QD : 1.97 55801.87 217.98 0.00 0.00 4576.21 923.46 5600.35 00:12:58.722 [2024-11-26T11:23:16.952Z] =================================================================================================================== 00:12:58.722 [2024-11-26T11:23:16.952Z] Total : 110739.20 432.57 0.00 0.00 4611.77 923.46 5719.51 00:12:58.722 0 00:12:58.722 11:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.722 11:23:16 -- bdev/blockdev.sh@552 -- # killprocess 78361 00:12:58.722 11:23:16 -- common/autotest_common.sh@936 -- # '[' -z 78361 ']' 00:12:58.722 11:23:16 -- common/autotest_common.sh@940 -- # kill -0 78361 00:12:58.722 11:23:16 -- common/autotest_common.sh@941 -- # uname 00:12:58.722 11:23:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.722 11:23:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78361 00:12:58.722 killing process with pid 78361 00:12:58.722 Received shutdown signal, test time was about 2.020943 seconds 00:12:58.722 00:12:58.722 Latency(us) 00:12:58.722 [2024-11-26T11:23:16.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.722 [2024-11-26T11:23:16.952Z] =================================================================================================================== 00:12:58.722 [2024-11-26T11:23:16.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.722 11:23:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.722 11:23:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.722 11:23:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78361' 00:12:58.722 11:23:16 -- common/autotest_common.sh@955 -- # kill 78361 00:12:58.722 11:23:16 -- common/autotest_common.sh@960 -- # wait 78361 00:12:58.981 11:23:17 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:12:58.981 00:12:58.981 real 0m3.349s 00:12:58.981 user 0m6.488s 00:12:58.981 sys 0m0.342s 00:12:58.981 11:23:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:58.981 ************************************ 00:12:58.981 END TEST bdev_qd_sampling 00:12:58.981 ************************************ 00:12:58.981 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.981 11:23:17 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:12:58.981 11:23:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.981 11:23:17 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.981 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.981 ************************************ 00:12:58.981 START TEST bdev_error 00:12:58.981 ************************************ 00:12:58.981 11:23:17 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:12:58.981 11:23:17 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:12:58.981 11:23:17 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:12:58.981 11:23:17 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:12:58.981 Process error testing pid: 78427 00:12:58.981 11:23:17 -- bdev/blockdev.sh@470 -- # ERR_PID=78427 00:12:58.981 11:23:17 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 78427' 00:12:58.981 11:23:17 -- bdev/blockdev.sh@472 -- # waitforlisten 78427 00:12:58.981 11:23:17 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:58.981 11:23:17 -- common/autotest_common.sh@829 -- # '[' -z 78427 ']' 00:12:58.981 11:23:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.981 11:23:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.981 11:23:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.981 11:23:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.981 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:12:59.240 [2024-11-26 11:23:17.234870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.240 [2024-11-26 11:23:17.235269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78427 ] 00:12:59.240 [2024-11-26 11:23:17.393882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.240 [2024-11-26 11:23:17.429211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.177 11:23:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.177 11:23:18 -- common/autotest_common.sh@862 -- # return 0 00:13:00.177 11:23:18 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:00.177 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.177 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.177 Dev_1 00:13:00.177 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.177 11:23:18 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:00.177 11:23:18 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:00.177 11:23:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:00.177 11:23:18 -- common/autotest_common.sh@899 -- # local i 00:13:00.177 11:23:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.177 11:23:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.177 11:23:18 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:00.177 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.177 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.177 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.177 11:23:18 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:00.177 
11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.177 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.177 [ 00:13:00.177 { 00:13:00.177 "name": "Dev_1", 00:13:00.177 "aliases": [ 00:13:00.177 "01d42680-28d9-4ca9-839c-7049cbb2c990" 00:13:00.177 ], 00:13:00.177 "product_name": "Malloc disk", 00:13:00.177 "block_size": 512, 00:13:00.177 "num_blocks": 262144, 00:13:00.177 "uuid": "01d42680-28d9-4ca9-839c-7049cbb2c990", 00:13:00.177 "assigned_rate_limits": { 00:13:00.177 "rw_ios_per_sec": 0, 00:13:00.177 "rw_mbytes_per_sec": 0, 00:13:00.177 "r_mbytes_per_sec": 0, 00:13:00.177 "w_mbytes_per_sec": 0 00:13:00.177 }, 00:13:00.177 "claimed": false, 00:13:00.177 "zoned": false, 00:13:00.177 "supported_io_types": { 00:13:00.177 "read": true, 00:13:00.177 "write": true, 00:13:00.177 "unmap": true, 00:13:00.177 "write_zeroes": true, 00:13:00.177 "flush": true, 00:13:00.177 "reset": true, 00:13:00.177 "compare": false, 00:13:00.177 "compare_and_write": false, 00:13:00.177 "abort": true, 00:13:00.177 "nvme_admin": false, 00:13:00.177 "nvme_io": false 00:13:00.177 }, 00:13:00.177 "memory_domains": [ 00:13:00.178 { 00:13:00.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.178 "dma_device_type": 2 00:13:00.178 } 00:13:00.178 ], 00:13:00.178 "driver_specific": {} 00:13:00.178 } 00:13:00.178 ] 00:13:00.178 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.178 11:23:18 -- common/autotest_common.sh@905 -- # return 0 00:13:00.178 11:23:18 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:00.178 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.178 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 true 00:13:00.178 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.178 11:23:18 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:00.178 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.178 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 Dev_2 00:13:00.178 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.178 11:23:18 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:00.178 11:23:18 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:00.178 11:23:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:00.178 11:23:18 -- common/autotest_common.sh@899 -- # local i 00:13:00.178 11:23:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.178 11:23:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.178 11:23:18 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:00.178 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.178 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.178 11:23:18 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:00.178 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.178 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 [ 00:13:00.178 { 00:13:00.178 "name": "Dev_2", 00:13:00.178 "aliases": [ 00:13:00.178 "1bd7fefc-e243-4fb9-8baa-5639964c8e54" 00:13:00.178 ], 00:13:00.178 "product_name": "Malloc disk", 00:13:00.178 "block_size": 512, 00:13:00.178 "num_blocks": 262144, 00:13:00.178 "uuid": "1bd7fefc-e243-4fb9-8baa-5639964c8e54", 00:13:00.178 "assigned_rate_limits": { 00:13:00.178 "rw_ios_per_sec": 0, 00:13:00.178 "rw_mbytes_per_sec": 0, 
00:13:00.178 "r_mbytes_per_sec": 0, 00:13:00.178 "w_mbytes_per_sec": 0 00:13:00.178 }, 00:13:00.178 "claimed": false, 00:13:00.178 "zoned": false, 00:13:00.178 "supported_io_types": { 00:13:00.178 "read": true, 00:13:00.178 "write": true, 00:13:00.178 "unmap": true, 00:13:00.178 "write_zeroes": true, 00:13:00.178 "flush": true, 00:13:00.178 "reset": true, 00:13:00.178 "compare": false, 00:13:00.178 "compare_and_write": false, 00:13:00.178 "abort": true, 00:13:00.178 "nvme_admin": false, 00:13:00.178 "nvme_io": false 00:13:00.178 }, 00:13:00.178 "memory_domains": [ 00:13:00.178 { 00:13:00.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.178 "dma_device_type": 2 00:13:00.178 } 00:13:00.178 ], 00:13:00.178 "driver_specific": {} 00:13:00.178 } 00:13:00.178 ] 00:13:00.178 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.178 11:23:18 -- common/autotest_common.sh@905 -- # return 0 00:13:00.178 11:23:18 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:00.178 11:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.178 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 11:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.178 11:23:18 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:00.178 11:23:18 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:00.437 Running I/O for 5 seconds... 00:13:01.372 11:23:19 -- bdev/blockdev.sh@485 -- # kill -0 78427 00:13:01.372 Process is existed as continue on error is set. Pid: 78427 00:13:01.372 11:23:19 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 78427' 00:13:01.372 11:23:19 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:01.372 11:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.372 11:23:19 -- common/autotest_common.sh@10 -- # set +x 00:13:01.372 11:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.372 11:23:19 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:01.372 11:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.372 11:23:19 -- common/autotest_common.sh@10 -- # set +x 00:13:01.372 11:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.372 11:23:19 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:01.372 Timeout while waiting for response: 00:13:01.372 00:13:01.372 00:13:05.559 00:13:05.559 Latency(us) 00:13:05.559 [2024-11-26T11:23:23.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.559 [2024-11-26T11:23:23.789Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:05.559 EE_Dev_1 : 0.89 33455.28 130.68 5.60 0.00 474.77 159.19 1295.83 00:13:05.559 [2024-11-26T11:23:23.789Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:05.559 Dev_2 : 5.00 75303.68 294.15 0.00 0.00 209.11 77.27 13464.67 00:13:05.559 [2024-11-26T11:23:23.789Z] =================================================================================================================== 00:13:05.559 [2024-11-26T11:23:23.789Z] Total : 108758.96 424.84 5.60 0.00 228.63 77.27 13464.67 00:13:06.495 11:23:24 -- bdev/blockdev.sh@497 -- # killprocess 78427 00:13:06.495 11:23:24 -- common/autotest_common.sh@936 -- # '[' -z 78427 ']' 00:13:06.495 11:23:24 -- common/autotest_common.sh@940 -- # kill -0 78427 00:13:06.495 11:23:24 -- common/autotest_common.sh@941 -- # uname 00:13:06.495 11:23:24 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:06.495 11:23:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78427 00:13:06.495 killing process with pid 78427 00:13:06.495 Received shutdown signal, test time was about 5.000000 seconds 00:13:06.495 00:13:06.495 Latency(us) 00:13:06.495 [2024-11-26T11:23:24.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.495 [2024-11-26T11:23:24.725Z] =================================================================================================================== 00:13:06.495 [2024-11-26T11:23:24.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.495 11:23:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:06.495 11:23:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:06.495 11:23:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78427' 00:13:06.495 11:23:24 -- common/autotest_common.sh@955 -- # kill 78427 00:13:06.495 11:23:24 -- common/autotest_common.sh@960 -- # wait 78427 00:13:06.495 Process error testing pid: 78517 00:13:06.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.495 11:23:24 -- bdev/blockdev.sh@501 -- # ERR_PID=78517 00:13:06.495 11:23:24 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 78517' 00:13:06.495 11:23:24 -- bdev/blockdev.sh@503 -- # waitforlisten 78517 00:13:06.495 11:23:24 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:06.495 11:23:24 -- common/autotest_common.sh@829 -- # '[' -z 78517 ']' 00:13:06.495 11:23:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.495 11:23:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.495 11:23:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.495 11:23:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.495 11:23:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.495 [2024-11-26 11:23:24.696155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:06.495 [2024-11-26 11:23:24.696346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78517 ] 00:13:06.753 [2024-11-26 11:23:24.866129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.753 [2024-11-26 11:23:24.903432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.689 11:23:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.689 11:23:25 -- common/autotest_common.sh@862 -- # return 0 00:13:07.689 11:23:25 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 Dev_1 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:07.689 11:23:25 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:07.689 11:23:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:07.689 11:23:25 -- common/autotest_common.sh@899 -- # local i 00:13:07.689 11:23:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:07.689 11:23:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:07.689 11:23:25 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 [ 00:13:07.689 { 00:13:07.689 "name": "Dev_1", 00:13:07.689 "aliases": [ 00:13:07.689 "c2293b7a-b6da-4f18-8151-d7171d6b7648" 00:13:07.689 ], 00:13:07.689 "product_name": "Malloc disk", 00:13:07.689 "block_size": 512, 00:13:07.689 "num_blocks": 262144, 00:13:07.689 "uuid": "c2293b7a-b6da-4f18-8151-d7171d6b7648", 00:13:07.689 "assigned_rate_limits": { 00:13:07.689 "rw_ios_per_sec": 0, 00:13:07.689 "rw_mbytes_per_sec": 0, 00:13:07.689 "r_mbytes_per_sec": 0, 00:13:07.689 "w_mbytes_per_sec": 0 00:13:07.689 }, 00:13:07.689 "claimed": false, 00:13:07.689 "zoned": false, 00:13:07.689 "supported_io_types": { 00:13:07.689 "read": true, 00:13:07.689 "write": true, 00:13:07.689 "unmap": true, 00:13:07.689 "write_zeroes": true, 00:13:07.689 "flush": true, 00:13:07.689 "reset": true, 00:13:07.689 "compare": false, 00:13:07.689 "compare_and_write": false, 00:13:07.689 "abort": true, 00:13:07.689 "nvme_admin": false, 00:13:07.689 "nvme_io": false 00:13:07.689 }, 00:13:07.689 "memory_domains": [ 00:13:07.689 { 00:13:07.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.689 "dma_device_type": 2 00:13:07.689 } 00:13:07.689 ], 00:13:07.689 "driver_specific": {} 00:13:07.689 } 00:13:07.689 ] 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- common/autotest_common.sh@905 -- # return 0 00:13:07.689 11:23:25 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 true 
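Before injecting failures, this second error run assembles the same bdev stack as the first: a malloc base device wrapped by an error bdev, plus an uninstrumented control device. Summarized in rpc form below; rpc_cmd is the suite's wrapper around scripts/rpc.py, and all four calls and device names appear verbatim in the surrounding trace.

rpc_cmd bdev_malloc_create -b Dev_1 128 512                 # 128 MiB base disk, 512 B blocks
rpc_cmd bdev_error_create Dev_1                             # exposes injectable EE_Dev_1 on top of Dev_1
rpc_cmd bdev_malloc_create -b Dev_2 128 512                 # control device, no injection
rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type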
00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 Dev_2 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:07.689 11:23:25 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:07.689 11:23:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:07.689 11:23:25 -- common/autotest_common.sh@899 -- # local i 00:13:07.689 11:23:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:07.689 11:23:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:07.689 11:23:25 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 [ 00:13:07.689 { 00:13:07.689 "name": "Dev_2", 00:13:07.689 "aliases": [ 00:13:07.689 "04d9b767-a2cc-4331-8596-ed22e6c58c98" 00:13:07.689 ], 00:13:07.689 "product_name": "Malloc disk", 00:13:07.689 "block_size": 512, 00:13:07.689 "num_blocks": 262144, 00:13:07.689 "uuid": "04d9b767-a2cc-4331-8596-ed22e6c58c98", 00:13:07.689 "assigned_rate_limits": { 00:13:07.689 "rw_ios_per_sec": 0, 00:13:07.689 "rw_mbytes_per_sec": 0, 00:13:07.689 "r_mbytes_per_sec": 0, 00:13:07.689 "w_mbytes_per_sec": 0 00:13:07.689 }, 00:13:07.689 "claimed": false, 00:13:07.689 "zoned": false, 00:13:07.689 "supported_io_types": { 00:13:07.689 "read": true, 00:13:07.689 "write": true, 00:13:07.689 "unmap": true, 00:13:07.689 "write_zeroes": true, 00:13:07.689 "flush": true, 00:13:07.689 "reset": true, 00:13:07.689 "compare": false, 00:13:07.689 "compare_and_write": false, 00:13:07.689 "abort": true, 00:13:07.689 "nvme_admin": false, 00:13:07.689 "nvme_io": false 00:13:07.689 }, 00:13:07.689 "memory_domains": [ 00:13:07.689 { 00:13:07.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.689 "dma_device_type": 2 00:13:07.689 } 00:13:07.689 ], 00:13:07.689 "driver_specific": {} 00:13:07.689 } 00:13:07.689 ] 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- common/autotest_common.sh@905 -- # return 0 00:13:07.689 11:23:25 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:07.689 11:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.689 11:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 11:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.689 11:23:25 -- bdev/blockdev.sh@513 -- # NOT wait 78517 00:13:07.689 11:23:25 -- common/autotest_common.sh@650 -- # local es=0 00:13:07.689 11:23:25 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 78517 00:13:07.689 11:23:25 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:07.689 11:23:25 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:07.689 11:23:25 -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:13:07.689 11:23:25 -- common/autotest_common.sh@642 -- # type -t wait 00:13:07.689 11:23:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.689 11:23:25 -- common/autotest_common.sh@653 -- # wait 78517 00:13:07.949 Running I/O for 5 seconds... 00:13:07.949 task offset: 254936 on job bdev=EE_Dev_1 fails 00:13:07.949 00:13:07.949 Latency(us) 00:13:07.949 [2024-11-26T11:23:26.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.949 [2024-11-26T11:23:26.179Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:07.949 [2024-11-26T11:23:26.179Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:07.949 EE_Dev_1 : 0.00 22540.98 88.05 5122.95 0.00 482.54 169.43 848.99 00:13:07.949 [2024-11-26T11:23:26.179Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:07.949 Dev_2 : 0.00 16326.53 63.78 0.00 0.00 694.25 170.36 1273.48 00:13:07.949 [2024-11-26T11:23:26.179Z] =================================================================================================================== 00:13:07.949 [2024-11-26T11:23:26.179Z] Total : 38867.51 151.83 5122.95 0.00 597.37 169.43 1273.48 00:13:07.949 [2024-11-26 11:23:25.946486] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:07.949 request: 00:13:07.949 { 00:13:07.949 "method": "perform_tests", 00:13:07.949 "req_id": 1 00:13:07.949 } 00:13:07.949 Got JSON-RPC error response 00:13:07.949 response: 00:13:07.949 { 00:13:07.949 "code": -32603, 00:13:07.949 "message": "bdevperf failed with error Operation not permitted" 00:13:07.949 } 00:13:08.208 11:23:26 -- common/autotest_common.sh@653 -- # es=255 00:13:08.208 11:23:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.208 11:23:26 -- common/autotest_common.sh@662 -- # es=127 00:13:08.208 11:23:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:08.208 11:23:26 -- common/autotest_common.sh@670 -- # es=1 00:13:08.208 11:23:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.208 00:13:08.208 real 0m9.036s 00:13:08.208 user 0m9.584s 00:13:08.208 sys 0m0.624s 00:13:08.208 11:23:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:08.208 11:23:26 -- common/autotest_common.sh@10 -- # set +x 00:13:08.208 ************************************ 00:13:08.208 END TEST bdev_error 00:13:08.208 ************************************ 00:13:08.208 11:23:26 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:08.208 11:23:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:08.208 11:23:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.208 11:23:26 -- common/autotest_common.sh@10 -- # set +x 00:13:08.208 ************************************ 00:13:08.208 START TEST bdev_stat 00:13:08.208 ************************************ 00:13:08.208 11:23:26 -- common/autotest_common.sh@1114 -- # stat_test_suite '' 00:13:08.208 11:23:26 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:08.208 11:23:26 -- bdev/blockdev.sh@594 -- # STAT_PID=78559 00:13:08.208 Process Bdev IO statistics testing pid: 78559 00:13:08.208 11:23:26 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 78559' 00:13:08.208 11:23:26 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:08.208 11:23:26 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 
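The es=255 -> es=127 -> es=1 sequence traced above is the suite's expected-failure wrapper: NOT runs the wrapped command, folds the raw exit status down, and succeeds only when that command failed, which is why the -32603 "bdevperf failed with error Operation not permitted" response still counts as a pass. A simplified sketch reconstructed from the branches visible in this trace follows; the real helper in common/autotest_common.sh distinguishes more exit codes, so treat this as an assumption-laden outline.

NOT() {
    local es=0
    "$@" || es=$?              # capture the wrapped command's exit status
    (( es > 128 )) && es=127   # traced above: es=255 is folded to 127
    [ "$es" -ne 0 ] && es=1    # the traced case statement normalizes failures to 1
    (( !es == 0 ))             # arithmetic truth: return success only if es != 0
}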
00:13:08.208 11:23:26 -- bdev/blockdev.sh@597 -- # waitforlisten 78559 00:13:08.208 11:23:26 -- common/autotest_common.sh@829 -- # '[' -z 78559 ']' 00:13:08.208 11:23:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.208 11:23:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.208 11:23:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.208 11:23:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.208 11:23:26 -- common/autotest_common.sh@10 -- # set +x 00:13:08.208 [2024-11-26 11:23:26.332524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:08.208 [2024-11-26 11:23:26.332724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78559 ] 00:13:08.467 [2024-11-26 11:23:26.495867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.467 [2024-11-26 11:23:26.541245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.467 [2024-11-26 11:23:26.541272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.403 11:23:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.403 11:23:27 -- common/autotest_common.sh@862 -- # return 0 00:13:09.403 11:23:27 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:09.403 11:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.403 11:23:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.403 Malloc_STAT 00:13:09.403 11:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.403 11:23:27 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:09.403 11:23:27 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:13:09.403 11:23:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:09.403 11:23:27 -- common/autotest_common.sh@899 -- # local i 00:13:09.403 11:23:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:09.403 11:23:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:09.403 11:23:27 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:09.403 11:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.403 11:23:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.403 11:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.403 11:23:27 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:09.403 11:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.403 11:23:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.403 [ 00:13:09.403 { 00:13:09.403 "name": "Malloc_STAT", 00:13:09.403 "aliases": [ 00:13:09.403 "0eed0b15-9910-40e3-90d5-069d3a8e565b" 00:13:09.403 ], 00:13:09.403 "product_name": "Malloc disk", 00:13:09.403 "block_size": 512, 00:13:09.403 "num_blocks": 262144, 00:13:09.403 "uuid": "0eed0b15-9910-40e3-90d5-069d3a8e565b", 00:13:09.403 "assigned_rate_limits": { 00:13:09.403 "rw_ios_per_sec": 0, 00:13:09.403 "rw_mbytes_per_sec": 0, 00:13:09.403 "r_mbytes_per_sec": 0, 00:13:09.403 "w_mbytes_per_sec": 0 00:13:09.403 }, 00:13:09.403 "claimed": false, 00:13:09.403 "zoned": false, 00:13:09.403 
"supported_io_types": { 00:13:09.403 "read": true, 00:13:09.403 "write": true, 00:13:09.403 "unmap": true, 00:13:09.403 "write_zeroes": true, 00:13:09.403 "flush": true, 00:13:09.403 "reset": true, 00:13:09.403 "compare": false, 00:13:09.403 "compare_and_write": false, 00:13:09.403 "abort": true, 00:13:09.403 "nvme_admin": false, 00:13:09.403 "nvme_io": false 00:13:09.403 }, 00:13:09.403 "memory_domains": [ 00:13:09.403 { 00:13:09.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.403 "dma_device_type": 2 00:13:09.403 } 00:13:09.403 ], 00:13:09.403 "driver_specific": {} 00:13:09.403 } 00:13:09.403 ] 00:13:09.403 11:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.403 11:23:27 -- common/autotest_common.sh@905 -- # return 0 00:13:09.403 11:23:27 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:09.403 11:23:27 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.403 Running I/O for 10 seconds... 00:13:11.303 11:23:29 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:11.303 11:23:29 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:11.303 11:23:29 -- bdev/blockdev.sh@558 -- # local iostats 00:13:11.303 11:23:29 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:11.303 11:23:29 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:11.303 11:23:29 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:11.303 11:23:29 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:11.303 11:23:29 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:11.303 11:23:29 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:11.303 11:23:29 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:11.303 11:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.303 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.304 11:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.304 11:23:29 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:11.304 "tick_rate": 2200000000, 00:13:11.304 "ticks": 1587103700957, 00:13:11.304 "bdevs": [ 00:13:11.304 { 00:13:11.304 "name": "Malloc_STAT", 00:13:11.304 "bytes_read": 799052288, 00:13:11.304 "num_read_ops": 195075, 00:13:11.304 "bytes_written": 0, 00:13:11.304 "num_write_ops": 0, 00:13:11.304 "bytes_unmapped": 0, 00:13:11.304 "num_unmap_ops": 0, 00:13:11.304 "bytes_copied": 0, 00:13:11.304 "num_copy_ops": 0, 00:13:11.304 "read_latency_ticks": 2123402891756, 00:13:11.304 "max_read_latency_ticks": 12476980, 00:13:11.304 "min_read_latency_ticks": 379304, 00:13:11.304 "write_latency_ticks": 0, 00:13:11.304 "max_write_latency_ticks": 0, 00:13:11.304 "min_write_latency_ticks": 0, 00:13:11.304 "unmap_latency_ticks": 0, 00:13:11.304 "max_unmap_latency_ticks": 0, 00:13:11.304 "min_unmap_latency_ticks": 0, 00:13:11.304 "copy_latency_ticks": 0, 00:13:11.304 "max_copy_latency_ticks": 0, 00:13:11.304 "min_copy_latency_ticks": 0, 00:13:11.304 "io_error": {} 00:13:11.304 } 00:13:11.304 ] 00:13:11.304 }' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@567 -- # io_count1=195075 00:13:11.304 11:23:29 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:11.304 11:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.304 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.304 11:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.304 11:23:29 -- bdev/blockdev.sh@569 -- # 
iostats_per_channel='{ 00:13:11.304 "tick_rate": 2200000000, 00:13:11.304 "ticks": 1587171138417, 00:13:11.304 "name": "Malloc_STAT", 00:13:11.304 "channels": [ 00:13:11.304 { 00:13:11.304 "thread_id": 2, 00:13:11.304 "bytes_read": 401604608, 00:13:11.304 "num_read_ops": 98048, 00:13:11.304 "bytes_written": 0, 00:13:11.304 "num_write_ops": 0, 00:13:11.304 "bytes_unmapped": 0, 00:13:11.304 "num_unmap_ops": 0, 00:13:11.304 "bytes_copied": 0, 00:13:11.304 "num_copy_ops": 0, 00:13:11.304 "read_latency_ticks": 1078820273474, 00:13:11.304 "max_read_latency_ticks": 12476980, 00:13:11.304 "min_read_latency_ticks": 8813740, 00:13:11.304 "write_latency_ticks": 0, 00:13:11.304 "max_write_latency_ticks": 0, 00:13:11.304 "min_write_latency_ticks": 0, 00:13:11.304 "unmap_latency_ticks": 0, 00:13:11.304 "max_unmap_latency_ticks": 0, 00:13:11.304 "min_unmap_latency_ticks": 0, 00:13:11.304 "copy_latency_ticks": 0, 00:13:11.304 "max_copy_latency_ticks": 0, 00:13:11.304 "min_copy_latency_ticks": 0 00:13:11.304 }, 00:13:11.304 { 00:13:11.304 "thread_id": 3, 00:13:11.304 "bytes_read": 411041792, 00:13:11.304 "num_read_ops": 100352, 00:13:11.304 "bytes_written": 0, 00:13:11.304 "num_write_ops": 0, 00:13:11.304 "bytes_unmapped": 0, 00:13:11.304 "num_unmap_ops": 0, 00:13:11.304 "bytes_copied": 0, 00:13:11.304 "num_copy_ops": 0, 00:13:11.304 "read_latency_ticks": 1081193518830, 00:13:11.304 "max_read_latency_ticks": 12243298, 00:13:11.304 "min_read_latency_ticks": 8775864, 00:13:11.304 "write_latency_ticks": 0, 00:13:11.304 "max_write_latency_ticks": 0, 00:13:11.304 "min_write_latency_ticks": 0, 00:13:11.304 "unmap_latency_ticks": 0, 00:13:11.304 "max_unmap_latency_ticks": 0, 00:13:11.304 "min_unmap_latency_ticks": 0, 00:13:11.304 "copy_latency_ticks": 0, 00:13:11.304 "max_copy_latency_ticks": 0, 00:13:11.304 "min_copy_latency_ticks": 0 00:13:11.304 } 00:13:11.304 ] 00:13:11.304 }' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=98048 00:13:11.304 11:23:29 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=98048 00:13:11.304 11:23:29 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=100352 00:13:11.304 11:23:29 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=198400 00:13:11.304 11:23:29 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:11.304 11:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.304 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.304 11:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.304 11:23:29 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:11.304 "tick_rate": 2200000000, 00:13:11.304 "ticks": 1587271536124, 00:13:11.304 "bdevs": [ 00:13:11.304 { 00:13:11.304 "name": "Malloc_STAT", 00:13:11.304 "bytes_read": 831558144, 00:13:11.304 "num_read_ops": 203011, 00:13:11.304 "bytes_written": 0, 00:13:11.304 "num_write_ops": 0, 00:13:11.304 "bytes_unmapped": 0, 00:13:11.304 "num_unmap_ops": 0, 00:13:11.304 "bytes_copied": 0, 00:13:11.304 "num_copy_ops": 0, 00:13:11.304 "read_latency_ticks": 2210530492118, 00:13:11.304 "max_read_latency_ticks": 12476980, 00:13:11.304 "min_read_latency_ticks": 379304, 00:13:11.304 "write_latency_ticks": 0, 00:13:11.304 "max_write_latency_ticks": 0, 00:13:11.304 "min_write_latency_ticks": 0, 00:13:11.304 "unmap_latency_ticks": 0, 00:13:11.304 "max_unmap_latency_ticks": 0, 00:13:11.304 
"min_unmap_latency_ticks": 0, 00:13:11.304 "copy_latency_ticks": 0, 00:13:11.304 "max_copy_latency_ticks": 0, 00:13:11.304 "min_copy_latency_ticks": 0, 00:13:11.304 "io_error": {} 00:13:11.304 } 00:13:11.304 ] 00:13:11.304 }' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@576 -- # io_count2=203011 00:13:11.304 11:23:29 -- bdev/blockdev.sh@581 -- # '[' 198400 -lt 195075 ']' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@581 -- # '[' 198400 -gt 203011 ']' 00:13:11.304 11:23:29 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:11.304 11:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.304 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.304 00:13:11.304 Latency(us) 00:13:11.304 [2024-11-26T11:23:29.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.304 [2024-11-26T11:23:29.534Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:11.304 Malloc_STAT : 2.00 51073.31 199.51 0.00 0.00 4999.41 1489.45 5689.72 00:13:11.304 [2024-11-26T11:23:29.534Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:11.304 Malloc_STAT : 2.01 52207.62 203.94 0.00 0.00 4891.42 1221.35 5570.56 00:13:11.304 [2024-11-26T11:23:29.534Z] =================================================================================================================== 00:13:11.304 [2024-11-26T11:23:29.534Z] Total : 103280.93 403.44 0.00 0.00 4944.81 1221.35 5689.72 00:13:11.304 0 00:13:11.304 11:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.304 11:23:29 -- bdev/blockdev.sh@607 -- # killprocess 78559 00:13:11.304 11:23:29 -- common/autotest_common.sh@936 -- # '[' -z 78559 ']' 00:13:11.304 11:23:29 -- common/autotest_common.sh@940 -- # kill -0 78559 00:13:11.304 11:23:29 -- common/autotest_common.sh@941 -- # uname 00:13:11.304 11:23:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.304 11:23:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78559 00:13:11.572 11:23:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:11.572 killing process with pid 78559 00:13:11.572 11:23:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:11.572 11:23:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78559' 00:13:11.572 Received shutdown signal, test time was about 2.060470 seconds 00:13:11.572 00:13:11.572 Latency(us) 00:13:11.572 [2024-11-26T11:23:29.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.572 [2024-11-26T11:23:29.802Z] =================================================================================================================== 00:13:11.572 [2024-11-26T11:23:29.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.572 11:23:29 -- common/autotest_common.sh@955 -- # kill 78559 00:13:11.572 11:23:29 -- common/autotest_common.sh@960 -- # wait 78559 00:13:11.572 11:23:29 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:11.572 00:13:11.572 real 0m3.499s 00:13:11.572 user 0m6.957s 00:13:11.572 sys 0m0.345s 00:13:11.572 11:23:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:11.572 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.572 ************************************ 00:13:11.572 END TEST bdev_stat 00:13:11.572 ************************************ 00:13:11.831 11:23:29 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 
00:13:11.831 11:23:29 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:11.831 11:23:29 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:11.831 11:23:29 -- bdev/blockdev.sh@809 -- # cleanup 00:13:11.831 11:23:29 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:11.831 11:23:29 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:11.831 11:23:29 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:11.831 11:23:29 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:11.831 11:23:29 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:11.831 11:23:29 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:11.831 00:13:11.831 real 1m51.337s 00:13:11.831 user 5m12.128s 00:13:11.831 sys 0m20.165s 00:13:11.831 11:23:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:11.831 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.831 ************************************ 00:13:11.831 END TEST blockdev_general 00:13:11.831 ************************************ 00:13:11.831 11:23:29 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:11.831 11:23:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:11.831 11:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.831 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:13:11.831 ************************************ 00:13:11.831 START TEST bdev_raid 00:13:11.831 ************************************ 00:13:11.831 11:23:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:11.831 * Looking for test storage... 00:13:11.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:11.831 11:23:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:11.831 11:23:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:11.831 11:23:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:11.831 11:23:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:11.831 11:23:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:11.831 11:23:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:11.831 11:23:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:11.831 11:23:30 -- scripts/common.sh@335 -- # IFS=.-: 00:13:11.831 11:23:30 -- scripts/common.sh@335 -- # read -ra ver1 00:13:11.831 11:23:30 -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.831 11:23:30 -- scripts/common.sh@336 -- # read -ra ver2 00:13:11.831 11:23:30 -- scripts/common.sh@337 -- # local 'op=<' 00:13:11.831 11:23:30 -- scripts/common.sh@339 -- # ver1_l=2 00:13:11.831 11:23:30 -- scripts/common.sh@340 -- # ver2_l=1 00:13:11.831 11:23:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:11.831 11:23:30 -- scripts/common.sh@343 -- # case "$op" in 00:13:11.831 11:23:30 -- scripts/common.sh@344 -- # : 1 00:13:11.831 11:23:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:11.831 11:23:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.831 11:23:30 -- scripts/common.sh@364 -- # decimal 1 00:13:11.831 11:23:30 -- scripts/common.sh@352 -- # local d=1 00:13:11.831 11:23:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.831 11:23:30 -- scripts/common.sh@354 -- # echo 1 00:13:11.831 11:23:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:11.831 11:23:30 -- scripts/common.sh@365 -- # decimal 2 00:13:11.831 11:23:30 -- scripts/common.sh@352 -- # local d=2 00:13:11.831 11:23:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.831 11:23:30 -- scripts/common.sh@354 -- # echo 2 00:13:11.831 11:23:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:11.831 11:23:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:11.831 11:23:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:11.831 11:23:30 -- scripts/common.sh@367 -- # return 0 00:13:11.831 11:23:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.831 11:23:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:11.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.831 --rc genhtml_branch_coverage=1 00:13:11.831 --rc genhtml_function_coverage=1 00:13:11.831 --rc genhtml_legend=1 00:13:11.831 --rc geninfo_all_blocks=1 00:13:11.831 --rc geninfo_unexecuted_blocks=1 00:13:11.831 00:13:11.831 ' 00:13:11.831 11:23:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:11.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.831 --rc genhtml_branch_coverage=1 00:13:11.831 --rc genhtml_function_coverage=1 00:13:11.831 --rc genhtml_legend=1 00:13:11.831 --rc geninfo_all_blocks=1 00:13:11.831 --rc geninfo_unexecuted_blocks=1 00:13:11.831 00:13:11.831 ' 00:13:11.831 11:23:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:11.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.831 --rc genhtml_branch_coverage=1 00:13:11.831 --rc genhtml_function_coverage=1 00:13:11.831 --rc genhtml_legend=1 00:13:11.832 --rc geninfo_all_blocks=1 00:13:11.832 --rc geninfo_unexecuted_blocks=1 00:13:11.832 00:13:11.832 ' 00:13:11.832 11:23:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:11.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.832 --rc genhtml_branch_coverage=1 00:13:11.832 --rc genhtml_function_coverage=1 00:13:11.832 --rc genhtml_legend=1 00:13:11.832 --rc geninfo_all_blocks=1 00:13:11.832 --rc geninfo_unexecuted_blocks=1 00:13:11.832 00:13:11.832 ' 00:13:11.832 11:23:30 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:11.832 11:23:30 -- bdev/nbd_common.sh@6 -- # set -e 00:13:11.832 11:23:30 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:11.832 11:23:30 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:12.091 11:23:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:12.091 11:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.091 11:23:30 -- 
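The scripts/common.sh trace above is a plain field-wise version comparison: both version strings are split on '.', '-' and ':' and compared field by field as integers, which is how 1.15 is judged older than 2 and the pre-2.x lcov option syntax gets selected. A condensed sketch of the same idea (the helper name is illustrative, not the script's own):

    version_lt() {   # true if dotted version $1 sorts before $2
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i f1 f2
        for (( i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++ )); do
            f1=${ver1[i]:-0}; f2=${ver2[i]:-0}   # missing fields compare as 0
            (( f1 < f2 )) && return 0
            (( f1 > f2 )) && return 1
        done
        return 1   # equal is not less-than
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi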
common/autotest_common.sh@10 -- # set +x 00:13:12.091 ************************************ 00:13:12.091 START TEST raid_function_test_raid0 00:13:12.091 ************************************ 00:13:12.091 11:23:30 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@86 -- # raid_pid=78698 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:12.091 Process raid pid: 78698 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 78698' 00:13:12.091 11:23:30 -- bdev/bdev_raid.sh@88 -- # waitforlisten 78698 /var/tmp/spdk-raid.sock 00:13:12.091 11:23:30 -- common/autotest_common.sh@829 -- # '[' -z 78698 ']' 00:13:12.091 11:23:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:12.091 11:23:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.091 11:23:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:12.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:12.091 11:23:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.091 11:23:30 -- common/autotest_common.sh@10 -- # set +x 00:13:12.091 [2024-11-26 11:23:30.147624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:12.091 [2024-11-26 11:23:30.147778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.091 [2024-11-26 11:23:30.311313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.349 [2024-11-26 11:23:30.350214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.349 [2024-11-26 11:23:30.386162] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.916 11:23:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.916 11:23:31 -- common/autotest_common.sh@862 -- # return 0 00:13:12.916 11:23:31 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:12.916 11:23:31 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:12.916 11:23:31 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:12.916 11:23:31 -- bdev/bdev_raid.sh@70 -- # cat 00:13:12.916 11:23:31 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:13.485 [2024-11-26 11:23:31.438340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:13.485 [2024-11-26 11:23:31.441505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:13.485 [2024-11-26 11:23:31.441640] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:13.485 [2024-11-26 11:23:31.441680] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:13.485 [2024-11-26 11:23:31.441915] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:13.485 Base_1 00:13:13.485 Base_2 00:13:13.485 
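Everything in raid_function_test goes through a dedicated RPC socket, /var/tmp/spdk-raid.sock. configure_raid_bdev cats the RPC commands into rpcs.txt and pipes the file to rpc.py; the file's contents are not shown in this excerpt, but an equivalent direct sequence would look like the sketch below (sizes are illustrative; two 32 MiB bdevs with 512-byte blocks are consistent with the 131072-block raid0 logged above):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc_py bdev_malloc_create 32 512 -b Base_1   # size/blocklen illustrative
    $rpc_py bdev_malloc_create 32 512 -b Base_2
    $rpc_py bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid

    # The test then reads the assembled bdev's name back before exporting it over nbd:
    $rpc_py bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)'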
[2024-11-26 11:23:31.442508] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:13.485 [2024-11-26 11:23:31.442563] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:13:13.485 [2024-11-26 11:23:31.442826] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.485 11:23:31 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:13.485 11:23:31 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:13.485 11:23:31 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:13.744 11:23:31 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:13.744 11:23:31 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:13.744 11:23:31 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@12 -- # local i 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.744 11:23:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:14.002 [2024-11-26 11:23:31.983129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:13:14.002 /dev/nbd0 00:13:14.002 11:23:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:14.002 11:23:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:14.002 11:23:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:14.002 11:23:32 -- common/autotest_common.sh@867 -- # local i 00:13:14.002 11:23:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:14.002 11:23:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:14.002 11:23:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:14.002 11:23:32 -- common/autotest_common.sh@871 -- # break 00:13:14.002 11:23:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:14.002 11:23:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:14.003 11:23:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.003 1+0 records in 00:13:14.003 1+0 records out 00:13:14.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866302 s, 4.7 MB/s 00:13:14.003 11:23:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.003 11:23:32 -- common/autotest_common.sh@884 -- # size=4096 00:13:14.003 11:23:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.003 11:23:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:14.003 11:23:32 -- common/autotest_common.sh@887 -- # return 0 00:13:14.003 11:23:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.003 11:23:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:14.003 11:23:32 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:14.003 11:23:32 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:14.003 11:23:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:14.261 { 00:13:14.261 "nbd_device": "/dev/nbd0", 00:13:14.261 "bdev_name": "raid" 00:13:14.261 } 00:13:14.261 ]' 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:14.261 { 00:13:14.261 "nbd_device": "/dev/nbd0", 00:13:14.261 "bdev_name": "raid" 00:13:14.261 } 00:13:14.261 ]' 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@65 -- # count=1 00:13:14.261 11:23:32 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:14.261 4096+0 records in 00:13:14.261 4096+0 records out 00:13:14.261 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0228939 s, 91.6 MB/s 00:13:14.261 11:23:32 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:14.521 4096+0 records in 00:13:14.521 4096+0 records out 00:13:14.521 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.324645 s, 6.5 MB/s 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:14.521 128+0 records in 00:13:14.521 128+0 records out 00:13:14.521 65536 bytes (66 kB, 64 KiB) copied, 0.000993202 
s, 66.0 MB/s 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:14.521 2035+0 records in 00:13:14.521 2035+0 records out 00:13:14.521 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00589432 s, 177 MB/s 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:14.521 456+0 records in 00:13:14.521 456+0 records out 00:13:14.521 233472 bytes (233 kB, 228 KiB) copied, 0.0011768 s, 198 MB/s 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:14.521 11:23:32 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:14.521 11:23:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:14.521 11:23:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:14.521 11:23:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.521 11:23:32 -- bdev/nbd_common.sh@51 -- # local i 00:13:14.521 11:23:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.521 11:23:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:15.089 [2024-11-26 11:23:33.020475] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@41 -- # break 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.089 11:23:33 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:15.089 11:23:33 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@65 -- # true 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@65 -- # count=0 00:13:15.348 11:23:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:15.348 11:23:33 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:15.348 11:23:33 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:15.348 11:23:33 -- bdev/bdev_raid.sh@111 -- # killprocess 78698 00:13:15.348 11:23:33 -- common/autotest_common.sh@936 -- # '[' -z 78698 ']' 00:13:15.348 11:23:33 -- common/autotest_common.sh@940 -- # kill -0 78698 00:13:15.348 11:23:33 -- common/autotest_common.sh@941 -- # uname 00:13:15.348 11:23:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.348 11:23:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78698 00:13:15.348 killing process with pid 78698 00:13:15.348 11:23:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.348 11:23:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.348 11:23:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78698' 00:13:15.348 11:23:33 -- common/autotest_common.sh@955 -- # kill 78698 00:13:15.348 [2024-11-26 11:23:33.398843] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.348 11:23:33 -- common/autotest_common.sh@960 -- # wait 78698 00:13:15.348 [2024-11-26 11:23:33.399016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.348 [2024-11-26 11:23:33.399091] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.348 [2024-11-26 11:23:33.399113] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:13:15.348 [2024-11-26 11:23:33.417848] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:15.607 00:13:15.607 real 0m3.544s 00:13:15.607 user 0m4.952s 00:13:15.607 sys 0m0.947s 00:13:15.607 11:23:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:15.607 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:13:15.607 ************************************ 00:13:15.607 END TEST raid_function_test_raid0 00:13:15.607 ************************************ 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:15.607 11:23:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:15.607 11:23:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.607 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:13:15.607 ************************************ 00:13:15.607 START TEST raid_function_test_concat 00:13:15.607 ************************************ 00:13:15.607 11:23:33 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@83 -- # local 
raid_bdev 00:13:15.607 Process raid pid: 78831 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@86 -- # raid_pid=78831 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 78831' 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@88 -- # waitforlisten 78831 /var/tmp/spdk-raid.sock 00:13:15.607 11:23:33 -- common/autotest_common.sh@829 -- # '[' -z 78831 ']' 00:13:15.607 11:23:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:15.607 11:23:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.607 11:23:33 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:15.607 11:23:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:15.607 11:23:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.607 11:23:33 -- common/autotest_common.sh@10 -- # set +x 00:13:15.607 [2024-11-26 11:23:33.757328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:15.608 [2024-11-26 11:23:33.757857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.866 [2024-11-26 11:23:33.931313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.866 [2024-11-26 11:23:33.969798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.866 [2024-11-26 11:23:34.005322] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.804 11:23:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.804 11:23:34 -- common/autotest_common.sh@862 -- # return 0 00:13:16.804 11:23:34 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:16.804 11:23:34 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:16.804 11:23:34 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:16.804 11:23:34 -- bdev/bdev_raid.sh@70 -- # cat 00:13:16.804 11:23:34 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:17.078 [2024-11-26 11:23:35.045115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:17.078 [2024-11-26 11:23:35.048966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:17.078 [2024-11-26 11:23:35.049317] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:17.078 [2024-11-26 11:23:35.049511] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:17.078 [2024-11-26 11:23:35.049799] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:17.078 [2024-11-26 11:23:35.050483] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:17.078 Base_1 00:13:17.078 Base_2 00:13:17.078 [2024-11-26 11:23:35.050690] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:13:17.078 [2024-11-26 11:23:35.051052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.078 11:23:35 -- bdev/bdev_raid.sh@77 -- # rm -rf 
/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:17.078 11:23:35 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:17.078 11:23:35 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:17.346 11:23:35 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:17.346 11:23:35 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:17.346 11:23:35 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@12 -- # local i 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.346 11:23:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:17.606 [2024-11-26 11:23:35.589755] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:13:17.606 /dev/nbd0 00:13:17.606 11:23:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.606 11:23:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.606 11:23:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:17.606 11:23:35 -- common/autotest_common.sh@867 -- # local i 00:13:17.606 11:23:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:17.606 11:23:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:17.606 11:23:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:17.606 11:23:35 -- common/autotest_common.sh@871 -- # break 00:13:17.606 11:23:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:17.606 11:23:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:17.606 11:23:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.606 1+0 records in 00:13:17.606 1+0 records out 00:13:17.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260582 s, 15.7 MB/s 00:13:17.606 11:23:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.606 11:23:35 -- common/autotest_common.sh@884 -- # size=4096 00:13:17.606 11:23:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.606 11:23:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:17.606 11:23:35 -- common/autotest_common.sh@887 -- # return 0 00:13:17.606 11:23:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.606 11:23:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.606 11:23:35 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:17.606 11:23:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:17.606 11:23:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:17.865 { 00:13:17.865 "nbd_device": "/dev/nbd0", 00:13:17.865 "bdev_name": "raid" 00:13:17.865 } 00:13:17.865 ]' 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@64 -- # echo '[ 
00:13:17.865 { 00:13:17.865 "nbd_device": "/dev/nbd0", 00:13:17.865 "bdev_name": "raid" 00:13:17.865 } 00:13:17.865 ]' 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@65 -- # count=1 00:13:17.865 11:23:35 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:17.865 4096+0 records in 00:13:17.865 4096+0 records out 00:13:17.865 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0253334 s, 82.8 MB/s 00:13:17.865 11:23:35 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:18.124 4096+0 records in 00:13:18.124 4096+0 records out 00:13:18.124 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.323419 s, 6.5 MB/s 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:18.124 128+0 records in 00:13:18.124 128+0 records out 00:13:18.124 65536 bytes (66 kB, 64 KiB) copied, 0.000399512 s, 164 MB/s 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=526336 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:18.124 2035+0 records in 00:13:18.124 2035+0 records out 00:13:18.124 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00569652 s, 183 MB/s 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:18.124 456+0 records in 00:13:18.124 456+0 records out 00:13:18.124 233472 bytes (233 kB, 228 KiB) copied, 0.00111432 s, 210 MB/s 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:18.124 11:23:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:18.383 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:18.383 11:23:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:18.383 11:23:36 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:18.383 11:23:36 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:18.383 11:23:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:18.383 11:23:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.383 11:23:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.383 11:23:36 -- bdev/nbd_common.sh@51 -- # local i 00:13:18.383 11:23:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.383 11:23:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:18.642 [2024-11-26 11:23:36.655997] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@41 -- # break 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.642 11:23:36 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:18.642 11:23:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:18.901 11:23:36 
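Each pass of the loop above verifies unmap semantics through the nbd export: zero a window of the local reference file, discard the matching byte range on /dev/nbd0, flush, and the full 2 MiB image must still compare equal, because the malloc base bdevs hand back zeroes for unmapped blocks. One iteration in isolation (the numbers are the third case from the log, blocks 321 through 776):

    blksize=512
    off_blk=321; num_blk=456
    unmap_off=$(( off_blk * blksize ))   # 164352
    unmap_len=$(( num_blk * blksize ))   # 233472

    # Zero the same window in the reference file kept on the local filesystem...
    dd if=/dev/zero of=/raidrandtest bs=$blksize seek=$off_blk count=$num_blk conv=notrunc
    # ...discard it on the raid bdev exported through nbd...
    blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    # ...and require a byte-for-byte match across the whole 4096-block device.
    cmp -b -n 2097152 /raidrandtest /dev/nbd0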
-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@65 -- # true 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@65 -- # count=0 00:13:18.901 11:23:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:18.901 11:23:36 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:18.901 11:23:36 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:18.901 11:23:36 -- bdev/bdev_raid.sh@111 -- # killprocess 78831 00:13:18.901 11:23:36 -- common/autotest_common.sh@936 -- # '[' -z 78831 ']' 00:13:18.901 11:23:36 -- common/autotest_common.sh@940 -- # kill -0 78831 00:13:18.901 11:23:36 -- common/autotest_common.sh@941 -- # uname 00:13:18.901 11:23:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.901 11:23:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78831 00:13:18.901 killing process with pid 78831 00:13:18.901 11:23:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.901 11:23:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.901 11:23:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78831' 00:13:18.901 11:23:36 -- common/autotest_common.sh@955 -- # kill 78831 00:13:18.901 [2024-11-26 11:23:36.997771] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.901 11:23:36 -- common/autotest_common.sh@960 -- # wait 78831 00:13:18.901 [2024-11-26 11:23:36.997900] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.901 [2024-11-26 11:23:36.997994] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.901 [2024-11-26 11:23:36.998013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:13:18.901 [2024-11-26 11:23:37.016432] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.160 ************************************ 00:13:19.160 END TEST raid_function_test_concat 00:13:19.160 ************************************ 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:19.160 00:13:19.160 real 0m3.542s 00:13:19.160 user 0m4.962s 00:13:19.160 sys 0m0.926s 00:13:19.160 11:23:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:19.160 11:23:37 -- common/autotest_common.sh@10 -- # set +x 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:13:19.160 11:23:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.160 11:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.160 11:23:37 -- common/autotest_common.sh@10 -- # set +x 00:13:19.160 ************************************ 00:13:19.160 START TEST raid0_resize_test 00:13:19.160 ************************************ 00:13:19.160 11:23:37 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:13:19.160 Process raid pid: 78969 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@301 -- # raid_pid=78969 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 78969' 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@303 -- # waitforlisten 
78969 /var/tmp/spdk-raid.sock 00:13:19.160 11:23:37 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:19.160 11:23:37 -- common/autotest_common.sh@829 -- # '[' -z 78969 ']' 00:13:19.160 11:23:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:19.160 11:23:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.160 11:23:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:19.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:19.160 11:23:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.160 11:23:37 -- common/autotest_common.sh@10 -- # set +x 00:13:19.160 [2024-11-26 11:23:37.360538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:19.160 [2024-11-26 11:23:37.360736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.419 [2024-11-26 11:23:37.533368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.419 [2024-11-26 11:23:37.573395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.419 [2024-11-26 11:23:37.609041] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.355 11:23:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.355 11:23:38 -- common/autotest_common.sh@862 -- # return 0 00:13:20.355 11:23:38 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:20.355 Base_1 00:13:20.355 11:23:38 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:20.613 Base_2 00:13:20.613 11:23:38 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:20.872 [2024-11-26 11:23:38.934653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:20.872 [2024-11-26 11:23:38.937224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:20.872 [2024-11-26 11:23:38.937347] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:20.872 [2024-11-26 11:23:38.937361] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:20.872 [2024-11-26 11:23:38.937495] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005450 00:13:20.872 [2024-11-26 11:23:38.937801] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:20.872 [2024-11-26 11:23:38.937826] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000006f80 00:13:20.872 [2024-11-26 11:23:38.938161] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.872 11:23:38 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:21.129 [2024-11-26 11:23:39.146739] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:21.129 [2024-11-26 
11:23:39.147008] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:21.129 true 00:13:21.129 11:23:39 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:21.129 11:23:39 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:13:21.386 [2024-11-26 11:23:39.402952] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.386 11:23:39 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:13:21.386 11:23:39 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:13:21.386 11:23:39 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:13:21.386 11:23:39 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:21.644 [2024-11-26 11:23:39.654845] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:21.644 [2024-11-26 11:23:39.655108] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:21.644 [2024-11-26 11:23:39.655352] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:13:21.644 [2024-11-26 11:23:39.655602] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:21.644 true 00:13:21.644 11:23:39 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:21.644 11:23:39 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:13:21.903 [2024-11-26 11:23:39.907122] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.903 11:23:39 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:13:21.904 11:23:39 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:13:21.904 11:23:39 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:13:21.904 11:23:39 -- bdev/bdev_raid.sh@332 -- # killprocess 78969 00:13:21.904 11:23:39 -- common/autotest_common.sh@936 -- # '[' -z 78969 ']' 00:13:21.904 11:23:39 -- common/autotest_common.sh@940 -- # kill -0 78969 00:13:21.904 11:23:39 -- common/autotest_common.sh@941 -- # uname 00:13:21.904 11:23:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.904 11:23:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78969 00:13:21.904 killing process with pid 78969 00:13:21.904 11:23:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:21.904 11:23:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:21.904 11:23:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78969' 00:13:21.904 11:23:39 -- common/autotest_common.sh@955 -- # kill 78969 00:13:21.904 [2024-11-26 11:23:39.962234] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.904 11:23:39 -- common/autotest_common.sh@960 -- # wait 78969 00:13:21.904 [2024-11-26 11:23:39.962373] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.904 [2024-11-26 11:23:39.962444] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.904 [2024-11-26 11:23:39.962460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Raid, state offline 00:13:21.904 [2024-11-26 11:23:39.963154] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@334 -- # return 
0 00:13:22.163 00:13:22.163 real 0m2.868s 00:13:22.163 user 0m4.453s 00:13:22.163 sys 0m0.443s 00:13:22.163 11:23:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.163 ************************************ 00:13:22.163 END TEST raid0_resize_test 00:13:22.163 ************************************ 00:13:22.163 11:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:22.163 11:23:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:22.163 11:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.163 11:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:22.163 ************************************ 00:13:22.163 START TEST raid_state_function_test 00:13:22.163 ************************************ 00:13:22.163 11:23:40 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:22.163 Process raid pid: 79042 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=79042 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 79042' 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:22.163 11:23:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 79042 /var/tmp/spdk-raid.sock 00:13:22.163 11:23:40 -- common/autotest_common.sh@829 -- # '[' -z 79042 ']' 00:13:22.163 11:23:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:22.163 11:23:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.163 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:22.163 11:23:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:22.163 11:23:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.163 11:23:40 -- common/autotest_common.sh@10 -- # set +x 00:13:22.163 [2024-11-26 11:23:40.268815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:22.163 [2024-11-26 11:23:40.268993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.421 [2024-11-26 11:23:40.425585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.421 [2024-11-26 11:23:40.464051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.421 [2024-11-26 11:23:40.498767] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.356 11:23:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.356 11:23:41 -- common/autotest_common.sh@862 -- # return 0 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:23.356 [2024-11-26 11:23:41.435527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.356 [2024-11-26 11:23:41.435584] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.356 [2024-11-26 11:23:41.435606] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.356 [2024-11-26 11:23:41.435635] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.356 11:23:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.615 11:23:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.615 "name": "Existed_Raid", 00:13:23.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.615 "strip_size_kb": 64, 00:13:23.615 "state": "configuring", 00:13:23.615 "raid_level": "raid0", 00:13:23.615 "superblock": false, 00:13:23.615 "num_base_bdevs": 2, 00:13:23.615 "num_base_bdevs_discovered": 0, 00:13:23.615 "num_base_bdevs_operational": 2, 00:13:23.615 "base_bdevs_list": [ 00:13:23.615 { 00:13:23.615 "name": "BaseBdev1", 00:13:23.615 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:23.615 "is_configured": false, 00:13:23.615 "data_offset": 0, 00:13:23.615 "data_size": 0 00:13:23.615 }, 00:13:23.615 { 00:13:23.615 "name": "BaseBdev2", 00:13:23.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.615 "is_configured": false, 00:13:23.615 "data_offset": 0, 00:13:23.615 "data_size": 0 00:13:23.615 } 00:13:23.615 ] 00:13:23.615 }' 00:13:23.615 11:23:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.615 11:23:41 -- common/autotest_common.sh@10 -- # set +x 00:13:23.873 11:23:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:24.132 [2024-11-26 11:23:42.211702] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.132 [2024-11-26 11:23:42.211774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:13:24.132 11:23:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:24.390 [2024-11-26 11:23:42.415751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:24.390 [2024-11-26 11:23:42.415813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:24.390 [2024-11-26 11:23:42.415839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.390 [2024-11-26 11:23:42.415852] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.390 11:23:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:24.648 [2024-11-26 11:23:42.626825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.648 BaseBdev1 00:13:24.648 11:23:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:24.648 11:23:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:24.648 11:23:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:24.648 11:23:42 -- common/autotest_common.sh@899 -- # local i 00:13:24.648 11:23:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:24.648 11:23:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:24.648 11:23:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.648 11:23:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:24.906 [ 00:13:24.906 { 00:13:24.906 "name": "BaseBdev1", 00:13:24.906 "aliases": [ 00:13:24.906 "b061c60c-e8b5-41e3-97bf-076cf0edbe89" 00:13:24.906 ], 00:13:24.906 "product_name": "Malloc disk", 00:13:24.906 "block_size": 512, 00:13:24.906 "num_blocks": 65536, 00:13:24.906 "uuid": "b061c60c-e8b5-41e3-97bf-076cf0edbe89", 00:13:24.906 "assigned_rate_limits": { 00:13:24.906 "rw_ios_per_sec": 0, 00:13:24.906 "rw_mbytes_per_sec": 0, 00:13:24.906 "r_mbytes_per_sec": 0, 00:13:24.906 "w_mbytes_per_sec": 0 00:13:24.906 }, 00:13:24.906 "claimed": true, 00:13:24.906 "claim_type": "exclusive_write", 00:13:24.906 "zoned": false, 00:13:24.906 "supported_io_types": { 00:13:24.906 "read": true, 00:13:24.906 "write": true, 00:13:24.906 "unmap": true, 00:13:24.906 "write_zeroes": 
true, 00:13:24.906 "flush": true, 00:13:24.906 "reset": true, 00:13:24.906 "compare": false, 00:13:24.906 "compare_and_write": false, 00:13:24.906 "abort": true, 00:13:24.906 "nvme_admin": false, 00:13:24.906 "nvme_io": false 00:13:24.906 }, 00:13:24.906 "memory_domains": [ 00:13:24.906 { 00:13:24.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.906 "dma_device_type": 2 00:13:24.906 } 00:13:24.906 ], 00:13:24.906 "driver_specific": {} 00:13:24.906 } 00:13:24.906 ] 00:13:24.906 11:23:43 -- common/autotest_common.sh@905 -- # return 0 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.906 11:23:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.164 11:23:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.164 "name": "Existed_Raid", 00:13:25.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.164 "strip_size_kb": 64, 00:13:25.164 "state": "configuring", 00:13:25.164 "raid_level": "raid0", 00:13:25.164 "superblock": false, 00:13:25.164 "num_base_bdevs": 2, 00:13:25.164 "num_base_bdevs_discovered": 1, 00:13:25.164 "num_base_bdevs_operational": 2, 00:13:25.164 "base_bdevs_list": [ 00:13:25.164 { 00:13:25.164 "name": "BaseBdev1", 00:13:25.164 "uuid": "b061c60c-e8b5-41e3-97bf-076cf0edbe89", 00:13:25.164 "is_configured": true, 00:13:25.164 "data_offset": 0, 00:13:25.164 "data_size": 65536 00:13:25.164 }, 00:13:25.164 { 00:13:25.164 "name": "BaseBdev2", 00:13:25.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.164 "is_configured": false, 00:13:25.164 "data_offset": 0, 00:13:25.164 "data_size": 0 00:13:25.164 } 00:13:25.164 ] 00:13:25.164 }' 00:13:25.164 11:23:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.164 11:23:43 -- common/autotest_common.sh@10 -- # set +x 00:13:25.730 11:23:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:25.730 [2024-11-26 11:23:43.847237] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.730 [2024-11-26 11:23:43.847338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:13:25.730 11:23:43 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:25.730 11:23:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:26.006 [2024-11-26 11:23:44.051426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.006 [2024-11-26 11:23:44.054008] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.006 [2024-11-26 11:23:44.054053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.006 11:23:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.266 11:23:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.266 "name": "Existed_Raid", 00:13:26.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.266 "strip_size_kb": 64, 00:13:26.266 "state": "configuring", 00:13:26.266 "raid_level": "raid0", 00:13:26.266 "superblock": false, 00:13:26.266 "num_base_bdevs": 2, 00:13:26.266 "num_base_bdevs_discovered": 1, 00:13:26.266 "num_base_bdevs_operational": 2, 00:13:26.266 "base_bdevs_list": [ 00:13:26.266 { 00:13:26.266 "name": "BaseBdev1", 00:13:26.266 "uuid": "b061c60c-e8b5-41e3-97bf-076cf0edbe89", 00:13:26.266 "is_configured": true, 00:13:26.266 "data_offset": 0, 00:13:26.266 "data_size": 65536 00:13:26.266 }, 00:13:26.266 { 00:13:26.266 "name": "BaseBdev2", 00:13:26.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.266 "is_configured": false, 00:13:26.266 "data_offset": 0, 00:13:26.266 "data_size": 0 00:13:26.266 } 00:13:26.266 ] 00:13:26.266 }' 00:13:26.266 11:23:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.266 11:23:44 -- common/autotest_common.sh@10 -- # set +x 00:13:26.523 11:23:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.782 [2024-11-26 11:23:44.819977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.782 [2024-11-26 11:23:44.820053] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:26.782 [2024-11-26 11:23:44.820098] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:26.782 [2024-11-26 11:23:44.820222] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:26.782 [2024-11-26 11:23:44.820642] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:26.782 [2024-11-26 11:23:44.820671] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:13:26.782 [2024-11-26 11:23:44.820955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:26.782 BaseBdev2 00:13:26.782 11:23:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:26.782 11:23:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:26.782 11:23:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:26.782 11:23:44 -- common/autotest_common.sh@899 -- # local i 00:13:26.782 11:23:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:26.782 11:23:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:26.782 11:23:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.040 11:23:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.040 [ 00:13:27.040 { 00:13:27.040 "name": "BaseBdev2", 00:13:27.040 "aliases": [ 00:13:27.040 "b99d9ab7-2e93-4ad4-bf1b-c578ba9294bb" 00:13:27.040 ], 00:13:27.040 "product_name": "Malloc disk", 00:13:27.040 "block_size": 512, 00:13:27.040 "num_blocks": 65536, 00:13:27.040 "uuid": "b99d9ab7-2e93-4ad4-bf1b-c578ba9294bb", 00:13:27.040 "assigned_rate_limits": { 00:13:27.040 "rw_ios_per_sec": 0, 00:13:27.040 "rw_mbytes_per_sec": 0, 00:13:27.040 "r_mbytes_per_sec": 0, 00:13:27.040 "w_mbytes_per_sec": 0 00:13:27.040 }, 00:13:27.040 "claimed": true, 00:13:27.040 "claim_type": "exclusive_write", 00:13:27.040 "zoned": false, 00:13:27.040 "supported_io_types": { 00:13:27.040 "read": true, 00:13:27.040 "write": true, 00:13:27.040 "unmap": true, 00:13:27.040 "write_zeroes": true, 00:13:27.040 "flush": true, 00:13:27.040 "reset": true, 00:13:27.040 "compare": false, 00:13:27.040 "compare_and_write": false, 00:13:27.040 "abort": true, 00:13:27.040 "nvme_admin": false, 00:13:27.040 "nvme_io": false 00:13:27.040 }, 00:13:27.040 "memory_domains": [ 00:13:27.040 { 00:13:27.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.040 "dma_device_type": 2 00:13:27.040 } 00:13:27.040 ], 00:13:27.040 "driver_specific": {} 00:13:27.040 } 00:13:27.040 ] 00:13:27.298 11:23:45 -- common/autotest_common.sh@905 -- # return 0 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.298 "name": "Existed_Raid", 00:13:27.298 "uuid": "cbe6eca9-6efc-480d-baa7-998e7538782e", 00:13:27.298 "strip_size_kb": 64, 00:13:27.298 "state": 
"online", 00:13:27.298 "raid_level": "raid0", 00:13:27.298 "superblock": false, 00:13:27.298 "num_base_bdevs": 2, 00:13:27.298 "num_base_bdevs_discovered": 2, 00:13:27.298 "num_base_bdevs_operational": 2, 00:13:27.298 "base_bdevs_list": [ 00:13:27.298 { 00:13:27.298 "name": "BaseBdev1", 00:13:27.298 "uuid": "b061c60c-e8b5-41e3-97bf-076cf0edbe89", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 }, 00:13:27.298 { 00:13:27.298 "name": "BaseBdev2", 00:13:27.298 "uuid": "b99d9ab7-2e93-4ad4-bf1b-c578ba9294bb", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 } 00:13:27.298 ] 00:13:27.298 }' 00:13:27.298 11:23:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.298 11:23:45 -- common/autotest_common.sh@10 -- # set +x 00:13:27.866 11:23:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:27.866 [2024-11-26 11:23:46.040594] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.866 [2024-11-26 11:23:46.040632] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.866 [2024-11-26 11:23:46.040704] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.866 11:23:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.124 11:23:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.124 "name": "Existed_Raid", 00:13:28.124 "uuid": "cbe6eca9-6efc-480d-baa7-998e7538782e", 00:13:28.124 "strip_size_kb": 64, 00:13:28.124 "state": "offline", 00:13:28.124 "raid_level": "raid0", 00:13:28.124 "superblock": false, 00:13:28.124 "num_base_bdevs": 2, 00:13:28.124 "num_base_bdevs_discovered": 1, 00:13:28.124 "num_base_bdevs_operational": 1, 00:13:28.124 "base_bdevs_list": [ 00:13:28.124 { 00:13:28.124 "name": null, 00:13:28.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.124 "is_configured": false, 00:13:28.124 "data_offset": 0, 00:13:28.124 "data_size": 65536 00:13:28.124 }, 00:13:28.124 { 00:13:28.124 "name": "BaseBdev2", 00:13:28.124 "uuid": "b99d9ab7-2e93-4ad4-bf1b-c578ba9294bb", 00:13:28.124 
"is_configured": true, 00:13:28.124 "data_offset": 0, 00:13:28.124 "data_size": 65536 00:13:28.124 } 00:13:28.124 ] 00:13:28.124 }' 00:13:28.125 11:23:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.125 11:23:46 -- common/autotest_common.sh@10 -- # set +x 00:13:28.383 11:23:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:28.383 11:23:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:28.383 11:23:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.383 11:23:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:28.640 11:23:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:28.640 11:23:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:28.640 11:23:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:28.897 [2024-11-26 11:23:47.064335] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:28.897 [2024-11-26 11:23:47.064395] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:13:28.897 11:23:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:28.897 11:23:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:28.897 11:23:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.898 11:23:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:29.155 11:23:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:29.155 11:23:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:29.155 11:23:47 -- bdev/bdev_raid.sh@287 -- # killprocess 79042 00:13:29.155 11:23:47 -- common/autotest_common.sh@936 -- # '[' -z 79042 ']' 00:13:29.155 11:23:47 -- common/autotest_common.sh@940 -- # kill -0 79042 00:13:29.155 11:23:47 -- common/autotest_common.sh@941 -- # uname 00:13:29.155 11:23:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:29.155 11:23:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79042 00:13:29.155 killing process with pid 79042 00:13:29.155 11:23:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:29.155 11:23:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:29.155 11:23:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79042' 00:13:29.155 11:23:47 -- common/autotest_common.sh@955 -- # kill 79042 00:13:29.155 [2024-11-26 11:23:47.378315] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.155 11:23:47 -- common/autotest_common.sh@960 -- # wait 79042 00:13:29.155 [2024-11-26 11:23:47.378384] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:29.413 00:13:29.413 real 0m7.349s 00:13:29.413 user 0m12.690s 00:13:29.413 sys 0m1.145s 00:13:29.413 11:23:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.413 11:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:29.413 ************************************ 00:13:29.413 END TEST raid_state_function_test 00:13:29.413 ************************************ 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:29.413 11:23:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:29.413 11:23:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:13:29.413 11:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:29.413 ************************************ 00:13:29.413 START TEST raid_state_function_test_sb 00:13:29.413 ************************************ 00:13:29.413 11:23:47 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:29.413 11:23:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:29.414 Process raid pid: 79316 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=79316 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 79316' 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:29.414 11:23:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 79316 /var/tmp/spdk-raid.sock 00:13:29.414 11:23:47 -- common/autotest_common.sh@829 -- # '[' -z 79316 ']' 00:13:29.414 11:23:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:29.414 11:23:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.414 11:23:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:29.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:29.414 11:23:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.414 11:23:47 -- common/autotest_common.sh@10 -- # set +x 00:13:29.672 [2024-11-26 11:23:47.674456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:29.672 [2024-11-26 11:23:47.674775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.672 [2024-11-26 11:23:47.834206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.672 [2024-11-26 11:23:47.869747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.672 [2024-11-26 11:23:47.902590] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.606 11:23:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.606 11:23:48 -- common/autotest_common.sh@862 -- # return 0 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:30.606 [2024-11-26 11:23:48.735638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.606 [2024-11-26 11:23:48.735719] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.606 [2024-11-26 11:23:48.735750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.606 [2024-11-26 11:23:48.735762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.606 11:23:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.864 11:23:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.864 "name": "Existed_Raid", 00:13:30.864 "uuid": "2e5590a3-7f2a-42ee-9b47-99cac321486f", 00:13:30.864 "strip_size_kb": 64, 00:13:30.864 "state": "configuring", 00:13:30.864 "raid_level": "raid0", 00:13:30.864 "superblock": true, 00:13:30.864 "num_base_bdevs": 2, 00:13:30.864 "num_base_bdevs_discovered": 0, 00:13:30.864 "num_base_bdevs_operational": 2, 00:13:30.864 "base_bdevs_list": [ 00:13:30.864 { 00:13:30.864 "name": "BaseBdev1", 00:13:30.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.864 "is_configured": false, 00:13:30.864 "data_offset": 0, 00:13:30.864 "data_size": 0 00:13:30.864 }, 00:13:30.864 { 00:13:30.864 "name": "BaseBdev2", 00:13:30.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.864 "is_configured": false, 00:13:30.864 "data_offset": 0, 00:13:30.864 "data_size": 0 00:13:30.864 } 00:13:30.864 ] 00:13:30.864 }' 00:13:30.864 11:23:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.864 11:23:49 -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.121 11:23:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:31.378 [2024-11-26 11:23:49.519714] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.378 [2024-11-26 11:23:49.520007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:13:31.378 11:23:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:31.636 [2024-11-26 11:23:49.771862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.636 [2024-11-26 11:23:49.772141] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.636 [2024-11-26 11:23:49.772286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.636 [2024-11-26 11:23:49.772343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.636 11:23:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.893 [2024-11-26 11:23:49.998589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.893 BaseBdev1 00:13:31.893 11:23:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:31.893 11:23:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:31.893 11:23:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.893 11:23:50 -- common/autotest_common.sh@899 -- # local i 00:13:31.893 11:23:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.893 11:23:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.893 11:23:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:32.151 11:23:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:32.409 [ 00:13:32.409 { 00:13:32.409 "name": "BaseBdev1", 00:13:32.409 "aliases": [ 00:13:32.410 "03f5d261-7a5b-4616-b9ed-cd2c76b247ed" 00:13:32.410 ], 00:13:32.410 "product_name": "Malloc disk", 00:13:32.410 "block_size": 512, 00:13:32.410 "num_blocks": 65536, 00:13:32.410 "uuid": "03f5d261-7a5b-4616-b9ed-cd2c76b247ed", 00:13:32.410 "assigned_rate_limits": { 00:13:32.410 "rw_ios_per_sec": 0, 00:13:32.410 "rw_mbytes_per_sec": 0, 00:13:32.410 "r_mbytes_per_sec": 0, 00:13:32.410 "w_mbytes_per_sec": 0 00:13:32.410 }, 00:13:32.410 "claimed": true, 00:13:32.410 "claim_type": "exclusive_write", 00:13:32.410 "zoned": false, 00:13:32.410 "supported_io_types": { 00:13:32.410 "read": true, 00:13:32.410 "write": true, 00:13:32.410 "unmap": true, 00:13:32.410 "write_zeroes": true, 00:13:32.410 "flush": true, 00:13:32.410 "reset": true, 00:13:32.410 "compare": false, 00:13:32.410 "compare_and_write": false, 00:13:32.410 "abort": true, 00:13:32.410 "nvme_admin": false, 00:13:32.410 "nvme_io": false 00:13:32.410 }, 00:13:32.410 "memory_domains": [ 00:13:32.410 { 00:13:32.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.410 "dma_device_type": 2 00:13:32.410 } 00:13:32.410 ], 00:13:32.410 "driver_specific": {} 00:13:32.410 } 00:13:32.410 ] 00:13:32.410 
11:23:50 -- common/autotest_common.sh@905 -- # return 0 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.410 11:23:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.668 11:23:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:32.668 "name": "Existed_Raid", 00:13:32.668 "uuid": "7f10bc57-e7cb-42e7-b848-58e8fc99e25b", 00:13:32.668 "strip_size_kb": 64, 00:13:32.668 "state": "configuring", 00:13:32.668 "raid_level": "raid0", 00:13:32.668 "superblock": true, 00:13:32.668 "num_base_bdevs": 2, 00:13:32.668 "num_base_bdevs_discovered": 1, 00:13:32.668 "num_base_bdevs_operational": 2, 00:13:32.668 "base_bdevs_list": [ 00:13:32.668 { 00:13:32.668 "name": "BaseBdev1", 00:13:32.669 "uuid": "03f5d261-7a5b-4616-b9ed-cd2c76b247ed", 00:13:32.669 "is_configured": true, 00:13:32.669 "data_offset": 2048, 00:13:32.669 "data_size": 63488 00:13:32.669 }, 00:13:32.669 { 00:13:32.669 "name": "BaseBdev2", 00:13:32.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.669 "is_configured": false, 00:13:32.669 "data_offset": 0, 00:13:32.669 "data_size": 0 00:13:32.669 } 00:13:32.669 ] 00:13:32.669 }' 00:13:32.669 11:23:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:32.669 11:23:50 -- common/autotest_common.sh@10 -- # set +x 00:13:32.927 11:23:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:33.186 [2024-11-26 11:23:51.243012] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:33.186 [2024-11-26 11:23:51.243291] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:13:33.186 11:23:51 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:33.186 11:23:51 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:33.454 11:23:51 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.728 BaseBdev1 00:13:33.728 11:23:51 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:33.728 11:23:51 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:33.728 11:23:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:33.728 11:23:51 -- common/autotest_common.sh@899 -- # local i 00:13:33.728 11:23:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:33.728 11:23:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:33.728 11:23:51 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:33.986 11:23:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.986 [ 00:13:33.986 { 00:13:33.986 "name": "BaseBdev1", 00:13:33.986 "aliases": [ 00:13:33.986 "73cdc940-3cae-4b78-96d3-b509acf2a441" 00:13:33.986 ], 00:13:33.986 "product_name": "Malloc disk", 00:13:33.986 "block_size": 512, 00:13:33.986 "num_blocks": 65536, 00:13:33.986 "uuid": "73cdc940-3cae-4b78-96d3-b509acf2a441", 00:13:33.986 "assigned_rate_limits": { 00:13:33.986 "rw_ios_per_sec": 0, 00:13:33.986 "rw_mbytes_per_sec": 0, 00:13:33.986 "r_mbytes_per_sec": 0, 00:13:33.986 "w_mbytes_per_sec": 0 00:13:33.986 }, 00:13:33.986 "claimed": false, 00:13:33.986 "zoned": false, 00:13:33.986 "supported_io_types": { 00:13:33.986 "read": true, 00:13:33.986 "write": true, 00:13:33.986 "unmap": true, 00:13:33.986 "write_zeroes": true, 00:13:33.986 "flush": true, 00:13:33.986 "reset": true, 00:13:33.986 "compare": false, 00:13:33.986 "compare_and_write": false, 00:13:33.986 "abort": true, 00:13:33.986 "nvme_admin": false, 00:13:33.986 "nvme_io": false 00:13:33.986 }, 00:13:33.986 "memory_domains": [ 00:13:33.986 { 00:13:33.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.986 "dma_device_type": 2 00:13:33.986 } 00:13:33.986 ], 00:13:33.986 "driver_specific": {} 00:13:33.986 } 00:13:33.986 ] 00:13:33.986 11:23:52 -- common/autotest_common.sh@905 -- # return 0 00:13:33.987 11:23:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:34.246 [2024-11-26 11:23:52.380830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.246 [2024-11-26 11:23:52.383136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.246 [2024-11-26 11:23:52.383370] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.246 11:23:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.505 11:23:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.505 "name": "Existed_Raid", 00:13:34.505 "uuid": "4fa1bf67-8c26-4078-bf6e-2015d0f2efa9", 00:13:34.505 "strip_size_kb": 64, 00:13:34.505 "state": 
"configuring", 00:13:34.505 "raid_level": "raid0", 00:13:34.505 "superblock": true, 00:13:34.505 "num_base_bdevs": 2, 00:13:34.505 "num_base_bdevs_discovered": 1, 00:13:34.505 "num_base_bdevs_operational": 2, 00:13:34.505 "base_bdevs_list": [ 00:13:34.505 { 00:13:34.505 "name": "BaseBdev1", 00:13:34.505 "uuid": "73cdc940-3cae-4b78-96d3-b509acf2a441", 00:13:34.505 "is_configured": true, 00:13:34.505 "data_offset": 2048, 00:13:34.505 "data_size": 63488 00:13:34.505 }, 00:13:34.505 { 00:13:34.505 "name": "BaseBdev2", 00:13:34.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.505 "is_configured": false, 00:13:34.505 "data_offset": 0, 00:13:34.505 "data_size": 0 00:13:34.505 } 00:13:34.505 ] 00:13:34.505 }' 00:13:34.505 11:23:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.505 11:23:52 -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 11:23:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:35.023 [2024-11-26 11:23:53.158794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.023 [2024-11-26 11:23:53.159080] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:13:35.023 [2024-11-26 11:23:53.159119] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:35.023 [2024-11-26 11:23:53.159241] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:13:35.023 [2024-11-26 11:23:53.159643] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:13:35.023 [2024-11-26 11:23:53.159661] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:13:35.023 BaseBdev2 00:13:35.023 [2024-11-26 11:23:53.159802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.023 11:23:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:35.023 11:23:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:35.023 11:23:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:35.023 11:23:53 -- common/autotest_common.sh@899 -- # local i 00:13:35.023 11:23:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:35.023 11:23:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:35.023 11:23:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.282 11:23:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.541 [ 00:13:35.541 { 00:13:35.541 "name": "BaseBdev2", 00:13:35.541 "aliases": [ 00:13:35.541 "2f47f6b4-fa97-4737-9883-866090b23804" 00:13:35.541 ], 00:13:35.541 "product_name": "Malloc disk", 00:13:35.541 "block_size": 512, 00:13:35.541 "num_blocks": 65536, 00:13:35.541 "uuid": "2f47f6b4-fa97-4737-9883-866090b23804", 00:13:35.541 "assigned_rate_limits": { 00:13:35.541 "rw_ios_per_sec": 0, 00:13:35.541 "rw_mbytes_per_sec": 0, 00:13:35.541 "r_mbytes_per_sec": 0, 00:13:35.541 "w_mbytes_per_sec": 0 00:13:35.541 }, 00:13:35.541 "claimed": true, 00:13:35.541 "claim_type": "exclusive_write", 00:13:35.541 "zoned": false, 00:13:35.541 "supported_io_types": { 00:13:35.541 "read": true, 00:13:35.541 "write": true, 00:13:35.541 "unmap": true, 00:13:35.541 "write_zeroes": true, 00:13:35.541 "flush": true, 00:13:35.541 
"reset": true, 00:13:35.541 "compare": false, 00:13:35.541 "compare_and_write": false, 00:13:35.541 "abort": true, 00:13:35.541 "nvme_admin": false, 00:13:35.541 "nvme_io": false 00:13:35.541 }, 00:13:35.541 "memory_domains": [ 00:13:35.541 { 00:13:35.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.541 "dma_device_type": 2 00:13:35.541 } 00:13:35.541 ], 00:13:35.541 "driver_specific": {} 00:13:35.541 } 00:13:35.541 ] 00:13:35.541 11:23:53 -- common/autotest_common.sh@905 -- # return 0 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.541 11:23:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.800 11:23:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:35.800 "name": "Existed_Raid", 00:13:35.800 "uuid": "4fa1bf67-8c26-4078-bf6e-2015d0f2efa9", 00:13:35.800 "strip_size_kb": 64, 00:13:35.800 "state": "online", 00:13:35.800 "raid_level": "raid0", 00:13:35.800 "superblock": true, 00:13:35.800 "num_base_bdevs": 2, 00:13:35.800 "num_base_bdevs_discovered": 2, 00:13:35.800 "num_base_bdevs_operational": 2, 00:13:35.800 "base_bdevs_list": [ 00:13:35.800 { 00:13:35.800 "name": "BaseBdev1", 00:13:35.800 "uuid": "73cdc940-3cae-4b78-96d3-b509acf2a441", 00:13:35.800 "is_configured": true, 00:13:35.800 "data_offset": 2048, 00:13:35.800 "data_size": 63488 00:13:35.800 }, 00:13:35.800 { 00:13:35.800 "name": "BaseBdev2", 00:13:35.800 "uuid": "2f47f6b4-fa97-4737-9883-866090b23804", 00:13:35.800 "is_configured": true, 00:13:35.800 "data_offset": 2048, 00:13:35.800 "data_size": 63488 00:13:35.800 } 00:13:35.800 ] 00:13:35.800 }' 00:13:35.800 11:23:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:35.800 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.059 11:23:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:36.318 [2024-11-26 11:23:54.419350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.318 [2024-11-26 11:23:54.419391] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.318 [2024-11-26 11:23:54.419467] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:36.318 
11:23:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.318 11:23:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.577 11:23:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:36.577 "name": "Existed_Raid", 00:13:36.577 "uuid": "4fa1bf67-8c26-4078-bf6e-2015d0f2efa9", 00:13:36.577 "strip_size_kb": 64, 00:13:36.577 "state": "offline", 00:13:36.577 "raid_level": "raid0", 00:13:36.577 "superblock": true, 00:13:36.577 "num_base_bdevs": 2, 00:13:36.577 "num_base_bdevs_discovered": 1, 00:13:36.577 "num_base_bdevs_operational": 1, 00:13:36.577 "base_bdevs_list": [ 00:13:36.577 { 00:13:36.577 "name": null, 00:13:36.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.577 "is_configured": false, 00:13:36.577 "data_offset": 2048, 00:13:36.577 "data_size": 63488 00:13:36.577 }, 00:13:36.577 { 00:13:36.577 "name": "BaseBdev2", 00:13:36.577 "uuid": "2f47f6b4-fa97-4737-9883-866090b23804", 00:13:36.577 "is_configured": true, 00:13:36.577 "data_offset": 2048, 00:13:36.577 "data_size": 63488 00:13:36.577 } 00:13:36.577 ] 00:13:36.577 }' 00:13:36.577 11:23:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:36.577 11:23:54 -- common/autotest_common.sh@10 -- # set +x 00:13:36.835 11:23:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:36.835 11:23:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:36.835 11:23:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.835 11:23:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:37.094 11:23:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:37.094 11:23:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.094 11:23:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:37.352 [2024-11-26 11:23:55.399726] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.352 [2024-11-26 11:23:55.399804] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:13:37.352 11:23:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:37.352 11:23:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:37.352 11:23:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.352 11:23:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.612 11:23:55 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:13:37.612 11:23:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:37.612 11:23:55 -- bdev/bdev_raid.sh@287 -- # killprocess 79316 00:13:37.612 11:23:55 -- common/autotest_common.sh@936 -- # '[' -z 79316 ']' 00:13:37.612 11:23:55 -- common/autotest_common.sh@940 -- # kill -0 79316 00:13:37.612 11:23:55 -- common/autotest_common.sh@941 -- # uname 00:13:37.612 11:23:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.612 11:23:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79316 00:13:37.612 killing process with pid 79316 00:13:37.612 11:23:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:37.612 11:23:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:37.612 11:23:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79316' 00:13:37.612 11:23:55 -- common/autotest_common.sh@955 -- # kill 79316 00:13:37.612 [2024-11-26 11:23:55.706974] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.612 11:23:55 -- common/autotest_common.sh@960 -- # wait 79316 00:13:37.612 [2024-11-26 11:23:55.707090] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.871 ************************************ 00:13:37.871 END TEST raid_state_function_test_sb 00:13:37.871 ************************************ 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:37.871 00:13:37.871 real 0m8.275s 00:13:37.871 user 0m14.450s 00:13:37.871 sys 0m1.235s 00:13:37.871 11:23:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:37.871 11:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:37.871 11:23:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:37.871 11:23:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.871 11:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:37.871 ************************************ 00:13:37.871 START TEST raid_superblock_test 00:13:37.871 ************************************ 00:13:37.871 11:23:55 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:37.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@357 -- # raid_pid=79601 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@358 -- # waitforlisten 79601 /var/tmp/spdk-raid.sock 00:13:37.871 11:23:55 -- common/autotest_common.sh@829 -- # '[' -z 79601 ']' 00:13:37.871 11:23:55 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:37.871 11:23:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:37.871 11:23:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.871 11:23:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:37.871 11:23:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.871 11:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:37.871 [2024-11-26 11:23:55.999439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:37.871 [2024-11-26 11:23:55.999629] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79601 ] 00:13:38.130 [2024-11-26 11:23:56.157962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.130 [2024-11-26 11:23:56.192970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.130 [2024-11-26 11:23:56.227226] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.697 11:23:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.697 11:23:56 -- common/autotest_common.sh@862 -- # return 0 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.697 11:23:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:38.955 malloc1 00:13:38.956 11:23:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:39.214 [2024-11-26 11:23:57.302143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:39.214 [2024-11-26 11:23:57.302253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.214 [2024-11-26 11:23:57.302290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:13:39.214 [2024-11-26 11:23:57.302318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.214 [2024-11-26 11:23:57.304966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.214 [2024-11-26 11:23:57.305008] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:39.214 pt1 00:13:39.214 11:23:57 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:39.214 11:23:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:39.473 malloc2 00:13:39.473 11:23:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.731 [2024-11-26 11:23:57.804976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.731 [2024-11-26 11:23:57.805061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.731 [2024-11-26 11:23:57.805098] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:13:39.731 [2024-11-26 11:23:57.805113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.731 [2024-11-26 11:23:57.807703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.731 [2024-11-26 11:23:57.807750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.731 pt2 00:13:39.731 11:23:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:39.731 11:23:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:39.731 11:23:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:39.990 [2024-11-26 11:23:58.053032] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:39.990 [2024-11-26 11:23:58.055424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.990 [2024-11-26 11:23:58.055821] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:13:39.990 [2024-11-26 11:23:58.056027] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:39.990 [2024-11-26 11:23:58.056225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:39.990 [2024-11-26 11:23:58.056689] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:13:39.990 [2024-11-26 11:23:58.056873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:13:39.990 [2024-11-26 11:23:58.057219] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.990 11:23:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.249 11:23:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:40.249 "name": "raid_bdev1", 00:13:40.249 "uuid": "e14e2502-c18f-4b11-a4c4-c515c1985551", 00:13:40.249 "strip_size_kb": 64, 00:13:40.249 "state": "online", 00:13:40.249 "raid_level": "raid0", 00:13:40.249 "superblock": true, 00:13:40.249 "num_base_bdevs": 2, 00:13:40.249 "num_base_bdevs_discovered": 2, 00:13:40.249 "num_base_bdevs_operational": 2, 00:13:40.249 "base_bdevs_list": [ 00:13:40.249 { 00:13:40.249 "name": "pt1", 00:13:40.249 "uuid": "a27c74df-ad37-5834-8f02-7265b710225e", 00:13:40.249 "is_configured": true, 00:13:40.249 "data_offset": 2048, 00:13:40.249 "data_size": 63488 00:13:40.249 }, 00:13:40.249 { 00:13:40.249 "name": "pt2", 00:13:40.249 "uuid": "8c907789-1f61-5762-b86c-42bb8ba1de8d", 00:13:40.249 "is_configured": true, 00:13:40.249 "data_offset": 2048, 00:13:40.249 "data_size": 63488 00:13:40.249 } 00:13:40.249 ] 00:13:40.249 }' 00:13:40.249 11:23:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:40.249 11:23:58 -- common/autotest_common.sh@10 -- # set +x 00:13:40.508 11:23:58 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:40.508 11:23:58 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:40.766 [2024-11-26 11:23:58.821676] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.766 11:23:58 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e14e2502-c18f-4b11-a4c4-c515c1985551 00:13:40.766 11:23:58 -- bdev/bdev_raid.sh@380 -- # '[' -z e14e2502-c18f-4b11-a4c4-c515c1985551 ']' 00:13:40.766 11:23:58 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:41.025 [2024-11-26 11:23:59.037459] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.025 [2024-11-26 11:23:59.037514] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.025 [2024-11-26 11:23:59.037607] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.025 [2024-11-26 11:23:59.037672] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.025 [2024-11-26 11:23:59.037687] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:13:41.025 11:23:59 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.025 11:23:59 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:41.283 11:23:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:41.283 11:23:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:41.283 11:23:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.283 11:23:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:41.541 11:23:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.542 11:23:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:41.542 11:23:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:41.542 11:23:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:41.801 11:23:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:41.801 11:23:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:41.801 11:23:59 -- common/autotest_common.sh@650 -- # local es=0 00:13:41.801 11:23:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:41.801 11:23:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.801 11:23:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.801 11:23:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.801 11:23:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.801 11:24:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.801 11:23:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.801 11:24:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.801 11:24:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:41.801 11:24:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:42.060 [2024-11-26 11:24:00.193759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:42.060 [2024-11-26 11:24:00.196020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:42.060 [2024-11-26 11:24:00.196311] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:42.060 [2024-11-26 11:24:00.196392] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:42.060 [2024-11-26 11:24:00.196444] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.060 [2024-11-26 11:24:00.196457] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:13:42.060 request: 00:13:42.060 { 00:13:42.060 "name": "raid_bdev1", 00:13:42.060 "raid_level": "raid0", 00:13:42.060 "base_bdevs": [ 00:13:42.060 "malloc1", 00:13:42.060 "malloc2" 00:13:42.060 ], 00:13:42.060 "superblock": false, 00:13:42.060 "strip_size_kb": 64, 00:13:42.060 "method": "bdev_raid_create", 00:13:42.060 "req_id": 1 00:13:42.060 } 00:13:42.060 Got JSON-RPC error response 00:13:42.060 response: 00:13:42.060 { 00:13:42.060 "code": -17, 00:13:42.060 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:42.060 } 00:13:42.060 11:24:00 -- common/autotest_common.sh@653 -- # es=1 
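The -17 error above is the point of this test: malloc1 and malloc2 still carry the raid superblock written for the earlier raid_bdev1, so a second bdev_raid_create over them is rejected with "File exists". A minimal sketch of the same negative check, assuming a running SPDK target listening on /var/tmp/spdk-raid.sock and scripts/rpc.py from the SPDK repo on PATH:

    # The base bdevs already hold a raid superblock, so this create must
    # fail with JSON-RPC error -17 (File exists); treat success as a bug.
    if scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
           -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "unexpected: bdev_raid_create succeeded" >&2
        exit 1
    fi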
00:13:42.060 11:24:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.060 11:24:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.060 11:24:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.060 11:24:00 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.060 11:24:00 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:42.319 11:24:00 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:42.319 11:24:00 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:42.319 11:24:00 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.579 [2024-11-26 11:24:00.645836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.579 [2024-11-26 11:24:00.645945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.579 [2024-11-26 11:24:00.645980] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:13:42.579 [2024-11-26 11:24:00.646025] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.579 [2024-11-26 11:24:00.648600] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.579 [2024-11-26 11:24:00.648642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.579 [2024-11-26 11:24:00.648740] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:42.579 [2024-11-26 11:24:00.648783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.579 pt1 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.579 11:24:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.838 11:24:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:42.838 "name": "raid_bdev1", 00:13:42.838 "uuid": "e14e2502-c18f-4b11-a4c4-c515c1985551", 00:13:42.838 "strip_size_kb": 64, 00:13:42.838 "state": "configuring", 00:13:42.838 "raid_level": "raid0", 00:13:42.838 "superblock": true, 00:13:42.838 "num_base_bdevs": 2, 00:13:42.838 "num_base_bdevs_discovered": 1, 00:13:42.838 "num_base_bdevs_operational": 2, 00:13:42.838 "base_bdevs_list": [ 00:13:42.838 { 00:13:42.838 "name": "pt1", 00:13:42.838 "uuid": "a27c74df-ad37-5834-8f02-7265b710225e", 00:13:42.838 "is_configured": true, 00:13:42.838 "data_offset": 2048, 00:13:42.838 "data_size": 63488 00:13:42.838 }, 00:13:42.838 { 00:13:42.838 "name": null, 00:13:42.838 "uuid": 
"8c907789-1f61-5762-b86c-42bb8ba1de8d", 00:13:42.838 "is_configured": false, 00:13:42.838 "data_offset": 2048, 00:13:42.838 "data_size": 63488 00:13:42.838 } 00:13:42.838 ] 00:13:42.838 }' 00:13:42.838 11:24:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:42.838 11:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:43.098 11:24:01 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:13:43.098 11:24:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:43.098 11:24:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:43.098 11:24:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.356 [2024-11-26 11:24:01.490091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.356 [2024-11-26 11:24:01.490180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.356 [2024-11-26 11:24:01.490233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:13:43.356 [2024-11-26 11:24:01.490248] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.356 [2024-11-26 11:24:01.490692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.356 [2024-11-26 11:24:01.490717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.356 [2024-11-26 11:24:01.490792] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:43.356 [2024-11-26 11:24:01.490818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.356 [2024-11-26 11:24:01.490977] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:13:43.356 [2024-11-26 11:24:01.490993] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:43.356 [2024-11-26 11:24:01.491109] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:13:43.356 [2024-11-26 11:24:01.491481] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:13:43.356 [2024-11-26 11:24:01.491503] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:13:43.356 [2024-11-26 11:24:01.491632] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.356 pt2 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.356 11:24:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.614 11:24:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:43.614 "name": "raid_bdev1", 00:13:43.614 "uuid": "e14e2502-c18f-4b11-a4c4-c515c1985551", 00:13:43.614 "strip_size_kb": 64, 00:13:43.614 "state": "online", 00:13:43.614 "raid_level": "raid0", 00:13:43.614 "superblock": true, 00:13:43.614 "num_base_bdevs": 2, 00:13:43.614 "num_base_bdevs_discovered": 2, 00:13:43.614 "num_base_bdevs_operational": 2, 00:13:43.614 "base_bdevs_list": [ 00:13:43.614 { 00:13:43.614 "name": "pt1", 00:13:43.614 "uuid": "a27c74df-ad37-5834-8f02-7265b710225e", 00:13:43.614 "is_configured": true, 00:13:43.614 "data_offset": 2048, 00:13:43.614 "data_size": 63488 00:13:43.614 }, 00:13:43.614 { 00:13:43.614 "name": "pt2", 00:13:43.614 "uuid": "8c907789-1f61-5762-b86c-42bb8ba1de8d", 00:13:43.614 "is_configured": true, 00:13:43.614 "data_offset": 2048, 00:13:43.614 "data_size": 63488 00:13:43.614 } 00:13:43.614 ] 00:13:43.614 }' 00:13:43.614 11:24:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:43.614 11:24:01 -- common/autotest_common.sh@10 -- # set +x 00:13:43.873 11:24:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.873 11:24:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:44.132 [2024-11-26 11:24:02.330636] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.132 11:24:02 -- bdev/bdev_raid.sh@430 -- # '[' e14e2502-c18f-4b11-a4c4-c515c1985551 '!=' e14e2502-c18f-4b11-a4c4-c515c1985551 ']' 00:13:44.132 11:24:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:13:44.132 11:24:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:44.132 11:24:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:44.132 11:24:02 -- bdev/bdev_raid.sh@511 -- # killprocess 79601 00:13:44.132 11:24:02 -- common/autotest_common.sh@936 -- # '[' -z 79601 ']' 00:13:44.132 11:24:02 -- common/autotest_common.sh@940 -- # kill -0 79601 00:13:44.132 11:24:02 -- common/autotest_common.sh@941 -- # uname 00:13:44.132 11:24:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:44.133 11:24:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79601 00:13:44.391 killing process with pid 79601 00:13:44.391 11:24:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:44.391 11:24:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:44.391 11:24:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79601' 00:13:44.391 11:24:02 -- common/autotest_common.sh@955 -- # kill 79601 00:13:44.391 [2024-11-26 11:24:02.381882] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.391 11:24:02 -- common/autotest_common.sh@960 -- # wait 79601 00:13:44.391 [2024-11-26 11:24:02.382001] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.391 [2024-11-26 11:24:02.382060] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.391 [2024-11-26 11:24:02.382079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:13:44.391 [2024-11-26 11:24:02.396857] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.391 ************************************ 00:13:44.391 END TEST raid_superblock_test 00:13:44.391 
************************************ 00:13:44.391 11:24:02 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:44.391 00:13:44.391 real 0m6.652s 00:13:44.391 user 0m11.483s 00:13:44.391 sys 0m0.965s 00:13:44.391 11:24:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:44.391 11:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:44.650 11:24:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:44.650 11:24:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.650 11:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.650 ************************************ 00:13:44.650 START TEST raid_state_function_test 00:13:44.650 ************************************ 00:13:44.650 11:24:02 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:44.650 Process raid pid: 79819 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=79819 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 79819' 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 79819 /var/tmp/spdk-raid.sock 00:13:44.650 11:24:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:44.650 11:24:02 -- common/autotest_common.sh@829 -- # '[' -z 79819 ']' 00:13:44.650 11:24:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.650 11:24:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.650 11:24:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:13:44.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.650 11:24:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.650 11:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.650 [2024-11-26 11:24:02.717241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:44.650 [2024-11-26 11:24:02.717403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.909 [2024-11-26 11:24:02.884970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.909 [2024-11-26 11:24:02.928357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.909 [2024-11-26 11:24:02.965977] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.476 11:24:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.476 11:24:03 -- common/autotest_common.sh@862 -- # return 0 00:13:45.476 11:24:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:45.736 [2024-11-26 11:24:03.872745] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.736 [2024-11-26 11:24:03.873048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.736 [2024-11-26 11:24:03.873083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:45.736 [2024-11-26 11:24:03.873100] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.736 11:24:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.995 11:24:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.995 "name": "Existed_Raid", 00:13:45.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.995 "strip_size_kb": 64, 00:13:45.995 "state": "configuring", 00:13:45.995 "raid_level": "concat", 00:13:45.995 "superblock": false, 00:13:45.995 "num_base_bdevs": 2, 00:13:45.995 "num_base_bdevs_discovered": 0, 00:13:45.995 "num_base_bdevs_operational": 2, 00:13:45.995 "base_bdevs_list": [ 00:13:45.995 { 00:13:45.995 "name": "BaseBdev1", 00:13:45.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.995 "is_configured": false, 00:13:45.995 "data_offset": 
0, 00:13:45.995 "data_size": 0 00:13:45.995 }, 00:13:45.995 { 00:13:45.995 "name": "BaseBdev2", 00:13:45.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.995 "is_configured": false, 00:13:45.995 "data_offset": 0, 00:13:45.995 "data_size": 0 00:13:45.995 } 00:13:45.995 ] 00:13:45.995 }' 00:13:45.995 11:24:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.995 11:24:04 -- common/autotest_common.sh@10 -- # set +x 00:13:46.254 11:24:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:46.513 [2024-11-26 11:24:04.612891] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.513 [2024-11-26 11:24:04.612954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:13:46.513 11:24:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:46.771 [2024-11-26 11:24:04.865019] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.771 [2024-11-26 11:24:04.865068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.771 [2024-11-26 11:24:04.865112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.771 [2024-11-26 11:24:04.865126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.771 11:24:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:47.030 [2024-11-26 11:24:05.083776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.030 BaseBdev1 00:13:47.030 11:24:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:47.030 11:24:05 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:47.030 11:24:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.030 11:24:05 -- common/autotest_common.sh@899 -- # local i 00:13:47.030 11:24:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.030 11:24:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.030 11:24:05 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.289 11:24:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:47.289 [ 00:13:47.289 { 00:13:47.289 "name": "BaseBdev1", 00:13:47.289 "aliases": [ 00:13:47.289 "896b773e-1608-4350-b105-b839ad69d94f" 00:13:47.289 ], 00:13:47.289 "product_name": "Malloc disk", 00:13:47.289 "block_size": 512, 00:13:47.289 "num_blocks": 65536, 00:13:47.289 "uuid": "896b773e-1608-4350-b105-b839ad69d94f", 00:13:47.289 "assigned_rate_limits": { 00:13:47.289 "rw_ios_per_sec": 0, 00:13:47.289 "rw_mbytes_per_sec": 0, 00:13:47.289 "r_mbytes_per_sec": 0, 00:13:47.289 "w_mbytes_per_sec": 0 00:13:47.289 }, 00:13:47.289 "claimed": true, 00:13:47.289 "claim_type": "exclusive_write", 00:13:47.289 "zoned": false, 00:13:47.289 "supported_io_types": { 00:13:47.289 "read": true, 00:13:47.289 "write": true, 00:13:47.289 "unmap": true, 00:13:47.289 "write_zeroes": true, 00:13:47.289 "flush": true, 00:13:47.289 "reset": true, 00:13:47.289 "compare": false, 00:13:47.289 
"compare_and_write": false, 00:13:47.289 "abort": true, 00:13:47.289 "nvme_admin": false, 00:13:47.289 "nvme_io": false 00:13:47.289 }, 00:13:47.289 "memory_domains": [ 00:13:47.289 { 00:13:47.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.289 "dma_device_type": 2 00:13:47.289 } 00:13:47.289 ], 00:13:47.289 "driver_specific": {} 00:13:47.289 } 00:13:47.289 ] 00:13:47.289 11:24:05 -- common/autotest_common.sh@905 -- # return 0 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.289 11:24:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.548 11:24:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:47.548 "name": "Existed_Raid", 00:13:47.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.548 "strip_size_kb": 64, 00:13:47.548 "state": "configuring", 00:13:47.548 "raid_level": "concat", 00:13:47.548 "superblock": false, 00:13:47.548 "num_base_bdevs": 2, 00:13:47.548 "num_base_bdevs_discovered": 1, 00:13:47.548 "num_base_bdevs_operational": 2, 00:13:47.548 "base_bdevs_list": [ 00:13:47.548 { 00:13:47.548 "name": "BaseBdev1", 00:13:47.548 "uuid": "896b773e-1608-4350-b105-b839ad69d94f", 00:13:47.548 "is_configured": true, 00:13:47.548 "data_offset": 0, 00:13:47.548 "data_size": 65536 00:13:47.548 }, 00:13:47.548 { 00:13:47.548 "name": "BaseBdev2", 00:13:47.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.548 "is_configured": false, 00:13:47.548 "data_offset": 0, 00:13:47.548 "data_size": 0 00:13:47.548 } 00:13:47.548 ] 00:13:47.548 }' 00:13:47.548 11:24:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:47.548 11:24:05 -- common/autotest_common.sh@10 -- # set +x 00:13:48.116 11:24:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:48.116 [2024-11-26 11:24:06.260223] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.116 [2024-11-26 11:24:06.260509] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:13:48.116 11:24:06 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:48.116 11:24:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:48.375 [2024-11-26 11:24:06.468370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.375 [2024-11-26 11:24:06.470625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.375 [2024-11-26 
11:24:06.470673] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.375 11:24:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.635 11:24:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.635 "name": "Existed_Raid", 00:13:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.635 "strip_size_kb": 64, 00:13:48.635 "state": "configuring", 00:13:48.635 "raid_level": "concat", 00:13:48.635 "superblock": false, 00:13:48.635 "num_base_bdevs": 2, 00:13:48.635 "num_base_bdevs_discovered": 1, 00:13:48.635 "num_base_bdevs_operational": 2, 00:13:48.635 "base_bdevs_list": [ 00:13:48.635 { 00:13:48.635 "name": "BaseBdev1", 00:13:48.635 "uuid": "896b773e-1608-4350-b105-b839ad69d94f", 00:13:48.635 "is_configured": true, 00:13:48.635 "data_offset": 0, 00:13:48.635 "data_size": 65536 00:13:48.635 }, 00:13:48.635 { 00:13:48.635 "name": "BaseBdev2", 00:13:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.635 "is_configured": false, 00:13:48.635 "data_offset": 0, 00:13:48.635 "data_size": 0 00:13:48.635 } 00:13:48.635 ] 00:13:48.635 }' 00:13:48.635 11:24:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.635 11:24:06 -- common/autotest_common.sh@10 -- # set +x 00:13:48.894 11:24:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.152 [2024-11-26 11:24:07.271769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.152 [2024-11-26 11:24:07.271821] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:49.152 [2024-11-26 11:24:07.271838] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:49.152 [2024-11-26 11:24:07.271992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:49.152 [2024-11-26 11:24:07.272438] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:49.152 [2024-11-26 11:24:07.272456] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:13:49.152 [2024-11-26 11:24:07.272744] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.152 BaseBdev2 00:13:49.152 11:24:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 
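waitforbdev, whose expansion follows, is a thin wrapper around two RPCs: it first lets examine callbacks settle, then asks for the bdev with a timeout, so the call blocks until BaseBdev2 exists or 2000 ms pass. A sketch consistent with the trace, with the socket path assumed:

    # Shape of the helper as traced here: bdev_get_bdevs -t waits
    # server-side for the bdev to appear instead of polling client-side.
    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
    }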
00:13:49.152 11:24:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:49.152 11:24:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:49.152 11:24:07 -- common/autotest_common.sh@899 -- # local i 00:13:49.152 11:24:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:49.152 11:24:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:49.152 11:24:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:49.410 11:24:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.669 [ 00:13:49.669 { 00:13:49.669 "name": "BaseBdev2", 00:13:49.669 "aliases": [ 00:13:49.669 "96967e6f-517a-4568-bb1b-ec62cc683aff" 00:13:49.669 ], 00:13:49.669 "product_name": "Malloc disk", 00:13:49.669 "block_size": 512, 00:13:49.669 "num_blocks": 65536, 00:13:49.669 "uuid": "96967e6f-517a-4568-bb1b-ec62cc683aff", 00:13:49.669 "assigned_rate_limits": { 00:13:49.669 "rw_ios_per_sec": 0, 00:13:49.669 "rw_mbytes_per_sec": 0, 00:13:49.669 "r_mbytes_per_sec": 0, 00:13:49.669 "w_mbytes_per_sec": 0 00:13:49.669 }, 00:13:49.669 "claimed": true, 00:13:49.669 "claim_type": "exclusive_write", 00:13:49.669 "zoned": false, 00:13:49.669 "supported_io_types": { 00:13:49.669 "read": true, 00:13:49.669 "write": true, 00:13:49.669 "unmap": true, 00:13:49.669 "write_zeroes": true, 00:13:49.669 "flush": true, 00:13:49.669 "reset": true, 00:13:49.669 "compare": false, 00:13:49.669 "compare_and_write": false, 00:13:49.669 "abort": true, 00:13:49.669 "nvme_admin": false, 00:13:49.669 "nvme_io": false 00:13:49.669 }, 00:13:49.669 "memory_domains": [ 00:13:49.669 { 00:13:49.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.669 "dma_device_type": 2 00:13:49.669 } 00:13:49.669 ], 00:13:49.669 "driver_specific": {} 00:13:49.669 } 00:13:49.669 ] 00:13:49.669 11:24:07 -- common/autotest_common.sh@905 -- # return 0 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.669 11:24:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.927 11:24:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:49.927 "name": "Existed_Raid", 00:13:49.927 "uuid": "91687d7a-6883-416b-8862-0b54a3dfddcb", 00:13:49.927 "strip_size_kb": 64, 00:13:49.927 "state": "online", 00:13:49.927 "raid_level": "concat", 00:13:49.927 "superblock": false, 00:13:49.927 "num_base_bdevs": 2, 
00:13:49.927 "num_base_bdevs_discovered": 2, 00:13:49.927 "num_base_bdevs_operational": 2, 00:13:49.927 "base_bdevs_list": [ 00:13:49.927 { 00:13:49.927 "name": "BaseBdev1", 00:13:49.927 "uuid": "896b773e-1608-4350-b105-b839ad69d94f", 00:13:49.927 "is_configured": true, 00:13:49.927 "data_offset": 0, 00:13:49.927 "data_size": 65536 00:13:49.927 }, 00:13:49.927 { 00:13:49.927 "name": "BaseBdev2", 00:13:49.927 "uuid": "96967e6f-517a-4568-bb1b-ec62cc683aff", 00:13:49.927 "is_configured": true, 00:13:49.927 "data_offset": 0, 00:13:49.927 "data_size": 65536 00:13:49.927 } 00:13:49.927 ] 00:13:49.927 }' 00:13:49.927 11:24:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:49.927 11:24:07 -- common/autotest_common.sh@10 -- # set +x 00:13:50.185 11:24:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:50.443 [2024-11-26 11:24:08.484377] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.443 [2024-11-26 11:24:08.484609] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.443 [2024-11-26 11:24:08.484808] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:50.443 11:24:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.444 11:24:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.702 11:24:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:50.702 "name": "Existed_Raid", 00:13:50.702 "uuid": "91687d7a-6883-416b-8862-0b54a3dfddcb", 00:13:50.702 "strip_size_kb": 64, 00:13:50.702 "state": "offline", 00:13:50.702 "raid_level": "concat", 00:13:50.702 "superblock": false, 00:13:50.702 "num_base_bdevs": 2, 00:13:50.702 "num_base_bdevs_discovered": 1, 00:13:50.702 "num_base_bdevs_operational": 1, 00:13:50.702 "base_bdevs_list": [ 00:13:50.702 { 00:13:50.702 "name": null, 00:13:50.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.702 "is_configured": false, 00:13:50.702 "data_offset": 0, 00:13:50.702 "data_size": 65536 00:13:50.702 }, 00:13:50.702 { 00:13:50.702 "name": "BaseBdev2", 00:13:50.702 "uuid": "96967e6f-517a-4568-bb1b-ec62cc683aff", 00:13:50.702 "is_configured": true, 00:13:50.702 "data_offset": 0, 00:13:50.702 "data_size": 65536 00:13:50.702 } 
00:13:50.702 ] 00:13:50.702 }' 00:13:50.702 11:24:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:50.702 11:24:08 -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 11:24:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:50.961 11:24:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:50.961 11:24:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.961 11:24:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:51.218 11:24:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:51.218 11:24:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.218 11:24:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:51.476 [2024-11-26 11:24:09.556172] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.476 [2024-11-26 11:24:09.556436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:13:51.476 11:24:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:51.476 11:24:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:51.476 11:24:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.476 11:24:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:51.735 11:24:09 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:51.735 11:24:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:51.735 11:24:09 -- bdev/bdev_raid.sh@287 -- # killprocess 79819 00:13:51.735 11:24:09 -- common/autotest_common.sh@936 -- # '[' -z 79819 ']' 00:13:51.735 11:24:09 -- common/autotest_common.sh@940 -- # kill -0 79819 00:13:51.735 11:24:09 -- common/autotest_common.sh@941 -- # uname 00:13:51.735 11:24:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:51.735 11:24:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79819 00:13:51.735 killing process with pid 79819 00:13:51.735 11:24:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:51.735 11:24:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:51.735 11:24:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79819' 00:13:51.735 11:24:09 -- common/autotest_common.sh@955 -- # kill 79819 00:13:51.735 [2024-11-26 11:24:09.827699] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.735 [2024-11-26 11:24:09.827806] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.735 11:24:09 -- common/autotest_common.sh@960 -- # wait 79819 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:51.994 00:13:51.994 real 0m7.364s 00:13:51.994 user 0m12.701s 00:13:51.994 sys 0m1.185s 00:13:51.994 11:24:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:51.994 ************************************ 00:13:51.994 END TEST raid_state_function_test 00:13:51.994 ************************************ 00:13:51.994 11:24:10 -- common/autotest_common.sh@10 -- # set +x 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:51.994 11:24:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:51.994 11:24:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.994 11:24:10 -- common/autotest_common.sh@10 -- # set +x 00:13:51.994 
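Every state assertion in the test above funnels through verify_raid_bdev_state, which dumps the raid bdev over RPC and compares fields with jq. A condensed sketch of the final offline check, assuming the same socket path; the real helper also validates strip_size_kb, num_base_bdevs_operational and the base bdev list:

    # Dump the named raid bdev and assert the fields the test cares about.
    tmp=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r '.state' <<< "$tmp")" = offline ] &&
        [ "$(jq -r '.raid_level' <<< "$tmp")" = concat ] &&
        [ "$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")" -eq 1 ]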
************************************ 00:13:51.994 START TEST raid_state_function_test_sb 00:13:51.994 ************************************ 00:13:51.994 11:24:10 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=80093 00:13:51.994 Process raid pid: 80093 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 80093' 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 80093 /var/tmp/spdk-raid.sock 00:13:51.994 11:24:10 -- common/autotest_common.sh@829 -- # '[' -z 80093 ']' 00:13:51.994 11:24:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.994 11:24:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.994 11:24:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:51.994 11:24:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.994 11:24:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.994 11:24:10 -- common/autotest_common.sh@10 -- # set +x 00:13:51.994 [2024-11-26 11:24:10.125553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
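Relative to the plain state test, the only knob this run flips is superblock=true, which turns superblock_create_arg into -s on every bdev_raid_create call. With a superblock, 2048 of each base bdev's 65536 blocks are reserved for metadata, which is why the dumps below report data_offset 2048 and data_size 63488 rather than 0 and 65536. A sketch of the create, same socket assumed:

    # -s writes a raid superblock to each base bdev, shrinking usable data
    # from 65536 to 63488 blocks (data_offset 2048 in the JSON dumps).
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid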
00:13:51.994 [2024-11-26 11:24:10.125698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.253 [2024-11-26 11:24:10.286401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.253 [2024-11-26 11:24:10.321668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.253 [2024-11-26 11:24:10.355114] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.822 11:24:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.822 11:24:11 -- common/autotest_common.sh@862 -- # return 0 00:13:52.822 11:24:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:53.081 [2024-11-26 11:24:11.279495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.081 [2024-11-26 11:24:11.279570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.081 [2024-11-26 11:24:11.279610] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.081 [2024-11-26 11:24:11.279624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:53.081 11:24:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:53.082 11:24:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:53.082 11:24:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:53.082 11:24:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:53.082 11:24:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.082 11:24:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.340 11:24:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:53.340 "name": "Existed_Raid", 00:13:53.340 "uuid": "5dd0c97d-0c6c-48c6-a94f-d2e489b347ce", 00:13:53.340 "strip_size_kb": 64, 00:13:53.340 "state": "configuring", 00:13:53.340 "raid_level": "concat", 00:13:53.340 "superblock": true, 00:13:53.340 "num_base_bdevs": 2, 00:13:53.340 "num_base_bdevs_discovered": 0, 00:13:53.340 "num_base_bdevs_operational": 2, 00:13:53.340 "base_bdevs_list": [ 00:13:53.340 { 00:13:53.340 "name": "BaseBdev1", 00:13:53.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.340 "is_configured": false, 00:13:53.340 "data_offset": 0, 00:13:53.340 "data_size": 0 00:13:53.340 }, 00:13:53.340 { 00:13:53.340 "name": "BaseBdev2", 00:13:53.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.340 "is_configured": false, 00:13:53.340 "data_offset": 0, 00:13:53.340 "data_size": 0 00:13:53.340 } 00:13:53.340 ] 00:13:53.340 }' 00:13:53.340 11:24:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:53.340 11:24:11 -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.907 11:24:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:53.907 [2024-11-26 11:24:12.091554] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.907 [2024-11-26 11:24:12.091600] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:13:53.907 11:24:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:54.165 [2024-11-26 11:24:12.303684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.165 [2024-11-26 11:24:12.303735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.165 [2024-11-26 11:24:12.303763] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.165 [2024-11-26 11:24:12.303776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.165 11:24:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.424 [2024-11-26 11:24:12.558332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.424 BaseBdev1 00:13:54.424 11:24:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:54.424 11:24:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:54.424 11:24:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:54.424 11:24:12 -- common/autotest_common.sh@899 -- # local i 00:13:54.424 11:24:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:54.424 11:24:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:54.424 11:24:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.683 11:24:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.942 [ 00:13:54.942 { 00:13:54.942 "name": "BaseBdev1", 00:13:54.942 "aliases": [ 00:13:54.942 "3b85a828-d09d-42bf-9f07-6c17234b4eeb" 00:13:54.942 ], 00:13:54.942 "product_name": "Malloc disk", 00:13:54.942 "block_size": 512, 00:13:54.942 "num_blocks": 65536, 00:13:54.942 "uuid": "3b85a828-d09d-42bf-9f07-6c17234b4eeb", 00:13:54.942 "assigned_rate_limits": { 00:13:54.942 "rw_ios_per_sec": 0, 00:13:54.942 "rw_mbytes_per_sec": 0, 00:13:54.942 "r_mbytes_per_sec": 0, 00:13:54.942 "w_mbytes_per_sec": 0 00:13:54.942 }, 00:13:54.942 "claimed": true, 00:13:54.942 "claim_type": "exclusive_write", 00:13:54.942 "zoned": false, 00:13:54.942 "supported_io_types": { 00:13:54.942 "read": true, 00:13:54.942 "write": true, 00:13:54.942 "unmap": true, 00:13:54.942 "write_zeroes": true, 00:13:54.942 "flush": true, 00:13:54.942 "reset": true, 00:13:54.942 "compare": false, 00:13:54.942 "compare_and_write": false, 00:13:54.942 "abort": true, 00:13:54.942 "nvme_admin": false, 00:13:54.942 "nvme_io": false 00:13:54.942 }, 00:13:54.942 "memory_domains": [ 00:13:54.942 { 00:13:54.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.942 "dma_device_type": 2 00:13:54.942 } 00:13:54.942 ], 00:13:54.942 "driver_specific": {} 00:13:54.942 } 00:13:54.942 ] 00:13:54.942 
11:24:12 -- common/autotest_common.sh@905 -- # return 0 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.942 11:24:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.201 11:24:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.201 "name": "Existed_Raid", 00:13:55.201 "uuid": "b1cebc4e-b571-44e4-bf8d-1059fe335fac", 00:13:55.201 "strip_size_kb": 64, 00:13:55.201 "state": "configuring", 00:13:55.201 "raid_level": "concat", 00:13:55.201 "superblock": true, 00:13:55.201 "num_base_bdevs": 2, 00:13:55.201 "num_base_bdevs_discovered": 1, 00:13:55.201 "num_base_bdevs_operational": 2, 00:13:55.201 "base_bdevs_list": [ 00:13:55.201 { 00:13:55.201 "name": "BaseBdev1", 00:13:55.201 "uuid": "3b85a828-d09d-42bf-9f07-6c17234b4eeb", 00:13:55.201 "is_configured": true, 00:13:55.201 "data_offset": 2048, 00:13:55.201 "data_size": 63488 00:13:55.201 }, 00:13:55.201 { 00:13:55.201 "name": "BaseBdev2", 00:13:55.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.201 "is_configured": false, 00:13:55.201 "data_offset": 0, 00:13:55.201 "data_size": 0 00:13:55.201 } 00:13:55.201 ] 00:13:55.201 }' 00:13:55.201 11:24:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:55.201 11:24:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.462 11:24:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.744 [2024-11-26 11:24:13.794787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.744 [2024-11-26 11:24:13.794883] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:13:55.744 11:24:13 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:55.744 11:24:13 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:56.026 11:24:14 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.026 BaseBdev1 00:13:56.026 11:24:14 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:56.026 11:24:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:56.026 11:24:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:56.026 11:24:14 -- common/autotest_common.sh@899 -- # local i 00:13:56.026 11:24:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:56.026 11:24:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:56.026 11:24:14 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.298 11:24:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.558 [ 00:13:56.558 { 00:13:56.558 "name": "BaseBdev1", 00:13:56.558 "aliases": [ 00:13:56.558 "a29e0d81-aa8e-48f3-b8d2-0a0dbe77b21b" 00:13:56.558 ], 00:13:56.558 "product_name": "Malloc disk", 00:13:56.558 "block_size": 512, 00:13:56.558 "num_blocks": 65536, 00:13:56.558 "uuid": "a29e0d81-aa8e-48f3-b8d2-0a0dbe77b21b", 00:13:56.558 "assigned_rate_limits": { 00:13:56.558 "rw_ios_per_sec": 0, 00:13:56.558 "rw_mbytes_per_sec": 0, 00:13:56.558 "r_mbytes_per_sec": 0, 00:13:56.558 "w_mbytes_per_sec": 0 00:13:56.558 }, 00:13:56.558 "claimed": false, 00:13:56.558 "zoned": false, 00:13:56.558 "supported_io_types": { 00:13:56.558 "read": true, 00:13:56.558 "write": true, 00:13:56.558 "unmap": true, 00:13:56.558 "write_zeroes": true, 00:13:56.558 "flush": true, 00:13:56.558 "reset": true, 00:13:56.558 "compare": false, 00:13:56.558 "compare_and_write": false, 00:13:56.558 "abort": true, 00:13:56.558 "nvme_admin": false, 00:13:56.558 "nvme_io": false 00:13:56.558 }, 00:13:56.558 "memory_domains": [ 00:13:56.558 { 00:13:56.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.558 "dma_device_type": 2 00:13:56.558 } 00:13:56.558 ], 00:13:56.558 "driver_specific": {} 00:13:56.558 } 00:13:56.558 ] 00:13:56.558 11:24:14 -- common/autotest_common.sh@905 -- # return 0 00:13:56.558 11:24:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:56.817 [2024-11-26 11:24:14.856196] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.817 [2024-11-26 11:24:14.858506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.817 [2024-11-26 11:24:14.858551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.817 11:24:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.076 11:24:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:57.076 "name": "Existed_Raid", 00:13:57.076 "uuid": "03695a2c-62dd-4a78-b185-341047568b37", 00:13:57.076 "strip_size_kb": 64, 00:13:57.076 "state": 
"configuring", 00:13:57.076 "raid_level": "concat", 00:13:57.076 "superblock": true, 00:13:57.076 "num_base_bdevs": 2, 00:13:57.076 "num_base_bdevs_discovered": 1, 00:13:57.076 "num_base_bdevs_operational": 2, 00:13:57.076 "base_bdevs_list": [ 00:13:57.076 { 00:13:57.076 "name": "BaseBdev1", 00:13:57.076 "uuid": "a29e0d81-aa8e-48f3-b8d2-0a0dbe77b21b", 00:13:57.076 "is_configured": true, 00:13:57.076 "data_offset": 2048, 00:13:57.076 "data_size": 63488 00:13:57.076 }, 00:13:57.076 { 00:13:57.076 "name": "BaseBdev2", 00:13:57.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.076 "is_configured": false, 00:13:57.076 "data_offset": 0, 00:13:57.076 "data_size": 0 00:13:57.076 } 00:13:57.076 ] 00:13:57.076 }' 00:13:57.076 11:24:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:57.076 11:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:57.334 11:24:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.592 [2024-11-26 11:24:15.604887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.592 [2024-11-26 11:24:15.605138] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:13:57.592 [2024-11-26 11:24:15.605167] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.592 [2024-11-26 11:24:15.605319] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:13:57.592 [2024-11-26 11:24:15.605681] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:13:57.593 [2024-11-26 11:24:15.605711] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:13:57.593 BaseBdev2 00:13:57.593 [2024-11-26 11:24:15.605874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.593 11:24:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:57.593 11:24:15 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:57.593 11:24:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:57.593 11:24:15 -- common/autotest_common.sh@899 -- # local i 00:13:57.593 11:24:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:57.593 11:24:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:57.593 11:24:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:57.852 11:24:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.852 [ 00:13:57.852 { 00:13:57.852 "name": "BaseBdev2", 00:13:57.852 "aliases": [ 00:13:57.852 "2bbea7d0-14ee-4138-a1b5-d3bcc33946f0" 00:13:57.852 ], 00:13:57.852 "product_name": "Malloc disk", 00:13:57.852 "block_size": 512, 00:13:57.852 "num_blocks": 65536, 00:13:57.852 "uuid": "2bbea7d0-14ee-4138-a1b5-d3bcc33946f0", 00:13:57.852 "assigned_rate_limits": { 00:13:57.852 "rw_ios_per_sec": 0, 00:13:57.852 "rw_mbytes_per_sec": 0, 00:13:57.852 "r_mbytes_per_sec": 0, 00:13:57.852 "w_mbytes_per_sec": 0 00:13:57.852 }, 00:13:57.852 "claimed": true, 00:13:57.852 "claim_type": "exclusive_write", 00:13:57.852 "zoned": false, 00:13:57.852 "supported_io_types": { 00:13:57.852 "read": true, 00:13:57.852 "write": true, 00:13:57.852 "unmap": true, 00:13:57.852 "write_zeroes": true, 00:13:57.852 "flush": true, 00:13:57.852 
"reset": true, 00:13:57.852 "compare": false, 00:13:57.852 "compare_and_write": false, 00:13:57.852 "abort": true, 00:13:57.852 "nvme_admin": false, 00:13:57.852 "nvme_io": false 00:13:57.852 }, 00:13:57.852 "memory_domains": [ 00:13:57.852 { 00:13:57.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.852 "dma_device_type": 2 00:13:57.852 } 00:13:57.852 ], 00:13:57.852 "driver_specific": {} 00:13:57.852 } 00:13:57.852 ] 00:13:57.852 11:24:16 -- common/autotest_common.sh@905 -- # return 0 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.852 11:24:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.111 11:24:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.111 "name": "Existed_Raid", 00:13:58.111 "uuid": "03695a2c-62dd-4a78-b185-341047568b37", 00:13:58.111 "strip_size_kb": 64, 00:13:58.111 "state": "online", 00:13:58.111 "raid_level": "concat", 00:13:58.111 "superblock": true, 00:13:58.111 "num_base_bdevs": 2, 00:13:58.111 "num_base_bdevs_discovered": 2, 00:13:58.111 "num_base_bdevs_operational": 2, 00:13:58.111 "base_bdevs_list": [ 00:13:58.111 { 00:13:58.111 "name": "BaseBdev1", 00:13:58.111 "uuid": "a29e0d81-aa8e-48f3-b8d2-0a0dbe77b21b", 00:13:58.111 "is_configured": true, 00:13:58.111 "data_offset": 2048, 00:13:58.111 "data_size": 63488 00:13:58.111 }, 00:13:58.111 { 00:13:58.111 "name": "BaseBdev2", 00:13:58.111 "uuid": "2bbea7d0-14ee-4138-a1b5-d3bcc33946f0", 00:13:58.111 "is_configured": true, 00:13:58.111 "data_offset": 2048, 00:13:58.111 "data_size": 63488 00:13:58.111 } 00:13:58.111 ] 00:13:58.111 }' 00:13:58.111 11:24:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.111 11:24:16 -- common/autotest_common.sh@10 -- # set +x 00:13:58.370 11:24:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:58.628 [2024-11-26 11:24:16.797474] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.628 [2024-11-26 11:24:16.797535] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.628 [2024-11-26 11:24:16.797616] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:58.628 
11:24:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.628 11:24:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.887 11:24:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.887 "name": "Existed_Raid", 00:13:58.888 "uuid": "03695a2c-62dd-4a78-b185-341047568b37", 00:13:58.888 "strip_size_kb": 64, 00:13:58.888 "state": "offline", 00:13:58.888 "raid_level": "concat", 00:13:58.888 "superblock": true, 00:13:58.888 "num_base_bdevs": 2, 00:13:58.888 "num_base_bdevs_discovered": 1, 00:13:58.888 "num_base_bdevs_operational": 1, 00:13:58.888 "base_bdevs_list": [ 00:13:58.888 { 00:13:58.888 "name": null, 00:13:58.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.888 "is_configured": false, 00:13:58.888 "data_offset": 2048, 00:13:58.888 "data_size": 63488 00:13:58.888 }, 00:13:58.888 { 00:13:58.888 "name": "BaseBdev2", 00:13:58.888 "uuid": "2bbea7d0-14ee-4138-a1b5-d3bcc33946f0", 00:13:58.888 "is_configured": true, 00:13:58.888 "data_offset": 2048, 00:13:58.888 "data_size": 63488 00:13:58.888 } 00:13:58.888 ] 00:13:58.888 }' 00:13:58.888 11:24:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.888 11:24:17 -- common/autotest_common.sh@10 -- # set +x 00:13:59.146 11:24:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:59.146 11:24:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:59.146 11:24:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.146 11:24:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:59.405 11:24:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:59.405 11:24:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.405 11:24:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:59.663 [2024-11-26 11:24:17.861694] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.663 [2024-11-26 11:24:17.861774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:13:59.663 11:24:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:59.663 11:24:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:59.921 11:24:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.921 11:24:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:59.921 11:24:18 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:13:59.921 11:24:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:59.921 11:24:18 -- bdev/bdev_raid.sh@287 -- # killprocess 80093 00:13:59.921 11:24:18 -- common/autotest_common.sh@936 -- # '[' -z 80093 ']' 00:13:59.921 11:24:18 -- common/autotest_common.sh@940 -- # kill -0 80093 00:13:59.921 11:24:18 -- common/autotest_common.sh@941 -- # uname 00:13:59.921 11:24:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.921 11:24:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80093 00:13:59.921 killing process with pid 80093 00:13:59.921 11:24:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:59.921 11:24:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:59.921 11:24:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80093' 00:13:59.921 11:24:18 -- common/autotest_common.sh@955 -- # kill 80093 00:13:59.921 [2024-11-26 11:24:18.137761] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.921 11:24:18 -- common/autotest_common.sh@960 -- # wait 80093 00:13:59.921 [2024-11-26 11:24:18.137835] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.180 ************************************ 00:14:00.180 END TEST raid_state_function_test_sb 00:14:00.180 ************************************ 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:00.180 00:14:00.180 real 0m8.270s 00:14:00.180 user 0m14.395s 00:14:00.180 sys 0m1.262s 00:14:00.180 11:24:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:00.180 11:24:18 -- common/autotest_common.sh@10 -- # set +x 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:00.180 11:24:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:00.180 11:24:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.180 11:24:18 -- common/autotest_common.sh@10 -- # set +x 00:14:00.180 ************************************ 00:14:00.180 START TEST raid_superblock_test 00:14:00.180 ************************************ 00:14:00.180 11:24:18 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=80378 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 80378 
/var/tmp/spdk-raid.sock 00:14:00.180 11:24:18 -- common/autotest_common.sh@829 -- # '[' -z 80378 ']' 00:14:00.180 11:24:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:00.180 11:24:18 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:00.180 11:24:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:00.180 11:24:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:00.180 11:24:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.180 11:24:18 -- common/autotest_common.sh@10 -- # set +x 00:14:00.439 [2024-11-26 11:24:18.455887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.439 [2024-11-26 11:24:18.456079] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80378 ] 00:14:00.439 [2024-11-26 11:24:18.622196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.439 [2024-11-26 11:24:18.656438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.698 [2024-11-26 11:24:18.689278] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.265 11:24:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.265 11:24:19 -- common/autotest_common.sh@862 -- # return 0 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.265 11:24:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:01.523 malloc1 00:14:01.523 11:24:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:01.781 [2024-11-26 11:24:19.791743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:01.781 [2024-11-26 11:24:19.791823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.782 [2024-11-26 11:24:19.791863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:01.782 [2024-11-26 11:24:19.791932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.782 [2024-11-26 11:24:19.795029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.782 [2024-11-26 11:24:19.795077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:01.782 pt1 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.782 11:24:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:01.782 malloc2 00:14:02.040 11:24:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.040 [2024-11-26 11:24:20.227033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.040 [2024-11-26 11:24:20.227096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.040 [2024-11-26 11:24:20.227131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:02.040 [2024-11-26 11:24:20.227146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.041 [2024-11-26 11:24:20.229654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.041 [2024-11-26 11:24:20.229705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.041 pt2 00:14:02.041 11:24:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:02.041 11:24:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:02.041 11:24:20 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:02.300 [2024-11-26 11:24:20.471145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.300 [2024-11-26 11:24:20.473419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.300 [2024-11-26 11:24:20.473673] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:14:02.300 [2024-11-26 11:24:20.473695] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.300 [2024-11-26 11:24:20.473824] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:02.300 [2024-11-26 11:24:20.474228] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:14:02.300 [2024-11-26 11:24:20.474250] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:14:02.300 [2024-11-26 11:24:20.474390] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.300 11:24:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.559 11:24:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:02.559 "name": "raid_bdev1", 00:14:02.559 "uuid": "0c142b7e-ff3b-4ad4-9a10-69f3595f1712", 00:14:02.559 "strip_size_kb": 64, 00:14:02.559 "state": "online", 00:14:02.559 "raid_level": "concat", 00:14:02.559 "superblock": true, 00:14:02.559 "num_base_bdevs": 2, 00:14:02.559 "num_base_bdevs_discovered": 2, 00:14:02.559 "num_base_bdevs_operational": 2, 00:14:02.559 "base_bdevs_list": [ 00:14:02.559 { 00:14:02.559 "name": "pt1", 00:14:02.559 "uuid": "0be299d4-c0e9-5e52-bdad-6688de6afb4b", 00:14:02.559 "is_configured": true, 00:14:02.559 "data_offset": 2048, 00:14:02.559 "data_size": 63488 00:14:02.559 }, 00:14:02.559 { 00:14:02.559 "name": "pt2", 00:14:02.559 "uuid": "1c6f9829-6c87-5c60-874f-971e3e254ded", 00:14:02.559 "is_configured": true, 00:14:02.559 "data_offset": 2048, 00:14:02.559 "data_size": 63488 00:14:02.559 } 00:14:02.559 ] 00:14:02.559 }' 00:14:02.559 11:24:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:02.559 11:24:20 -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 11:24:21 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:02.818 11:24:21 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:03.077 [2024-11-26 11:24:21.191552] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.077 11:24:21 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0c142b7e-ff3b-4ad4-9a10-69f3595f1712 00:14:03.077 11:24:21 -- bdev/bdev_raid.sh@380 -- # '[' -z 0c142b7e-ff3b-4ad4-9a10-69f3595f1712 ']' 00:14:03.077 11:24:21 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:03.335 [2024-11-26 11:24:21.455449] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.335 [2024-11-26 11:24:21.455505] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.335 [2024-11-26 11:24:21.455619] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.335 [2024-11-26 11:24:21.455712] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.335 [2024-11-26 11:24:21.455742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:14:03.335 11:24:21 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.335 11:24:21 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:03.594 11:24:21 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:03.594 11:24:21 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:03.594 11:24:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.594 11:24:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:14:03.853 11:24:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.853 11:24:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:04.112 11:24:22 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:04.112 11:24:22 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:04.112 11:24:22 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:04.112 11:24:22 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:04.112 11:24:22 -- common/autotest_common.sh@650 -- # local es=0 00:14:04.112 11:24:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:04.112 11:24:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.112 11:24:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.112 11:24:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.112 11:24:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.112 11:24:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.112 11:24:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.112 11:24:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.112 11:24:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:04.112 11:24:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:04.370 [2024-11-26 11:24:22.571780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:04.370 [2024-11-26 11:24:22.574297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:04.370 [2024-11-26 11:24:22.574396] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:04.370 [2024-11-26 11:24:22.574449] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:04.370 [2024-11-26 11:24:22.574476] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.370 [2024-11-26 11:24:22.574490] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:14:04.371 request: 00:14:04.371 { 00:14:04.371 "name": "raid_bdev1", 00:14:04.371 "raid_level": "concat", 00:14:04.371 "base_bdevs": [ 00:14:04.371 "malloc1", 00:14:04.371 "malloc2" 00:14:04.371 ], 00:14:04.371 "superblock": false, 00:14:04.371 "strip_size_kb": 64, 00:14:04.371 "method": "bdev_raid_create", 00:14:04.371 "req_id": 1 00:14:04.371 } 00:14:04.371 Got JSON-RPC error response 00:14:04.371 response: 00:14:04.371 { 00:14:04.371 "code": -17, 00:14:04.371 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:04.371 } 00:14:04.371 11:24:22 -- common/autotest_common.sh@653 -- # es=1 00:14:04.371 11:24:22 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.371 11:24:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.371 11:24:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.371 11:24:22 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.371 11:24:22 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:04.629 11:24:22 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:04.629 11:24:22 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:04.629 11:24:22 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.887 [2024-11-26 11:24:22.991787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.887 [2024-11-26 11:24:22.991866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.887 [2024-11-26 11:24:22.991924] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:14:04.887 [2024-11-26 11:24:22.991939] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.887 [2024-11-26 11:24:22.994648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.887 [2024-11-26 11:24:22.994704] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.887 [2024-11-26 11:24:22.994791] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:04.887 [2024-11-26 11:24:22.994835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.887 pt1 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.887 11:24:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.146 11:24:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.146 "name": "raid_bdev1", 00:14:05.146 "uuid": "0c142b7e-ff3b-4ad4-9a10-69f3595f1712", 00:14:05.146 "strip_size_kb": 64, 00:14:05.146 "state": "configuring", 00:14:05.146 "raid_level": "concat", 00:14:05.146 "superblock": true, 00:14:05.146 "num_base_bdevs": 2, 00:14:05.146 "num_base_bdevs_discovered": 1, 00:14:05.146 "num_base_bdevs_operational": 2, 00:14:05.146 "base_bdevs_list": [ 00:14:05.146 { 00:14:05.146 "name": "pt1", 00:14:05.146 "uuid": "0be299d4-c0e9-5e52-bdad-6688de6afb4b", 00:14:05.146 "is_configured": true, 00:14:05.146 "data_offset": 2048, 00:14:05.146 "data_size": 63488 00:14:05.146 }, 00:14:05.146 { 00:14:05.146 "name": null, 00:14:05.146 "uuid": 
"1c6f9829-6c87-5c60-874f-971e3e254ded", 00:14:05.146 "is_configured": false, 00:14:05.146 "data_offset": 2048, 00:14:05.146 "data_size": 63488 00:14:05.146 } 00:14:05.146 ] 00:14:05.146 }' 00:14:05.146 11:24:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.146 11:24:23 -- common/autotest_common.sh@10 -- # set +x 00:14:05.404 11:24:23 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:05.404 11:24:23 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:05.404 11:24:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:05.404 11:24:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.662 [2024-11-26 11:24:23.736043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.662 [2024-11-26 11:24:23.736318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.662 [2024-11-26 11:24:23.736368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:14:05.662 [2024-11-26 11:24:23.736385] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.662 [2024-11-26 11:24:23.736938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.662 [2024-11-26 11:24:23.736978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.662 [2024-11-26 11:24:23.737074] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:05.662 [2024-11-26 11:24:23.737101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.662 [2024-11-26 11:24:23.737260] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:14:05.662 [2024-11-26 11:24:23.737288] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:05.662 [2024-11-26 11:24:23.737409] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:05.662 [2024-11-26 11:24:23.737773] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:14:05.662 [2024-11-26 11:24:23.737807] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:14:05.662 [2024-11-26 11:24:23.737917] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.662 pt2 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:05.662 11:24:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.663 11:24:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.663 11:24:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.663 11:24:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.663 11:24:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.663 11:24:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.921 11:24:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.921 "name": "raid_bdev1", 00:14:05.921 "uuid": "0c142b7e-ff3b-4ad4-9a10-69f3595f1712", 00:14:05.921 "strip_size_kb": 64, 00:14:05.921 "state": "online", 00:14:05.921 "raid_level": "concat", 00:14:05.921 "superblock": true, 00:14:05.921 "num_base_bdevs": 2, 00:14:05.921 "num_base_bdevs_discovered": 2, 00:14:05.921 "num_base_bdevs_operational": 2, 00:14:05.921 "base_bdevs_list": [ 00:14:05.921 { 00:14:05.921 "name": "pt1", 00:14:05.921 "uuid": "0be299d4-c0e9-5e52-bdad-6688de6afb4b", 00:14:05.921 "is_configured": true, 00:14:05.921 "data_offset": 2048, 00:14:05.921 "data_size": 63488 00:14:05.921 }, 00:14:05.921 { 00:14:05.921 "name": "pt2", 00:14:05.921 "uuid": "1c6f9829-6c87-5c60-874f-971e3e254ded", 00:14:05.921 "is_configured": true, 00:14:05.921 "data_offset": 2048, 00:14:05.921 "data_size": 63488 00:14:05.921 } 00:14:05.921 ] 00:14:05.921 }' 00:14:05.921 11:24:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.921 11:24:24 -- common/autotest_common.sh@10 -- # set +x 00:14:06.180 11:24:24 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:06.180 11:24:24 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:06.439 [2024-11-26 11:24:24.572570] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.439 11:24:24 -- bdev/bdev_raid.sh@430 -- # '[' 0c142b7e-ff3b-4ad4-9a10-69f3595f1712 '!=' 0c142b7e-ff3b-4ad4-9a10-69f3595f1712 ']' 00:14:06.439 11:24:24 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:06.439 11:24:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:06.439 11:24:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:06.439 11:24:24 -- bdev/bdev_raid.sh@511 -- # killprocess 80378 00:14:06.439 11:24:24 -- common/autotest_common.sh@936 -- # '[' -z 80378 ']' 00:14:06.439 11:24:24 -- common/autotest_common.sh@940 -- # kill -0 80378 00:14:06.439 11:24:24 -- common/autotest_common.sh@941 -- # uname 00:14:06.439 11:24:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:06.439 11:24:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80378 00:14:06.439 killing process with pid 80378 00:14:06.439 11:24:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:06.439 11:24:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:06.439 11:24:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80378' 00:14:06.439 11:24:24 -- common/autotest_common.sh@955 -- # kill 80378 00:14:06.439 [2024-11-26 11:24:24.624903] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.439 [2024-11-26 11:24:24.625000] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.439 11:24:24 -- common/autotest_common.sh@960 -- # wait 80378 00:14:06.439 [2024-11-26 11:24:24.625061] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.439 [2024-11-26 11:24:24.625080] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:14:06.439 [2024-11-26 11:24:24.640797] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.698 ************************************ 00:14:06.698 END TEST raid_superblock_test 00:14:06.698 
************************************ 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:06.698 00:14:06.698 real 0m6.433s 00:14:06.698 user 0m11.053s 00:14:06.698 sys 0m0.963s 00:14:06.698 11:24:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.698 11:24:24 -- common/autotest_common.sh@10 -- # set +x 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:06.698 11:24:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:06.698 11:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.698 11:24:24 -- common/autotest_common.sh@10 -- # set +x 00:14:06.698 ************************************ 00:14:06.698 START TEST raid_state_function_test 00:14:06.698 ************************************ 00:14:06.698 11:24:24 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:06.698 11:24:24 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:06.699 11:24:24 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:06.699 11:24:24 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:06.699 Process raid pid: 80590 00:14:06.699 11:24:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=80590 00:14:06.699 11:24:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:06.699 11:24:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 80590' 00:14:06.699 11:24:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 80590 /var/tmp/spdk-raid.sock 00:14:06.699 11:24:24 -- common/autotest_common.sh@829 -- # '[' -z 80590 ']' 00:14:06.699 11:24:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.699 11:24:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.699 11:24:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:06.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.699 11:24:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.699 11:24:24 -- common/autotest_common.sh@10 -- # set +x 00:14:06.957 [2024-11-26 11:24:24.938291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:06.957 [2024-11-26 11:24:24.938442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.957 [2024-11-26 11:24:25.089323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.957 [2024-11-26 11:24:25.125264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.957 [2024-11-26 11:24:25.157918] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.892 11:24:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.892 11:24:25 -- common/autotest_common.sh@862 -- # return 0 00:14:07.892 11:24:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:07.892 [2024-11-26 11:24:26.061859] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.892 [2024-11-26 11:24:26.061986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.892 [2024-11-26 11:24:26.062016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.892 [2024-11-26 11:24:26.062031] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.892 11:24:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.151 11:24:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.151 "name": "Existed_Raid", 00:14:08.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.151 "strip_size_kb": 0, 00:14:08.151 "state": "configuring", 00:14:08.151 "raid_level": "raid1", 00:14:08.151 "superblock": false, 00:14:08.151 "num_base_bdevs": 2, 00:14:08.151 "num_base_bdevs_discovered": 0, 00:14:08.151 "num_base_bdevs_operational": 2, 00:14:08.151 "base_bdevs_list": [ 00:14:08.151 { 00:14:08.151 "name": "BaseBdev1", 00:14:08.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.151 "is_configured": false, 00:14:08.151 "data_offset": 0, 00:14:08.151 "data_size": 0 
00:14:08.151 }, 00:14:08.151 { 00:14:08.151 "name": "BaseBdev2", 00:14:08.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.151 "is_configured": false, 00:14:08.151 "data_offset": 0, 00:14:08.151 "data_size": 0 00:14:08.151 } 00:14:08.151 ] 00:14:08.151 }' 00:14:08.151 11:24:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.151 11:24:26 -- common/autotest_common.sh@10 -- # set +x 00:14:08.409 11:24:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:08.668 [2024-11-26 11:24:26.822011] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.668 [2024-11-26 11:24:26.822052] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:08.668 11:24:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:08.926 [2024-11-26 11:24:27.046174] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.926 [2024-11-26 11:24:27.046239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.926 [2024-11-26 11:24:27.046280] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.926 [2024-11-26 11:24:27.046292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.926 11:24:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.185 [2024-11-26 11:24:27.296812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.185 BaseBdev1 00:14:09.185 11:24:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:09.185 11:24:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:09.185 11:24:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:09.185 11:24:27 -- common/autotest_common.sh@899 -- # local i 00:14:09.185 11:24:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.185 11:24:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.185 11:24:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.443 11:24:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.701 [ 00:14:09.701 { 00:14:09.701 "name": "BaseBdev1", 00:14:09.701 "aliases": [ 00:14:09.701 "d4d97772-0c33-4e28-ac7f-0748cf351f81" 00:14:09.701 ], 00:14:09.701 "product_name": "Malloc disk", 00:14:09.701 "block_size": 512, 00:14:09.701 "num_blocks": 65536, 00:14:09.701 "uuid": "d4d97772-0c33-4e28-ac7f-0748cf351f81", 00:14:09.701 "assigned_rate_limits": { 00:14:09.701 "rw_ios_per_sec": 0, 00:14:09.701 "rw_mbytes_per_sec": 0, 00:14:09.701 "r_mbytes_per_sec": 0, 00:14:09.701 "w_mbytes_per_sec": 0 00:14:09.701 }, 00:14:09.701 "claimed": true, 00:14:09.701 "claim_type": "exclusive_write", 00:14:09.701 "zoned": false, 00:14:09.701 "supported_io_types": { 00:14:09.701 "read": true, 00:14:09.701 "write": true, 00:14:09.701 "unmap": true, 00:14:09.701 "write_zeroes": true, 00:14:09.701 "flush": true, 00:14:09.701 "reset": true, 00:14:09.701 "compare": false, 00:14:09.701 "compare_and_write": false, 
00:14:09.701 "abort": true, 00:14:09.701 "nvme_admin": false, 00:14:09.701 "nvme_io": false 00:14:09.701 }, 00:14:09.701 "memory_domains": [ 00:14:09.701 { 00:14:09.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.701 "dma_device_type": 2 00:14:09.701 } 00:14:09.701 ], 00:14:09.701 "driver_specific": {} 00:14:09.701 } 00:14:09.701 ] 00:14:09.701 11:24:27 -- common/autotest_common.sh@905 -- # return 0 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.701 11:24:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.959 11:24:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.959 "name": "Existed_Raid", 00:14:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.959 "strip_size_kb": 0, 00:14:09.959 "state": "configuring", 00:14:09.959 "raid_level": "raid1", 00:14:09.959 "superblock": false, 00:14:09.959 "num_base_bdevs": 2, 00:14:09.959 "num_base_bdevs_discovered": 1, 00:14:09.959 "num_base_bdevs_operational": 2, 00:14:09.959 "base_bdevs_list": [ 00:14:09.959 { 00:14:09.959 "name": "BaseBdev1", 00:14:09.959 "uuid": "d4d97772-0c33-4e28-ac7f-0748cf351f81", 00:14:09.959 "is_configured": true, 00:14:09.959 "data_offset": 0, 00:14:09.959 "data_size": 65536 00:14:09.959 }, 00:14:09.959 { 00:14:09.959 "name": "BaseBdev2", 00:14:09.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.959 "is_configured": false, 00:14:09.959 "data_offset": 0, 00:14:09.959 "data_size": 0 00:14:09.959 } 00:14:09.959 ] 00:14:09.959 }' 00:14:09.959 11:24:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.959 11:24:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.217 11:24:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:10.475 [2024-11-26 11:24:28.545327] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.475 [2024-11-26 11:24:28.545389] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:10.475 11:24:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:10.475 11:24:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:10.734 [2024-11-26 11:24:28.757414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.734 [2024-11-26 11:24:28.759804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.734 [2024-11-26 11:24:28.759850] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.734 11:24:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.992 11:24:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.992 "name": "Existed_Raid", 00:14:10.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.992 "strip_size_kb": 0, 00:14:10.992 "state": "configuring", 00:14:10.992 "raid_level": "raid1", 00:14:10.992 "superblock": false, 00:14:10.992 "num_base_bdevs": 2, 00:14:10.992 "num_base_bdevs_discovered": 1, 00:14:10.992 "num_base_bdevs_operational": 2, 00:14:10.992 "base_bdevs_list": [ 00:14:10.992 { 00:14:10.992 "name": "BaseBdev1", 00:14:10.992 "uuid": "d4d97772-0c33-4e28-ac7f-0748cf351f81", 00:14:10.992 "is_configured": true, 00:14:10.992 "data_offset": 0, 00:14:10.992 "data_size": 65536 00:14:10.992 }, 00:14:10.992 { 00:14:10.992 "name": "BaseBdev2", 00:14:10.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.992 "is_configured": false, 00:14:10.992 "data_offset": 0, 00:14:10.992 "data_size": 0 00:14:10.992 } 00:14:10.992 ] 00:14:10.992 }' 00:14:10.992 11:24:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.992 11:24:28 -- common/autotest_common.sh@10 -- # set +x 00:14:11.249 11:24:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.507 [2024-11-26 11:24:29.558766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.507 [2024-11-26 11:24:29.558820] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:11.507 [2024-11-26 11:24:29.558837] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:11.507 [2024-11-26 11:24:29.558986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:11.507 [2024-11-26 11:24:29.559384] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:11.507 [2024-11-26 11:24:29.559406] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:14:11.507 [2024-11-26 11:24:29.559664] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.507 BaseBdev2 00:14:11.507 11:24:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:11.507 11:24:29 -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:11.507 11:24:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:11.507 11:24:29 -- common/autotest_common.sh@899 -- # local i 00:14:11.507 11:24:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:11.507 11:24:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:11.507 11:24:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.766 11:24:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.024 [ 00:14:12.024 { 00:14:12.024 "name": "BaseBdev2", 00:14:12.024 "aliases": [ 00:14:12.024 "0ff5defe-5bef-459f-9419-40aaf33c5b74" 00:14:12.024 ], 00:14:12.024 "product_name": "Malloc disk", 00:14:12.024 "block_size": 512, 00:14:12.024 "num_blocks": 65536, 00:14:12.025 "uuid": "0ff5defe-5bef-459f-9419-40aaf33c5b74", 00:14:12.025 "assigned_rate_limits": { 00:14:12.025 "rw_ios_per_sec": 0, 00:14:12.025 "rw_mbytes_per_sec": 0, 00:14:12.025 "r_mbytes_per_sec": 0, 00:14:12.025 "w_mbytes_per_sec": 0 00:14:12.025 }, 00:14:12.025 "claimed": true, 00:14:12.025 "claim_type": "exclusive_write", 00:14:12.025 "zoned": false, 00:14:12.025 "supported_io_types": { 00:14:12.025 "read": true, 00:14:12.025 "write": true, 00:14:12.025 "unmap": true, 00:14:12.025 "write_zeroes": true, 00:14:12.025 "flush": true, 00:14:12.025 "reset": true, 00:14:12.025 "compare": false, 00:14:12.025 "compare_and_write": false, 00:14:12.025 "abort": true, 00:14:12.025 "nvme_admin": false, 00:14:12.025 "nvme_io": false 00:14:12.025 }, 00:14:12.025 "memory_domains": [ 00:14:12.025 { 00:14:12.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.025 "dma_device_type": 2 00:14:12.025 } 00:14:12.025 ], 00:14:12.025 "driver_specific": {} 00:14:12.025 } 00:14:12.025 ] 00:14:12.025 11:24:30 -- common/autotest_common.sh@905 -- # return 0 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.025 11:24:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.283 11:24:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.283 "name": "Existed_Raid", 00:14:12.283 "uuid": "83be00ea-1877-4c3d-8441-69916ec27944", 00:14:12.283 "strip_size_kb": 0, 00:14:12.283 "state": "online", 00:14:12.283 "raid_level": "raid1", 00:14:12.283 "superblock": false, 00:14:12.283 "num_base_bdevs": 2, 00:14:12.283 
"num_base_bdevs_discovered": 2, 00:14:12.284 "num_base_bdevs_operational": 2, 00:14:12.284 "base_bdevs_list": [ 00:14:12.284 { 00:14:12.284 "name": "BaseBdev1", 00:14:12.284 "uuid": "d4d97772-0c33-4e28-ac7f-0748cf351f81", 00:14:12.284 "is_configured": true, 00:14:12.284 "data_offset": 0, 00:14:12.284 "data_size": 65536 00:14:12.284 }, 00:14:12.284 { 00:14:12.284 "name": "BaseBdev2", 00:14:12.284 "uuid": "0ff5defe-5bef-459f-9419-40aaf33c5b74", 00:14:12.284 "is_configured": true, 00:14:12.284 "data_offset": 0, 00:14:12.284 "data_size": 65536 00:14:12.284 } 00:14:12.284 ] 00:14:12.284 }' 00:14:12.284 11:24:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.284 11:24:30 -- common/autotest_common.sh@10 -- # set +x 00:14:12.542 11:24:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.801 [2024-11-26 11:24:30.815325] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.801 11:24:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.059 11:24:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.059 "name": "Existed_Raid", 00:14:13.059 "uuid": "83be00ea-1877-4c3d-8441-69916ec27944", 00:14:13.059 "strip_size_kb": 0, 00:14:13.059 "state": "online", 00:14:13.059 "raid_level": "raid1", 00:14:13.059 "superblock": false, 00:14:13.059 "num_base_bdevs": 2, 00:14:13.059 "num_base_bdevs_discovered": 1, 00:14:13.059 "num_base_bdevs_operational": 1, 00:14:13.059 "base_bdevs_list": [ 00:14:13.059 { 00:14:13.059 "name": null, 00:14:13.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.059 "is_configured": false, 00:14:13.059 "data_offset": 0, 00:14:13.059 "data_size": 65536 00:14:13.059 }, 00:14:13.059 { 00:14:13.059 "name": "BaseBdev2", 00:14:13.059 "uuid": "0ff5defe-5bef-459f-9419-40aaf33c5b74", 00:14:13.059 "is_configured": true, 00:14:13.059 "data_offset": 0, 00:14:13.059 "data_size": 65536 00:14:13.059 } 00:14:13.059 ] 00:14:13.059 }' 00:14:13.059 11:24:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.059 11:24:31 -- common/autotest_common.sh@10 -- # set +x 00:14:13.318 11:24:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:13.318 11:24:31 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:14:13.318 11:24:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.318 11:24:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:13.578 11:24:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:13.578 11:24:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.578 11:24:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:13.837 [2024-11-26 11:24:31.814659] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.837 [2024-11-26 11:24:31.814693] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.837 [2024-11-26 11:24:31.814780] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.837 [2024-11-26 11:24:31.821897] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.837 [2024-11-26 11:24:31.821945] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:14:13.837 11:24:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:13.837 11:24:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:13.837 11:24:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.837 11:24:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:13.837 11:24:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:13.837 11:24:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:13.837 11:24:32 -- bdev/bdev_raid.sh@287 -- # killprocess 80590 00:14:13.837 11:24:32 -- common/autotest_common.sh@936 -- # '[' -z 80590 ']' 00:14:13.837 11:24:32 -- common/autotest_common.sh@940 -- # kill -0 80590 00:14:13.837 11:24:32 -- common/autotest_common.sh@941 -- # uname 00:14:13.837 11:24:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.837 11:24:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80590 00:14:14.096 11:24:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.096 killing process with pid 80590 00:14:14.096 11:24:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.096 11:24:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80590' 00:14:14.096 11:24:32 -- common/autotest_common.sh@955 -- # kill 80590 00:14:14.096 11:24:32 -- common/autotest_common.sh@960 -- # wait 80590 00:14:14.096 [2024-11-26 11:24:32.078806] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.096 [2024-11-26 11:24:32.078901] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:14.096 00:14:14.096 real 0m7.384s 00:14:14.096 user 0m12.754s 00:14:14.096 sys 0m1.151s 00:14:14.096 11:24:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:14.096 ************************************ 00:14:14.096 END TEST raid_state_function_test 00:14:14.096 ************************************ 00:14:14.096 11:24:32 -- common/autotest_common.sh@10 -- # set +x 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:14.096 11:24:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:14.096 11:24:32 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.096 11:24:32 -- common/autotest_common.sh@10 -- # set +x 00:14:14.096 ************************************ 00:14:14.096 START TEST raid_state_function_test_sb 00:14:14.096 ************************************ 00:14:14.096 11:24:32 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:14.096 11:24:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:14.355 Process raid pid: 80866 00:14:14.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=80866 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 80866' 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 80866 /var/tmp/spdk-raid.sock 00:14:14.355 11:24:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:14.355 11:24:32 -- common/autotest_common.sh@829 -- # '[' -z 80866 ']' 00:14:14.355 11:24:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:14.355 11:24:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.355 11:24:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:14.355 11:24:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.355 11:24:32 -- common/autotest_common.sh@10 -- # set +x 00:14:14.355 [2024-11-26 11:24:32.378036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:14.355 [2024-11-26 11:24:32.378389] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.355 [2024-11-26 11:24:32.536552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.355 [2024-11-26 11:24:32.572755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.614 [2024-11-26 11:24:32.606175] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.182 11:24:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.182 11:24:33 -- common/autotest_common.sh@862 -- # return 0 00:14:15.182 11:24:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.441 [2024-11-26 11:24:33.446326] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.441 [2024-11-26 11:24:33.446384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.441 [2024-11-26 11:24:33.446425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.442 [2024-11-26 11:24:33.446439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.442 11:24:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.700 11:24:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.700 "name": "Existed_Raid", 00:14:15.700 "uuid": "99efce41-20de-4218-b770-d9105b459eb8", 00:14:15.700 "strip_size_kb": 0, 00:14:15.700 "state": "configuring", 00:14:15.700 "raid_level": "raid1", 00:14:15.700 "superblock": true, 00:14:15.700 "num_base_bdevs": 2, 00:14:15.700 "num_base_bdevs_discovered": 0, 00:14:15.700 "num_base_bdevs_operational": 2, 00:14:15.700 "base_bdevs_list": [ 00:14:15.700 { 00:14:15.700 "name": "BaseBdev1", 00:14:15.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.700 "is_configured": false, 00:14:15.700 "data_offset": 0, 00:14:15.700 "data_size": 0 00:14:15.700 }, 00:14:15.700 { 00:14:15.700 "name": "BaseBdev2", 00:14:15.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.700 "is_configured": false, 00:14:15.700 "data_offset": 0, 00:14:15.700 "data_size": 0 00:14:15.700 } 00:14:15.700 ] 00:14:15.700 }' 00:14:15.700 11:24:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.700 11:24:33 -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.959 11:24:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:16.218 [2024-11-26 11:24:34.234447] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.218 [2024-11-26 11:24:34.234493] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:16.218 11:24:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:16.476 [2024-11-26 11:24:34.486569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.476 [2024-11-26 11:24:34.486619] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.476 [2024-11-26 11:24:34.486661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.476 [2024-11-26 11:24:34.486675] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.476 11:24:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.735 [2024-11-26 11:24:34.741560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.735 BaseBdev1 00:14:16.735 11:24:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:16.735 11:24:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:16.735 11:24:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.735 11:24:34 -- common/autotest_common.sh@899 -- # local i 00:14:16.735 11:24:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.735 11:24:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.735 11:24:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.735 11:24:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.992 [ 00:14:16.992 { 00:14:16.992 "name": "BaseBdev1", 00:14:16.992 "aliases": [ 00:14:16.992 "b2936eab-7755-4002-bbcb-d5c2de316871" 00:14:16.992 ], 00:14:16.992 "product_name": "Malloc disk", 00:14:16.992 "block_size": 512, 00:14:16.992 "num_blocks": 65536, 00:14:16.992 "uuid": "b2936eab-7755-4002-bbcb-d5c2de316871", 00:14:16.992 "assigned_rate_limits": { 00:14:16.992 "rw_ios_per_sec": 0, 00:14:16.992 "rw_mbytes_per_sec": 0, 00:14:16.992 "r_mbytes_per_sec": 0, 00:14:16.992 "w_mbytes_per_sec": 0 00:14:16.992 }, 00:14:16.992 "claimed": true, 00:14:16.992 "claim_type": "exclusive_write", 00:14:16.992 "zoned": false, 00:14:16.992 "supported_io_types": { 00:14:16.992 "read": true, 00:14:16.992 "write": true, 00:14:16.992 "unmap": true, 00:14:16.992 "write_zeroes": true, 00:14:16.992 "flush": true, 00:14:16.992 "reset": true, 00:14:16.992 "compare": false, 00:14:16.992 "compare_and_write": false, 00:14:16.992 "abort": true, 00:14:16.992 "nvme_admin": false, 00:14:16.992 "nvme_io": false 00:14:16.992 }, 00:14:16.992 "memory_domains": [ 00:14:16.992 { 00:14:16.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.992 "dma_device_type": 2 00:14:16.992 } 00:14:16.992 ], 00:14:16.992 "driver_specific": {} 00:14:16.992 } 00:14:16.992 ] 00:14:16.992 11:24:35 -- 
common/autotest_common.sh@905 -- # return 0 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.992 11:24:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.250 11:24:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.250 "name": "Existed_Raid", 00:14:17.250 "uuid": "5f585cdf-71a5-4406-bc98-0f26a6c0ff6a", 00:14:17.250 "strip_size_kb": 0, 00:14:17.250 "state": "configuring", 00:14:17.250 "raid_level": "raid1", 00:14:17.250 "superblock": true, 00:14:17.250 "num_base_bdevs": 2, 00:14:17.250 "num_base_bdevs_discovered": 1, 00:14:17.250 "num_base_bdevs_operational": 2, 00:14:17.250 "base_bdevs_list": [ 00:14:17.250 { 00:14:17.250 "name": "BaseBdev1", 00:14:17.250 "uuid": "b2936eab-7755-4002-bbcb-d5c2de316871", 00:14:17.250 "is_configured": true, 00:14:17.250 "data_offset": 2048, 00:14:17.250 "data_size": 63488 00:14:17.250 }, 00:14:17.250 { 00:14:17.250 "name": "BaseBdev2", 00:14:17.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.250 "is_configured": false, 00:14:17.250 "data_offset": 0, 00:14:17.250 "data_size": 0 00:14:17.250 } 00:14:17.250 ] 00:14:17.250 }' 00:14:17.250 11:24:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.250 11:24:35 -- common/autotest_common.sh@10 -- # set +x 00:14:17.508 11:24:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.765 [2024-11-26 11:24:35.977948] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.765 [2024-11-26 11:24:35.978008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:17.765 11:24:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:17.765 11:24:35 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.069 11:24:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.327 BaseBdev1 00:14:18.327 11:24:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:18.327 11:24:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:18.327 11:24:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:18.327 11:24:36 -- common/autotest_common.sh@899 -- # local i 00:14:18.327 11:24:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:18.327 11:24:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:18.327 11:24:36 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.585 11:24:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.844 [ 00:14:18.844 { 00:14:18.844 "name": "BaseBdev1", 00:14:18.844 "aliases": [ 00:14:18.844 "b1d10c64-a972-42b7-9a14-31ed91f3ce5b" 00:14:18.844 ], 00:14:18.844 "product_name": "Malloc disk", 00:14:18.844 "block_size": 512, 00:14:18.844 "num_blocks": 65536, 00:14:18.844 "uuid": "b1d10c64-a972-42b7-9a14-31ed91f3ce5b", 00:14:18.844 "assigned_rate_limits": { 00:14:18.844 "rw_ios_per_sec": 0, 00:14:18.844 "rw_mbytes_per_sec": 0, 00:14:18.844 "r_mbytes_per_sec": 0, 00:14:18.844 "w_mbytes_per_sec": 0 00:14:18.844 }, 00:14:18.844 "claimed": false, 00:14:18.844 "zoned": false, 00:14:18.844 "supported_io_types": { 00:14:18.844 "read": true, 00:14:18.844 "write": true, 00:14:18.844 "unmap": true, 00:14:18.844 "write_zeroes": true, 00:14:18.844 "flush": true, 00:14:18.844 "reset": true, 00:14:18.844 "compare": false, 00:14:18.844 "compare_and_write": false, 00:14:18.844 "abort": true, 00:14:18.844 "nvme_admin": false, 00:14:18.844 "nvme_io": false 00:14:18.844 }, 00:14:18.844 "memory_domains": [ 00:14:18.844 { 00:14:18.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.844 "dma_device_type": 2 00:14:18.844 } 00:14:18.844 ], 00:14:18.844 "driver_specific": {} 00:14:18.844 } 00:14:18.844 ] 00:14:18.844 11:24:36 -- common/autotest_common.sh@905 -- # return 0 00:14:18.844 11:24:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:19.102 [2024-11-26 11:24:37.112593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.102 [2024-11-26 11:24:37.115095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.102 [2024-11-26 11:24:37.115143] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.102 11:24:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.360 11:24:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.360 "name": "Existed_Raid", 00:14:19.360 "uuid": "0b6d60c8-1fc7-4fbc-8c0a-dd33698cab7e", 00:14:19.360 "strip_size_kb": 0, 00:14:19.360 "state": "configuring", 
00:14:19.360 "raid_level": "raid1", 00:14:19.360 "superblock": true, 00:14:19.360 "num_base_bdevs": 2, 00:14:19.360 "num_base_bdevs_discovered": 1, 00:14:19.360 "num_base_bdevs_operational": 2, 00:14:19.360 "base_bdevs_list": [ 00:14:19.360 { 00:14:19.360 "name": "BaseBdev1", 00:14:19.360 "uuid": "b1d10c64-a972-42b7-9a14-31ed91f3ce5b", 00:14:19.360 "is_configured": true, 00:14:19.360 "data_offset": 2048, 00:14:19.360 "data_size": 63488 00:14:19.360 }, 00:14:19.360 { 00:14:19.360 "name": "BaseBdev2", 00:14:19.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.360 "is_configured": false, 00:14:19.360 "data_offset": 0, 00:14:19.360 "data_size": 0 00:14:19.360 } 00:14:19.360 ] 00:14:19.360 }' 00:14:19.360 11:24:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.360 11:24:37 -- common/autotest_common.sh@10 -- # set +x 00:14:19.618 11:24:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.886 [2024-11-26 11:24:37.896884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.886 [2024-11-26 11:24:37.897184] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:14:19.886 [2024-11-26 11:24:37.897207] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.886 [2024-11-26 11:24:37.897329] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:19.886 [2024-11-26 11:24:37.897711] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:14:19.886 [2024-11-26 11:24:37.897729] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:14:19.886 [2024-11-26 11:24:37.897879] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.886 BaseBdev2 00:14:19.886 11:24:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:19.886 11:24:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:19.886 11:24:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:19.886 11:24:37 -- common/autotest_common.sh@899 -- # local i 00:14:19.886 11:24:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:19.886 11:24:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:19.886 11:24:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.177 11:24:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.177 [ 00:14:20.177 { 00:14:20.177 "name": "BaseBdev2", 00:14:20.177 "aliases": [ 00:14:20.177 "f3691671-e8a4-4df7-9be7-f5c7306f71b3" 00:14:20.177 ], 00:14:20.177 "product_name": "Malloc disk", 00:14:20.177 "block_size": 512, 00:14:20.177 "num_blocks": 65536, 00:14:20.177 "uuid": "f3691671-e8a4-4df7-9be7-f5c7306f71b3", 00:14:20.177 "assigned_rate_limits": { 00:14:20.177 "rw_ios_per_sec": 0, 00:14:20.177 "rw_mbytes_per_sec": 0, 00:14:20.177 "r_mbytes_per_sec": 0, 00:14:20.177 "w_mbytes_per_sec": 0 00:14:20.177 }, 00:14:20.177 "claimed": true, 00:14:20.177 "claim_type": "exclusive_write", 00:14:20.177 "zoned": false, 00:14:20.177 "supported_io_types": { 00:14:20.177 "read": true, 00:14:20.177 "write": true, 00:14:20.177 "unmap": true, 00:14:20.177 "write_zeroes": true, 00:14:20.177 "flush": true, 00:14:20.177 "reset": true, 
00:14:20.177 "compare": false, 00:14:20.177 "compare_and_write": false, 00:14:20.177 "abort": true, 00:14:20.177 "nvme_admin": false, 00:14:20.177 "nvme_io": false 00:14:20.177 }, 00:14:20.177 "memory_domains": [ 00:14:20.177 { 00:14:20.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.177 "dma_device_type": 2 00:14:20.177 } 00:14:20.177 ], 00:14:20.177 "driver_specific": {} 00:14:20.177 } 00:14:20.177 ] 00:14:20.177 11:24:38 -- common/autotest_common.sh@905 -- # return 0 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.177 11:24:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.463 11:24:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.463 "name": "Existed_Raid", 00:14:20.464 "uuid": "0b6d60c8-1fc7-4fbc-8c0a-dd33698cab7e", 00:14:20.464 "strip_size_kb": 0, 00:14:20.464 "state": "online", 00:14:20.464 "raid_level": "raid1", 00:14:20.464 "superblock": true, 00:14:20.464 "num_base_bdevs": 2, 00:14:20.464 "num_base_bdevs_discovered": 2, 00:14:20.464 "num_base_bdevs_operational": 2, 00:14:20.464 "base_bdevs_list": [ 00:14:20.464 { 00:14:20.464 "name": "BaseBdev1", 00:14:20.464 "uuid": "b1d10c64-a972-42b7-9a14-31ed91f3ce5b", 00:14:20.464 "is_configured": true, 00:14:20.464 "data_offset": 2048, 00:14:20.464 "data_size": 63488 00:14:20.464 }, 00:14:20.464 { 00:14:20.464 "name": "BaseBdev2", 00:14:20.464 "uuid": "f3691671-e8a4-4df7-9be7-f5c7306f71b3", 00:14:20.464 "is_configured": true, 00:14:20.464 "data_offset": 2048, 00:14:20.464 "data_size": 63488 00:14:20.464 } 00:14:20.464 ] 00:14:20.464 }' 00:14:20.464 11:24:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.464 11:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:20.723 11:24:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:20.982 [2024-11-26 11:24:39.117423] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.982 
11:24:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.982 11:24:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.242 11:24:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.242 "name": "Existed_Raid", 00:14:21.242 "uuid": "0b6d60c8-1fc7-4fbc-8c0a-dd33698cab7e", 00:14:21.242 "strip_size_kb": 0, 00:14:21.242 "state": "online", 00:14:21.242 "raid_level": "raid1", 00:14:21.242 "superblock": true, 00:14:21.242 "num_base_bdevs": 2, 00:14:21.242 "num_base_bdevs_discovered": 1, 00:14:21.242 "num_base_bdevs_operational": 1, 00:14:21.242 "base_bdevs_list": [ 00:14:21.242 { 00:14:21.242 "name": null, 00:14:21.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.242 "is_configured": false, 00:14:21.242 "data_offset": 2048, 00:14:21.242 "data_size": 63488 00:14:21.242 }, 00:14:21.242 { 00:14:21.242 "name": "BaseBdev2", 00:14:21.242 "uuid": "f3691671-e8a4-4df7-9be7-f5c7306f71b3", 00:14:21.242 "is_configured": true, 00:14:21.242 "data_offset": 2048, 00:14:21.242 "data_size": 63488 00:14:21.242 } 00:14:21.242 ] 00:14:21.242 }' 00:14:21.242 11:24:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.242 11:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:21.501 11:24:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:21.501 11:24:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:21.501 11:24:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.501 11:24:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:21.759 11:24:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:21.759 11:24:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.759 11:24:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:22.018 [2024-11-26 11:24:40.080884] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.018 [2024-11-26 11:24:40.080921] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.018 [2024-11-26 11:24:40.081048] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.018 [2024-11-26 11:24:40.087828] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.018 [2024-11-26 11:24:40.087868] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:14:22.018 11:24:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:22.018 11:24:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:22.018 11:24:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:22.018 11:24:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.277 11:24:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:22.277 11:24:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:22.277 11:24:40 -- bdev/bdev_raid.sh@287 -- # killprocess 80866 00:14:22.277 11:24:40 -- common/autotest_common.sh@936 -- # '[' -z 80866 ']' 00:14:22.277 11:24:40 -- common/autotest_common.sh@940 -- # kill -0 80866 00:14:22.277 11:24:40 -- common/autotest_common.sh@941 -- # uname 00:14:22.277 11:24:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.277 11:24:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80866 00:14:22.277 killing process with pid 80866 00:14:22.277 11:24:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:22.277 11:24:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:22.277 11:24:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80866' 00:14:22.277 11:24:40 -- common/autotest_common.sh@955 -- # kill 80866 00:14:22.277 [2024-11-26 11:24:40.346011] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.277 11:24:40 -- common/autotest_common.sh@960 -- # wait 80866 00:14:22.277 [2024-11-26 11:24:40.346089] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:22.536 00:14:22.536 real 0m8.214s 00:14:22.536 user 0m14.354s 00:14:22.536 sys 0m1.211s 00:14:22.536 11:24:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:22.536 ************************************ 00:14:22.536 END TEST raid_state_function_test_sb 00:14:22.536 ************************************ 00:14:22.536 11:24:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:22.536 11:24:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:22.536 11:24:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.536 11:24:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.536 ************************************ 00:14:22.536 START TEST raid_superblock_test 00:14:22.536 ************************************ 00:14:22.536 11:24:40 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@357 -- # raid_pid=81155 00:14:22.536 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@358 -- # waitforlisten 81155 /var/tmp/spdk-raid.sock 00:14:22.536 11:24:40 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:22.536 11:24:40 -- common/autotest_common.sh@829 -- # '[' -z 81155 ']' 00:14:22.536 11:24:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:22.536 11:24:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.536 11:24:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:22.536 11:24:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.536 11:24:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.536 [2024-11-26 11:24:40.642794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:22.536 [2024-11-26 11:24:40.642988] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81155 ] 00:14:22.796 [2024-11-26 11:24:40.794163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.796 [2024-11-26 11:24:40.829875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.796 [2024-11-26 11:24:40.862789] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.363 11:24:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.363 11:24:41 -- common/autotest_common.sh@862 -- # return 0 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.363 11:24:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:23.622 malloc1 00:14:23.622 11:24:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:23.880 [2024-11-26 11:24:42.049522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:23.880 [2024-11-26 11:24:42.049614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.880 [2024-11-26 11:24:42.049650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:23.880 [2024-11-26 11:24:42.049677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.880 [2024-11-26 11:24:42.052584] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.880 [2024-11-26 11:24:42.052754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:23.880 pt1 00:14:23.880 11:24:42 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.880 11:24:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:24.139 malloc2 00:14:24.140 11:24:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.398 [2024-11-26 11:24:42.480324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.398 [2024-11-26 11:24:42.480590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.398 [2024-11-26 11:24:42.480642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:24.398 [2024-11-26 11:24:42.480658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.398 [2024-11-26 11:24:42.483214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.398 [2024-11-26 11:24:42.483272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.398 pt2 00:14:24.398 11:24:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:24.398 11:24:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:24.398 11:24:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:24.655 [2024-11-26 11:24:42.696452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:24.655 [2024-11-26 11:24:42.698702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.655 [2024-11-26 11:24:42.698934] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:14:24.655 [2024-11-26 11:24:42.698956] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.655 [2024-11-26 11:24:42.699097] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:24.655 [2024-11-26 11:24:42.699482] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:14:24.655 [2024-11-26 11:24:42.699506] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:14:24.655 [2024-11-26 11:24:42.699651] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.655 11:24:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.914 11:24:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.914 "name": "raid_bdev1", 00:14:24.914 "uuid": "fdf194b5-c94d-4422-80c1-87e0c81e7703", 00:14:24.914 "strip_size_kb": 0, 00:14:24.914 "state": "online", 00:14:24.914 "raid_level": "raid1", 00:14:24.914 "superblock": true, 00:14:24.914 "num_base_bdevs": 2, 00:14:24.914 "num_base_bdevs_discovered": 2, 00:14:24.914 "num_base_bdevs_operational": 2, 00:14:24.914 "base_bdevs_list": [ 00:14:24.914 { 00:14:24.914 "name": "pt1", 00:14:24.914 "uuid": "08181799-7739-5b41-8a96-42ccfa4a034b", 00:14:24.914 "is_configured": true, 00:14:24.914 "data_offset": 2048, 00:14:24.914 "data_size": 63488 00:14:24.914 }, 00:14:24.914 { 00:14:24.914 "name": "pt2", 00:14:24.914 "uuid": "82d8bcaa-1e17-5aef-a9ea-b7356e0cee07", 00:14:24.914 "is_configured": true, 00:14:24.914 "data_offset": 2048, 00:14:24.914 "data_size": 63488 00:14:24.914 } 00:14:24.914 ] 00:14:24.914 }' 00:14:24.914 11:24:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.914 11:24:42 -- common/autotest_common.sh@10 -- # set +x 00:14:25.173 11:24:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:25.173 11:24:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:25.431 [2024-11-26 11:24:43.437138] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.431 11:24:43 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fdf194b5-c94d-4422-80c1-87e0c81e7703 00:14:25.431 11:24:43 -- bdev/bdev_raid.sh@380 -- # '[' -z fdf194b5-c94d-4422-80c1-87e0c81e7703 ']' 00:14:25.431 11:24:43 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.431 [2024-11-26 11:24:43.640690] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.431 [2024-11-26 11:24:43.640733] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.431 [2024-11-26 11:24:43.640818] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.431 [2024-11-26 11:24:43.640922] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.432 [2024-11-26 11:24:43.640956] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:14:25.432 11:24:43 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:25.432 11:24:43 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.690 11:24:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:25.690 11:24:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:25.690 11:24:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.690 11:24:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:14:25.949 11:24:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.949 11:24:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:26.207 11:24:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:26.207 11:24:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:26.466 11:24:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:26.466 11:24:44 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.466 11:24:44 -- common/autotest_common.sh@650 -- # local es=0 00:14:26.466 11:24:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.466 11:24:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.466 11:24:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.466 11:24:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.466 11:24:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.466 11:24:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.466 11:24:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.466 11:24:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.466 11:24:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:26.466 11:24:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.725 [2024-11-26 11:24:44.837117] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:26.726 [2024-11-26 11:24:44.839477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:26.726 [2024-11-26 11:24:44.839593] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:26.726 [2024-11-26 11:24:44.839664] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:26.726 [2024-11-26 11:24:44.839694] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.726 [2024-11-26 11:24:44.839707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:14:26.726 request: 00:14:26.726 { 00:14:26.726 "name": "raid_bdev1", 00:14:26.726 "raid_level": "raid1", 00:14:26.726 "base_bdevs": [ 00:14:26.726 "malloc1", 00:14:26.726 "malloc2" 00:14:26.726 ], 00:14:26.726 "superblock": false, 00:14:26.726 "method": "bdev_raid_create", 00:14:26.726 "req_id": 1 00:14:26.726 } 00:14:26.726 Got JSON-RPC error response 00:14:26.726 response: 00:14:26.726 { 00:14:26.726 "code": -17, 00:14:26.726 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:26.726 } 00:14:26.726 11:24:44 -- common/autotest_common.sh@653 -- # es=1 00:14:26.726 11:24:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:14:26.726 11:24:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.726 11:24:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.726 11:24:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.726 11:24:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:26.984 11:24:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:26.984 11:24:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:26.984 11:24:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.243 [2024-11-26 11:24:45.325161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.243 [2024-11-26 11:24:45.325471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.243 [2024-11-26 11:24:45.325513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:14:27.243 [2024-11-26 11:24:45.325528] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.243 [2024-11-26 11:24:45.328216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.244 [2024-11-26 11:24:45.328271] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.244 [2024-11-26 11:24:45.328370] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:27.244 [2024-11-26 11:24:45.328412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.244 pt1 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.244 11:24:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.502 11:24:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.502 "name": "raid_bdev1", 00:14:27.502 "uuid": "fdf194b5-c94d-4422-80c1-87e0c81e7703", 00:14:27.502 "strip_size_kb": 0, 00:14:27.502 "state": "configuring", 00:14:27.502 "raid_level": "raid1", 00:14:27.502 "superblock": true, 00:14:27.502 "num_base_bdevs": 2, 00:14:27.502 "num_base_bdevs_discovered": 1, 00:14:27.502 "num_base_bdevs_operational": 2, 00:14:27.502 "base_bdevs_list": [ 00:14:27.502 { 00:14:27.502 "name": "pt1", 00:14:27.502 "uuid": "08181799-7739-5b41-8a96-42ccfa4a034b", 00:14:27.502 "is_configured": true, 00:14:27.502 "data_offset": 2048, 00:14:27.502 "data_size": 63488 00:14:27.502 }, 00:14:27.502 { 00:14:27.502 "name": null, 00:14:27.502 "uuid": "82d8bcaa-1e17-5aef-a9ea-b7356e0cee07", 00:14:27.502 "is_configured": false, 
00:14:27.502 "data_offset": 2048, 00:14:27.502 "data_size": 63488 00:14:27.502 } 00:14:27.502 ] 00:14:27.502 }' 00:14:27.502 11:24:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.502 11:24:45 -- common/autotest_common.sh@10 -- # set +x 00:14:27.760 11:24:45 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:27.760 11:24:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:27.760 11:24:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:27.760 11:24:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.018 [2024-11-26 11:24:46.069419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.018 [2024-11-26 11:24:46.069705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.018 [2024-11-26 11:24:46.069753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:14:28.018 [2024-11-26 11:24:46.069768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.018 [2024-11-26 11:24:46.070331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.018 [2024-11-26 11:24:46.070356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.018 [2024-11-26 11:24:46.070434] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:28.018 [2024-11-26 11:24:46.070475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.018 [2024-11-26 11:24:46.070601] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:14:28.018 [2024-11-26 11:24:46.070615] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.018 [2024-11-26 11:24:46.070712] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:28.018 [2024-11-26 11:24:46.071109] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:14:28.018 [2024-11-26 11:24:46.071137] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:14:28.018 [2024-11-26 11:24:46.071257] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.018 pt2 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.019 11:24:46 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.277 11:24:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.277 "name": "raid_bdev1", 00:14:28.277 "uuid": "fdf194b5-c94d-4422-80c1-87e0c81e7703", 00:14:28.277 "strip_size_kb": 0, 00:14:28.277 "state": "online", 00:14:28.277 "raid_level": "raid1", 00:14:28.277 "superblock": true, 00:14:28.277 "num_base_bdevs": 2, 00:14:28.277 "num_base_bdevs_discovered": 2, 00:14:28.277 "num_base_bdevs_operational": 2, 00:14:28.277 "base_bdevs_list": [ 00:14:28.277 { 00:14:28.277 "name": "pt1", 00:14:28.277 "uuid": "08181799-7739-5b41-8a96-42ccfa4a034b", 00:14:28.277 "is_configured": true, 00:14:28.277 "data_offset": 2048, 00:14:28.277 "data_size": 63488 00:14:28.277 }, 00:14:28.277 { 00:14:28.277 "name": "pt2", 00:14:28.277 "uuid": "82d8bcaa-1e17-5aef-a9ea-b7356e0cee07", 00:14:28.277 "is_configured": true, 00:14:28.277 "data_offset": 2048, 00:14:28.277 "data_size": 63488 00:14:28.277 } 00:14:28.277 ] 00:14:28.277 }' 00:14:28.277 11:24:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.277 11:24:46 -- common/autotest_common.sh@10 -- # set +x 00:14:28.535 11:24:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:28.536 11:24:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:28.794 [2024-11-26 11:24:46.849996] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.794 11:24:46 -- bdev/bdev_raid.sh@430 -- # '[' fdf194b5-c94d-4422-80c1-87e0c81e7703 '!=' fdf194b5-c94d-4422-80c1-87e0c81e7703 ']' 00:14:28.794 11:24:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:28.794 11:24:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:28.794 11:24:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:28.794 11:24:46 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:29.053 [2024-11-26 11:24:47.105818] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.053 11:24:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.311 11:24:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.311 "name": "raid_bdev1", 00:14:29.311 "uuid": "fdf194b5-c94d-4422-80c1-87e0c81e7703", 00:14:29.311 "strip_size_kb": 0, 00:14:29.311 "state": "online", 00:14:29.311 "raid_level": "raid1", 00:14:29.311 "superblock": true, 00:14:29.311 "num_base_bdevs": 2, 00:14:29.311 "num_base_bdevs_discovered": 1, 00:14:29.311 "num_base_bdevs_operational": 1, 00:14:29.311 "base_bdevs_list": [ 00:14:29.311 { 
00:14:29.311 "name": null, 00:14:29.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.311 "is_configured": false, 00:14:29.311 "data_offset": 2048, 00:14:29.311 "data_size": 63488 00:14:29.311 }, 00:14:29.311 { 00:14:29.311 "name": "pt2", 00:14:29.311 "uuid": "82d8bcaa-1e17-5aef-a9ea-b7356e0cee07", 00:14:29.311 "is_configured": true, 00:14:29.311 "data_offset": 2048, 00:14:29.311 "data_size": 63488 00:14:29.311 } 00:14:29.311 ] 00:14:29.311 }' 00:14:29.311 11:24:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.311 11:24:47 -- common/autotest_common.sh@10 -- # set +x 00:14:29.570 11:24:47 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:29.827 [2024-11-26 11:24:47.917964] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.827 [2024-11-26 11:24:47.917998] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.827 [2024-11-26 11:24:47.918080] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.827 [2024-11-26 11:24:47.918139] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.828 [2024-11-26 11:24:47.918156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:14:29.828 11:24:47 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.828 11:24:47 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:14:30.086 11:24:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:14:30.086 11:24:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:14:30.086 11:24:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:14:30.086 11:24:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:30.086 11:24:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:30.344 11:24:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:30.345 11:24:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:30.345 11:24:48 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:14:30.345 11:24:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:30.345 11:24:48 -- bdev/bdev_raid.sh@462 -- # i=1 00:14:30.345 11:24:48 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.603 [2024-11-26 11:24:48.638160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.603 [2024-11-26 11:24:48.638253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.603 [2024-11-26 11:24:48.638285] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:14:30.603 [2024-11-26 11:24:48.638302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.603 [2024-11-26 11:24:48.640881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.603 [2024-11-26 11:24:48.640964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.603 [2024-11-26 11:24:48.641072] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:30.603 [2024-11-26 11:24:48.641121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is 
claimed 00:14:30.603 [2024-11-26 11:24:48.641223] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:14:30.603 [2024-11-26 11:24:48.641246] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.603 [2024-11-26 11:24:48.641321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:14:30.603 pt2 00:14:30.603 [2024-11-26 11:24:48.641676] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:14:30.603 [2024-11-26 11:24:48.641699] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:14:30.603 [2024-11-26 11:24:48.641817] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.603 11:24:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.861 11:24:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.861 "name": "raid_bdev1", 00:14:30.861 "uuid": "fdf194b5-c94d-4422-80c1-87e0c81e7703", 00:14:30.861 "strip_size_kb": 0, 00:14:30.861 "state": "online", 00:14:30.861 "raid_level": "raid1", 00:14:30.861 "superblock": true, 00:14:30.861 "num_base_bdevs": 2, 00:14:30.861 "num_base_bdevs_discovered": 1, 00:14:30.861 "num_base_bdevs_operational": 1, 00:14:30.861 "base_bdevs_list": [ 00:14:30.861 { 00:14:30.861 "name": null, 00:14:30.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.861 "is_configured": false, 00:14:30.861 "data_offset": 2048, 00:14:30.861 "data_size": 63488 00:14:30.861 }, 00:14:30.861 { 00:14:30.861 "name": "pt2", 00:14:30.861 "uuid": "82d8bcaa-1e17-5aef-a9ea-b7356e0cee07", 00:14:30.861 "is_configured": true, 00:14:30.861 "data_offset": 2048, 00:14:30.861 "data_size": 63488 00:14:30.861 } 00:14:30.861 ] 00:14:30.862 }' 00:14:30.862 11:24:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.862 11:24:48 -- common/autotest_common.sh@10 -- # set +x 00:14:31.133 11:24:49 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:14:31.133 11:24:49 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:31.133 11:24:49 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:14:31.133 [2024-11-26 11:24:49.346618] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.399 11:24:49 -- bdev/bdev_raid.sh@506 -- # '[' fdf194b5-c94d-4422-80c1-87e0c81e7703 '!=' fdf194b5-c94d-4422-80c1-87e0c81e7703 ']' 00:14:31.399 11:24:49 -- bdev/bdev_raid.sh@511 -- # killprocess 81155 00:14:31.399 11:24:49 -- 
common/autotest_common.sh@936 -- # '[' -z 81155 ']' 00:14:31.399 11:24:49 -- common/autotest_common.sh@940 -- # kill -0 81155 00:14:31.399 11:24:49 -- common/autotest_common.sh@941 -- # uname 00:14:31.399 11:24:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.399 11:24:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81155 00:14:31.399 killing process with pid 81155 00:14:31.399 11:24:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.399 11:24:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.399 11:24:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81155' 00:14:31.399 11:24:49 -- common/autotest_common.sh@955 -- # kill 81155 00:14:31.399 [2024-11-26 11:24:49.399514] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.399 11:24:49 -- common/autotest_common.sh@960 -- # wait 81155 00:14:31.399 [2024-11-26 11:24:49.399599] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.399 [2024-11-26 11:24:49.399664] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.399 [2024-11-26 11:24:49.399680] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:14:31.399 [2024-11-26 11:24:49.414696] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.399 11:24:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:31.399 00:14:31.399 real 0m9.017s 00:14:31.399 user 0m15.799s 00:14:31.399 sys 0m1.393s 00:14:31.399 11:24:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.399 11:24:49 -- common/autotest_common.sh@10 -- # set +x 00:14:31.399 ************************************ 00:14:31.399 END TEST raid_superblock_test 00:14:31.399 ************************************ 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:31.657 11:24:49 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:31.657 11:24:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.657 11:24:49 -- common/autotest_common.sh@10 -- # set +x 00:14:31.657 ************************************ 00:14:31.657 START TEST raid_state_function_test 00:14:31.657 ************************************ 00:14:31.657 11:24:49 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 
00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=81465 00:14:31.657 Process raid pid: 81465 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 81465' 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 81465 /var/tmp/spdk-raid.sock 00:14:31.657 11:24:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:31.657 11:24:49 -- common/autotest_common.sh@829 -- # '[' -z 81465 ']' 00:14:31.657 11:24:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.657 11:24:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.657 11:24:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.657 11:24:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.657 11:24:49 -- common/autotest_common.sh@10 -- # set +x 00:14:31.657 [2024-11-26 11:24:49.724792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
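The trace above shows the harness bootstrapping the raid0 state-function test: the base bdev names BaseBdev1..BaseBdev3 are generated by a counting loop into the base_bdevs array, then a dedicated bdev_svc app is launched on a private RPC socket with -L bdev_raid debug logging and the harness waits for it to start listening. A minimal sketch of that setup pattern follows; the names, socket, paths, and launch flags are taken from the xtrace, while the readiness-polling loop is an assumption standing in for the waitforlisten helper, whose body is not shown in this log.

    # Derive base bdev names exactly as the (( i = 1 )) / echo BaseBdevN trace does.
    num_base_bdevs=3
    base_bdevs=()
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")   # -> BaseBdev1 BaseBdev2 BaseBdev3
    done

    # Start the bare bdev_svc app on its own RPC socket, as in the log.
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!

    # Assumed stand-in for the harness's waitforlisten: poll until RPCs are served.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.1
    done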
00:14:31.657 [2024-11-26 11:24:49.724975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.657 [2024-11-26 11:24:49.878197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.916 [2024-11-26 11:24:49.915115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.916 [2024-11-26 11:24:49.948347] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.482 11:24:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.482 11:24:50 -- common/autotest_common.sh@862 -- # return 0 00:14:32.482 11:24:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:32.739 [2024-11-26 11:24:50.933051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.739 [2024-11-26 11:24:50.933123] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.739 [2024-11-26 11:24:50.933142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.740 [2024-11-26 11:24:50.933156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.740 [2024-11-26 11:24:50.933169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.740 [2024-11-26 11:24:50.933183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.740 11:24:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.998 11:24:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:32.998 "name": "Existed_Raid", 00:14:32.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.998 "strip_size_kb": 64, 00:14:32.998 "state": "configuring", 00:14:32.998 "raid_level": "raid0", 00:14:32.998 "superblock": false, 00:14:32.998 "num_base_bdevs": 3, 00:14:32.998 "num_base_bdevs_discovered": 0, 00:14:32.998 "num_base_bdevs_operational": 3, 00:14:32.998 "base_bdevs_list": [ 00:14:32.998 { 00:14:32.998 "name": "BaseBdev1", 00:14:32.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.998 "is_configured": false, 00:14:32.998 "data_offset": 0, 00:14:32.998 "data_size": 0 00:14:32.998 }, 00:14:32.998 { 00:14:32.998 "name": "BaseBdev2", 00:14:32.998 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:32.998 "is_configured": false, 00:14:32.998 "data_offset": 0, 00:14:32.998 "data_size": 0 00:14:32.998 }, 00:14:32.998 { 00:14:32.998 "name": "BaseBdev3", 00:14:32.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.998 "is_configured": false, 00:14:32.998 "data_offset": 0, 00:14:32.998 "data_size": 0 00:14:32.998 } 00:14:32.998 ] 00:14:32.998 }' 00:14:32.998 11:24:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:32.998 11:24:51 -- common/autotest_common.sh@10 -- # set +x 00:14:33.565 11:24:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:33.565 [2024-11-26 11:24:51.777223] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.565 [2024-11-26 11:24:51.777269] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:33.565 11:24:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:33.822 [2024-11-26 11:24:51.985281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.823 [2024-11-26 11:24:51.985330] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.823 [2024-11-26 11:24:51.985348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.823 [2024-11-26 11:24:51.985361] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.823 [2024-11-26 11:24:51.985371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.823 [2024-11-26 11:24:51.985382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.823 11:24:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.081 [2024-11-26 11:24:52.220031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.081 BaseBdev1 00:14:34.081 11:24:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:34.081 11:24:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:34.081 11:24:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:34.081 11:24:52 -- common/autotest_common.sh@899 -- # local i 00:14:34.081 11:24:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:34.081 11:24:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:34.081 11:24:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:34.338 11:24:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.596 [ 00:14:34.596 { 00:14:34.596 "name": "BaseBdev1", 00:14:34.596 "aliases": [ 00:14:34.596 "601e960f-9c44-405a-af58-8cb3af1df5d1" 00:14:34.596 ], 00:14:34.596 "product_name": "Malloc disk", 00:14:34.596 "block_size": 512, 00:14:34.596 "num_blocks": 65536, 00:14:34.596 "uuid": "601e960f-9c44-405a-af58-8cb3af1df5d1", 00:14:34.596 "assigned_rate_limits": { 00:14:34.596 "rw_ios_per_sec": 0, 00:14:34.596 "rw_mbytes_per_sec": 0, 00:14:34.596 "r_mbytes_per_sec": 0, 00:14:34.596 "w_mbytes_per_sec": 0 
00:14:34.596 }, 00:14:34.596 "claimed": true, 00:14:34.596 "claim_type": "exclusive_write", 00:14:34.596 "zoned": false, 00:14:34.596 "supported_io_types": { 00:14:34.596 "read": true, 00:14:34.596 "write": true, 00:14:34.596 "unmap": true, 00:14:34.596 "write_zeroes": true, 00:14:34.596 "flush": true, 00:14:34.596 "reset": true, 00:14:34.596 "compare": false, 00:14:34.596 "compare_and_write": false, 00:14:34.596 "abort": true, 00:14:34.596 "nvme_admin": false, 00:14:34.596 "nvme_io": false 00:14:34.596 }, 00:14:34.596 "memory_domains": [ 00:14:34.596 { 00:14:34.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.596 "dma_device_type": 2 00:14:34.596 } 00:14:34.596 ], 00:14:34.596 "driver_specific": {} 00:14:34.596 } 00:14:34.596 ] 00:14:34.596 11:24:52 -- common/autotest_common.sh@905 -- # return 0 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.596 11:24:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.855 11:24:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.855 "name": "Existed_Raid", 00:14:34.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.855 "strip_size_kb": 64, 00:14:34.855 "state": "configuring", 00:14:34.855 "raid_level": "raid0", 00:14:34.855 "superblock": false, 00:14:34.855 "num_base_bdevs": 3, 00:14:34.855 "num_base_bdevs_discovered": 1, 00:14:34.855 "num_base_bdevs_operational": 3, 00:14:34.855 "base_bdevs_list": [ 00:14:34.855 { 00:14:34.855 "name": "BaseBdev1", 00:14:34.855 "uuid": "601e960f-9c44-405a-af58-8cb3af1df5d1", 00:14:34.855 "is_configured": true, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 65536 00:14:34.855 }, 00:14:34.855 { 00:14:34.855 "name": "BaseBdev2", 00:14:34.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.855 "is_configured": false, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 0 00:14:34.855 }, 00:14:34.855 { 00:14:34.855 "name": "BaseBdev3", 00:14:34.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.855 "is_configured": false, 00:14:34.855 "data_offset": 0, 00:14:34.855 "data_size": 0 00:14:34.855 } 00:14:34.855 ] 00:14:34.855 }' 00:14:34.855 11:24:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.855 11:24:52 -- common/autotest_common.sh@10 -- # set +x 00:14:35.113 11:24:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.371 [2024-11-26 11:24:53.512549] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.371 [2024-11-26 11:24:53.512646] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:14:35.371 11:24:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:35.371 11:24:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:35.629 [2024-11-26 11:24:53.760755] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.629 [2024-11-26 11:24:53.763004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.629 [2024-11-26 11:24:53.763061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.629 [2024-11-26 11:24:53.763077] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.629 [2024-11-26 11:24:53.763089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.629 11:24:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.888 11:24:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.888 "name": "Existed_Raid", 00:14:35.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.888 "strip_size_kb": 64, 00:14:35.888 "state": "configuring", 00:14:35.888 "raid_level": "raid0", 00:14:35.888 "superblock": false, 00:14:35.888 "num_base_bdevs": 3, 00:14:35.888 "num_base_bdevs_discovered": 1, 00:14:35.888 "num_base_bdevs_operational": 3, 00:14:35.888 "base_bdevs_list": [ 00:14:35.888 { 00:14:35.888 "name": "BaseBdev1", 00:14:35.888 "uuid": "601e960f-9c44-405a-af58-8cb3af1df5d1", 00:14:35.888 "is_configured": true, 00:14:35.888 "data_offset": 0, 00:14:35.888 "data_size": 65536 00:14:35.888 }, 00:14:35.888 { 00:14:35.888 "name": "BaseBdev2", 00:14:35.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.888 "is_configured": false, 00:14:35.888 "data_offset": 0, 00:14:35.888 "data_size": 0 00:14:35.888 }, 00:14:35.888 { 00:14:35.888 "name": "BaseBdev3", 00:14:35.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.888 "is_configured": false, 00:14:35.888 "data_offset": 0, 00:14:35.888 "data_size": 0 00:14:35.888 } 00:14:35.888 ] 00:14:35.888 }' 00:14:35.888 11:24:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.888 11:24:54 -- common/autotest_common.sh@10 -- # set +x 00:14:36.147 11:24:54 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:36.405 [2024-11-26 11:24:54.526764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.405 BaseBdev2 00:14:36.405 11:24:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:36.405 11:24:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:36.405 11:24:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.405 11:24:54 -- common/autotest_common.sh@899 -- # local i 00:14:36.405 11:24:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.406 11:24:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.406 11:24:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.664 11:24:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.923 [ 00:14:36.923 { 00:14:36.923 "name": "BaseBdev2", 00:14:36.923 "aliases": [ 00:14:36.923 "04616bdf-a4ff-4027-8b3e-a6cb2e38f080" 00:14:36.923 ], 00:14:36.923 "product_name": "Malloc disk", 00:14:36.923 "block_size": 512, 00:14:36.923 "num_blocks": 65536, 00:14:36.923 "uuid": "04616bdf-a4ff-4027-8b3e-a6cb2e38f080", 00:14:36.923 "assigned_rate_limits": { 00:14:36.923 "rw_ios_per_sec": 0, 00:14:36.923 "rw_mbytes_per_sec": 0, 00:14:36.923 "r_mbytes_per_sec": 0, 00:14:36.923 "w_mbytes_per_sec": 0 00:14:36.923 }, 00:14:36.923 "claimed": true, 00:14:36.923 "claim_type": "exclusive_write", 00:14:36.923 "zoned": false, 00:14:36.923 "supported_io_types": { 00:14:36.923 "read": true, 00:14:36.923 "write": true, 00:14:36.923 "unmap": true, 00:14:36.923 "write_zeroes": true, 00:14:36.923 "flush": true, 00:14:36.923 "reset": true, 00:14:36.923 "compare": false, 00:14:36.923 "compare_and_write": false, 00:14:36.923 "abort": true, 00:14:36.923 "nvme_admin": false, 00:14:36.923 "nvme_io": false 00:14:36.923 }, 00:14:36.923 "memory_domains": [ 00:14:36.923 { 00:14:36.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.923 "dma_device_type": 2 00:14:36.923 } 00:14:36.923 ], 00:14:36.923 "driver_specific": {} 00:14:36.923 } 00:14:36.923 ] 00:14:36.923 11:24:54 -- common/autotest_common.sh@905 -- # return 0 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.923 11:24:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
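Each verify_raid_bdev_state call in this log follows one pattern: dump every raid bdev over the test socket with bdev_raid_get_bdevs all, pick out the bdev under test with jq, and compare its fields against the expected values passed in. Only the dump and the jq filter appear verbatim in the trace; the assertions below are an assumed reconstruction of the checks the helper performs on the captured raid_bdev_info.

    # Dump-and-filter step as traced; the field checks are assumptions.
    expected_state=configuring raid_level=raid0 strip_size=64
    tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all)
    raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<<"$tmp")
    [ "$(jq -r '.state' <<<"$raid_bdev_info")" = "$expected_state" ]
    [ "$(jq -r '.raid_level' <<<"$raid_bdev_info")" = "$raid_level" ]
    [ "$(jq -r '.strip_size_kb' <<<"$raid_bdev_info")" -eq "$strip_size" ]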
00:14:37.192 11:24:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.192 "name": "Existed_Raid", 00:14:37.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.192 "strip_size_kb": 64, 00:14:37.192 "state": "configuring", 00:14:37.192 "raid_level": "raid0", 00:14:37.192 "superblock": false, 00:14:37.192 "num_base_bdevs": 3, 00:14:37.192 "num_base_bdevs_discovered": 2, 00:14:37.192 "num_base_bdevs_operational": 3, 00:14:37.192 "base_bdevs_list": [ 00:14:37.192 { 00:14:37.192 "name": "BaseBdev1", 00:14:37.192 "uuid": "601e960f-9c44-405a-af58-8cb3af1df5d1", 00:14:37.192 "is_configured": true, 00:14:37.192 "data_offset": 0, 00:14:37.192 "data_size": 65536 00:14:37.192 }, 00:14:37.192 { 00:14:37.192 "name": "BaseBdev2", 00:14:37.192 "uuid": "04616bdf-a4ff-4027-8b3e-a6cb2e38f080", 00:14:37.192 "is_configured": true, 00:14:37.192 "data_offset": 0, 00:14:37.192 "data_size": 65536 00:14:37.192 }, 00:14:37.192 { 00:14:37.192 "name": "BaseBdev3", 00:14:37.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.192 "is_configured": false, 00:14:37.192 "data_offset": 0, 00:14:37.192 "data_size": 0 00:14:37.192 } 00:14:37.192 ] 00:14:37.192 }' 00:14:37.192 11:24:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.192 11:24:55 -- common/autotest_common.sh@10 -- # set +x 00:14:37.466 11:24:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:37.724 [2024-11-26 11:24:55.808459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.724 [2024-11-26 11:24:55.808539] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:37.724 [2024-11-26 11:24:55.808553] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:37.724 [2024-11-26 11:24:55.808693] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:37.724 [2024-11-26 11:24:55.809114] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:37.724 [2024-11-26 11:24:55.809159] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:14:37.724 [2024-11-26 11:24:55.809394] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.724 BaseBdev3 00:14:37.724 11:24:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:37.724 11:24:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:37.724 11:24:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:37.724 11:24:55 -- common/autotest_common.sh@899 -- # local i 00:14:37.724 11:24:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:37.724 11:24:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:37.724 11:24:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.983 11:24:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.241 [ 00:14:38.241 { 00:14:38.241 "name": "BaseBdev3", 00:14:38.241 "aliases": [ 00:14:38.241 "25c5513d-b2f0-4e1c-b4a6-4eb12969032a" 00:14:38.241 ], 00:14:38.241 "product_name": "Malloc disk", 00:14:38.241 "block_size": 512, 00:14:38.241 "num_blocks": 65536, 00:14:38.241 "uuid": "25c5513d-b2f0-4e1c-b4a6-4eb12969032a", 00:14:38.241 "assigned_rate_limits": { 00:14:38.241 
"rw_ios_per_sec": 0, 00:14:38.241 "rw_mbytes_per_sec": 0, 00:14:38.241 "r_mbytes_per_sec": 0, 00:14:38.241 "w_mbytes_per_sec": 0 00:14:38.241 }, 00:14:38.241 "claimed": true, 00:14:38.241 "claim_type": "exclusive_write", 00:14:38.241 "zoned": false, 00:14:38.242 "supported_io_types": { 00:14:38.242 "read": true, 00:14:38.242 "write": true, 00:14:38.242 "unmap": true, 00:14:38.242 "write_zeroes": true, 00:14:38.242 "flush": true, 00:14:38.242 "reset": true, 00:14:38.242 "compare": false, 00:14:38.242 "compare_and_write": false, 00:14:38.242 "abort": true, 00:14:38.242 "nvme_admin": false, 00:14:38.242 "nvme_io": false 00:14:38.242 }, 00:14:38.242 "memory_domains": [ 00:14:38.242 { 00:14:38.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.242 "dma_device_type": 2 00:14:38.242 } 00:14:38.242 ], 00:14:38.242 "driver_specific": {} 00:14:38.242 } 00:14:38.242 ] 00:14:38.242 11:24:56 -- common/autotest_common.sh@905 -- # return 0 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.242 11:24:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.500 11:24:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.500 "name": "Existed_Raid", 00:14:38.500 "uuid": "4f9b8d5a-d39d-4e38-af5a-566eac28fea5", 00:14:38.500 "strip_size_kb": 64, 00:14:38.500 "state": "online", 00:14:38.500 "raid_level": "raid0", 00:14:38.500 "superblock": false, 00:14:38.500 "num_base_bdevs": 3, 00:14:38.500 "num_base_bdevs_discovered": 3, 00:14:38.500 "num_base_bdevs_operational": 3, 00:14:38.500 "base_bdevs_list": [ 00:14:38.500 { 00:14:38.500 "name": "BaseBdev1", 00:14:38.500 "uuid": "601e960f-9c44-405a-af58-8cb3af1df5d1", 00:14:38.500 "is_configured": true, 00:14:38.500 "data_offset": 0, 00:14:38.500 "data_size": 65536 00:14:38.500 }, 00:14:38.500 { 00:14:38.500 "name": "BaseBdev2", 00:14:38.500 "uuid": "04616bdf-a4ff-4027-8b3e-a6cb2e38f080", 00:14:38.500 "is_configured": true, 00:14:38.500 "data_offset": 0, 00:14:38.500 "data_size": 65536 00:14:38.500 }, 00:14:38.500 { 00:14:38.500 "name": "BaseBdev3", 00:14:38.500 "uuid": "25c5513d-b2f0-4e1c-b4a6-4eb12969032a", 00:14:38.500 "is_configured": true, 00:14:38.500 "data_offset": 0, 00:14:38.500 "data_size": 65536 00:14:38.500 } 00:14:38.500 ] 00:14:38.500 }' 00:14:38.500 11:24:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.500 11:24:56 -- common/autotest_common.sh@10 -- # set +x 00:14:38.758 11:24:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:14:39.016 [2024-11-26 11:24:57.077037] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.016 [2024-11-26 11:24:57.077077] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.016 [2024-11-26 11:24:57.077169] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.016 11:24:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.275 11:24:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.275 "name": "Existed_Raid", 00:14:39.275 "uuid": "4f9b8d5a-d39d-4e38-af5a-566eac28fea5", 00:14:39.275 "strip_size_kb": 64, 00:14:39.275 "state": "offline", 00:14:39.275 "raid_level": "raid0", 00:14:39.275 "superblock": false, 00:14:39.275 "num_base_bdevs": 3, 00:14:39.275 "num_base_bdevs_discovered": 2, 00:14:39.275 "num_base_bdevs_operational": 2, 00:14:39.275 "base_bdevs_list": [ 00:14:39.275 { 00:14:39.275 "name": null, 00:14:39.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.275 "is_configured": false, 00:14:39.275 "data_offset": 0, 00:14:39.275 "data_size": 65536 00:14:39.275 }, 00:14:39.275 { 00:14:39.275 "name": "BaseBdev2", 00:14:39.275 "uuid": "04616bdf-a4ff-4027-8b3e-a6cb2e38f080", 00:14:39.275 "is_configured": true, 00:14:39.275 "data_offset": 0, 00:14:39.275 "data_size": 65536 00:14:39.275 }, 00:14:39.275 { 00:14:39.275 "name": "BaseBdev3", 00:14:39.275 "uuid": "25c5513d-b2f0-4e1c-b4a6-4eb12969032a", 00:14:39.275 "is_configured": true, 00:14:39.275 "data_offset": 0, 00:14:39.275 "data_size": 65536 00:14:39.275 } 00:14:39.275 ] 00:14:39.275 }' 00:14:39.275 11:24:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.275 11:24:57 -- common/autotest_common.sh@10 -- # set +x 00:14:39.534 11:24:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:39.534 11:24:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:39.534 11:24:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.534 11:24:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:39.793 11:24:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:39.793 11:24:57 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.793 11:24:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:40.052 [2024-11-26 11:24:58.148415] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.052 11:24:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:40.052 11:24:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:40.052 11:24:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.052 11:24:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:40.311 11:24:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:40.311 11:24:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.311 11:24:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:40.571 [2024-11-26 11:24:58.615818] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.571 [2024-11-26 11:24:58.615918] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:14:40.571 11:24:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:40.571 11:24:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:40.571 11:24:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.571 11:24:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:40.831 11:24:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:40.831 11:24:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:40.831 11:24:58 -- bdev/bdev_raid.sh@287 -- # killprocess 81465 00:14:40.831 11:24:58 -- common/autotest_common.sh@936 -- # '[' -z 81465 ']' 00:14:40.831 11:24:58 -- common/autotest_common.sh@940 -- # kill -0 81465 00:14:40.831 11:24:58 -- common/autotest_common.sh@941 -- # uname 00:14:40.831 11:24:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.831 11:24:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81465 00:14:40.831 11:24:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:40.831 killing process with pid 81465 00:14:40.831 11:24:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:40.831 11:24:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81465' 00:14:40.831 11:24:58 -- common/autotest_common.sh@955 -- # kill 81465 00:14:40.831 [2024-11-26 11:24:58.957152] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.831 11:24:58 -- common/autotest_common.sh@960 -- # wait 81465 00:14:40.831 [2024-11-26 11:24:58.957263] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:41.093 00:14:41.093 real 0m9.488s 00:14:41.093 user 0m16.650s 00:14:41.093 sys 0m1.454s 00:14:41.093 11:24:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:41.093 ************************************ 00:14:41.093 END TEST raid_state_function_test 00:14:41.093 ************************************ 00:14:41.093 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:41.093 11:24:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:41.093 11:24:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.093 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:14:41.093 ************************************ 00:14:41.093 START TEST raid_state_function_test_sb 00:14:41.093 ************************************ 00:14:41.093 11:24:59 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=81794 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 81794' 00:14:41.093 Process raid pid: 81794 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 81794 /var/tmp/spdk-raid.sock 00:14:41.093 11:24:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:41.093 11:24:59 -- common/autotest_common.sh@829 -- # '[' -z 81794 ']' 00:14:41.093 11:24:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:41.093 11:24:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:41.093 11:24:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:41.093 11:24:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.093 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:14:41.093 [2024-11-26 11:24:59.284752] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
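raid_state_function_test_sb re-runs the same state machine with superblock=true. Per the xtrace, the behavioral change is that the '[' true = true ']' branch sets superblock_create_arg=-s, so every bdev_raid_create that follows carries -s and the base bdevs end up with the 2048-block data_offset / 63488 data_size visible in the JSON dumps below. A sketch reconstructed from the trace; the socket path and all flag values are from the log.

    superblock=true
    superblock_create_arg=
    if [ "$superblock" = true ]; then
        superblock_create_arg=-s
    fi
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 $superblock_create_arg -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid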
00:14:41.093 [2024-11-26 11:24:59.284999] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.352 [2024-11-26 11:24:59.455982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.352 [2024-11-26 11:24:59.494048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.352 [2024-11-26 11:24:59.527970] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.287 11:25:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.287 11:25:00 -- common/autotest_common.sh@862 -- # return 0 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:42.287 [2024-11-26 11:25:00.408418] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.287 [2024-11-26 11:25:00.408480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.287 [2024-11-26 11:25:00.408515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.287 [2024-11-26 11:25:00.408528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.287 [2024-11-26 11:25:00.408542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.287 [2024-11-26 11:25:00.408555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.287 11:25:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.546 11:25:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.546 "name": "Existed_Raid", 00:14:42.546 "uuid": "9f479dc3-80a7-4cab-a194-acd0ef6c78de", 00:14:42.546 "strip_size_kb": 64, 00:14:42.546 "state": "configuring", 00:14:42.546 "raid_level": "raid0", 00:14:42.546 "superblock": true, 00:14:42.546 "num_base_bdevs": 3, 00:14:42.546 "num_base_bdevs_discovered": 0, 00:14:42.546 "num_base_bdevs_operational": 3, 00:14:42.546 "base_bdevs_list": [ 00:14:42.546 { 00:14:42.546 "name": "BaseBdev1", 00:14:42.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.546 "is_configured": false, 00:14:42.546 "data_offset": 0, 00:14:42.546 "data_size": 0 00:14:42.546 }, 00:14:42.546 { 00:14:42.546 "name": "BaseBdev2", 00:14:42.546 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:42.546 "is_configured": false, 00:14:42.546 "data_offset": 0, 00:14:42.546 "data_size": 0 00:14:42.546 }, 00:14:42.546 { 00:14:42.546 "name": "BaseBdev3", 00:14:42.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.546 "is_configured": false, 00:14:42.546 "data_offset": 0, 00:14:42.546 "data_size": 0 00:14:42.546 } 00:14:42.546 ] 00:14:42.546 }' 00:14:42.546 11:25:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.546 11:25:00 -- common/autotest_common.sh@10 -- # set +x 00:14:42.803 11:25:00 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:43.061 [2024-11-26 11:25:01.220528] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.061 [2024-11-26 11:25:01.220571] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:43.061 11:25:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:43.319 [2024-11-26 11:25:01.436649] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.319 [2024-11-26 11:25:01.436707] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.319 [2024-11-26 11:25:01.436744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.319 [2024-11-26 11:25:01.436756] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.319 [2024-11-26 11:25:01.436767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.319 [2024-11-26 11:25:01.436778] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.319 11:25:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.577 [2024-11-26 11:25:01.687430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.577 BaseBdev1 00:14:43.577 11:25:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:43.577 11:25:01 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:43.577 11:25:01 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.577 11:25:01 -- common/autotest_common.sh@899 -- # local i 00:14:43.577 11:25:01 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.577 11:25:01 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.577 11:25:01 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.836 11:25:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.095 [ 00:14:44.095 { 00:14:44.095 "name": "BaseBdev1", 00:14:44.095 "aliases": [ 00:14:44.095 "322b81bd-9977-43f2-bc83-a1b8d299da41" 00:14:44.095 ], 00:14:44.095 "product_name": "Malloc disk", 00:14:44.095 "block_size": 512, 00:14:44.095 "num_blocks": 65536, 00:14:44.095 "uuid": "322b81bd-9977-43f2-bc83-a1b8d299da41", 00:14:44.095 "assigned_rate_limits": { 00:14:44.095 "rw_ios_per_sec": 0, 00:14:44.095 "rw_mbytes_per_sec": 0, 00:14:44.095 "r_mbytes_per_sec": 0, 00:14:44.095 
"w_mbytes_per_sec": 0 00:14:44.095 }, 00:14:44.095 "claimed": true, 00:14:44.095 "claim_type": "exclusive_write", 00:14:44.095 "zoned": false, 00:14:44.095 "supported_io_types": { 00:14:44.095 "read": true, 00:14:44.095 "write": true, 00:14:44.095 "unmap": true, 00:14:44.095 "write_zeroes": true, 00:14:44.095 "flush": true, 00:14:44.095 "reset": true, 00:14:44.095 "compare": false, 00:14:44.095 "compare_and_write": false, 00:14:44.095 "abort": true, 00:14:44.095 "nvme_admin": false, 00:14:44.095 "nvme_io": false 00:14:44.095 }, 00:14:44.096 "memory_domains": [ 00:14:44.096 { 00:14:44.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.096 "dma_device_type": 2 00:14:44.096 } 00:14:44.096 ], 00:14:44.096 "driver_specific": {} 00:14:44.096 } 00:14:44.096 ] 00:14:44.096 11:25:02 -- common/autotest_common.sh@905 -- # return 0 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.096 11:25:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.355 11:25:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.355 "name": "Existed_Raid", 00:14:44.355 "uuid": "294717eb-e9ab-48a9-85e1-e5e97b6de975", 00:14:44.355 "strip_size_kb": 64, 00:14:44.355 "state": "configuring", 00:14:44.355 "raid_level": "raid0", 00:14:44.355 "superblock": true, 00:14:44.355 "num_base_bdevs": 3, 00:14:44.355 "num_base_bdevs_discovered": 1, 00:14:44.355 "num_base_bdevs_operational": 3, 00:14:44.355 "base_bdevs_list": [ 00:14:44.355 { 00:14:44.355 "name": "BaseBdev1", 00:14:44.355 "uuid": "322b81bd-9977-43f2-bc83-a1b8d299da41", 00:14:44.355 "is_configured": true, 00:14:44.355 "data_offset": 2048, 00:14:44.355 "data_size": 63488 00:14:44.355 }, 00:14:44.355 { 00:14:44.355 "name": "BaseBdev2", 00:14:44.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.355 "is_configured": false, 00:14:44.355 "data_offset": 0, 00:14:44.355 "data_size": 0 00:14:44.355 }, 00:14:44.355 { 00:14:44.355 "name": "BaseBdev3", 00:14:44.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.355 "is_configured": false, 00:14:44.355 "data_offset": 0, 00:14:44.355 "data_size": 0 00:14:44.355 } 00:14:44.355 ] 00:14:44.355 }' 00:14:44.355 11:25:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.355 11:25:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.614 11:25:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.872 [2024-11-26 11:25:02.891863] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.872 [2024-11-26 11:25:02.891973] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:44.872 11:25:02 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:44.872 11:25:02 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:45.131 11:25:03 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.389 BaseBdev1 00:14:45.389 11:25:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:45.389 11:25:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:45.389 11:25:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.389 11:25:03 -- common/autotest_common.sh@899 -- # local i 00:14:45.389 11:25:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.389 11:25:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.389 11:25:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.389 11:25:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.648 [ 00:14:45.648 { 00:14:45.648 "name": "BaseBdev1", 00:14:45.648 "aliases": [ 00:14:45.648 "a2a5ba47-b657-43be-adbc-746305c1fa9d" 00:14:45.648 ], 00:14:45.648 "product_name": "Malloc disk", 00:14:45.648 "block_size": 512, 00:14:45.648 "num_blocks": 65536, 00:14:45.648 "uuid": "a2a5ba47-b657-43be-adbc-746305c1fa9d", 00:14:45.648 "assigned_rate_limits": { 00:14:45.648 "rw_ios_per_sec": 0, 00:14:45.648 "rw_mbytes_per_sec": 0, 00:14:45.648 "r_mbytes_per_sec": 0, 00:14:45.648 "w_mbytes_per_sec": 0 00:14:45.648 }, 00:14:45.648 "claimed": false, 00:14:45.648 "zoned": false, 00:14:45.648 "supported_io_types": { 00:14:45.648 "read": true, 00:14:45.648 "write": true, 00:14:45.648 "unmap": true, 00:14:45.648 "write_zeroes": true, 00:14:45.648 "flush": true, 00:14:45.648 "reset": true, 00:14:45.648 "compare": false, 00:14:45.648 "compare_and_write": false, 00:14:45.648 "abort": true, 00:14:45.648 "nvme_admin": false, 00:14:45.648 "nvme_io": false 00:14:45.648 }, 00:14:45.648 "memory_domains": [ 00:14:45.648 { 00:14:45.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.648 "dma_device_type": 2 00:14:45.648 } 00:14:45.648 ], 00:14:45.648 "driver_specific": {} 00:14:45.648 } 00:14:45.648 ] 00:14:45.648 11:25:03 -- common/autotest_common.sh@905 -- # return 0 00:14:45.648 11:25:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:45.907 [2024-11-26 11:25:04.010634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.907 [2024-11-26 11:25:04.013227] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.907 [2024-11-26 11:25:04.013279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.907 [2024-11-26 11:25:04.013298] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.907 [2024-11-26 11:25:04.013312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:45.907 
11:25:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.907 11:25:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.166 11:25:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.166 "name": "Existed_Raid", 00:14:46.166 "uuid": "d3c18259-aaa9-4b2a-b949-da024d641aa4", 00:14:46.166 "strip_size_kb": 64, 00:14:46.166 "state": "configuring", 00:14:46.166 "raid_level": "raid0", 00:14:46.166 "superblock": true, 00:14:46.166 "num_base_bdevs": 3, 00:14:46.166 "num_base_bdevs_discovered": 1, 00:14:46.166 "num_base_bdevs_operational": 3, 00:14:46.166 "base_bdevs_list": [ 00:14:46.166 { 00:14:46.166 "name": "BaseBdev1", 00:14:46.166 "uuid": "a2a5ba47-b657-43be-adbc-746305c1fa9d", 00:14:46.166 "is_configured": true, 00:14:46.166 "data_offset": 2048, 00:14:46.166 "data_size": 63488 00:14:46.166 }, 00:14:46.166 { 00:14:46.166 "name": "BaseBdev2", 00:14:46.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.166 "is_configured": false, 00:14:46.166 "data_offset": 0, 00:14:46.166 "data_size": 0 00:14:46.166 }, 00:14:46.166 { 00:14:46.166 "name": "BaseBdev3", 00:14:46.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.166 "is_configured": false, 00:14:46.166 "data_offset": 0, 00:14:46.166 "data_size": 0 00:14:46.166 } 00:14:46.166 ] 00:14:46.166 }' 00:14:46.166 11:25:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.166 11:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:46.426 11:25:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.685 [2024-11-26 11:25:04.861810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.685 BaseBdev2 00:14:46.685 11:25:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:46.685 11:25:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:46.685 11:25:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.685 11:25:04 -- common/autotest_common.sh@899 -- # local i 00:14:46.685 11:25:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.685 11:25:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.685 11:25:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.944 11:25:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.203 [ 00:14:47.203 { 00:14:47.203 "name": "BaseBdev2", 00:14:47.203 "aliases": [ 00:14:47.203 
"025086a5-5942-4c57-9d29-0a6d0e3526f5" 00:14:47.203 ], 00:14:47.203 "product_name": "Malloc disk", 00:14:47.203 "block_size": 512, 00:14:47.203 "num_blocks": 65536, 00:14:47.203 "uuid": "025086a5-5942-4c57-9d29-0a6d0e3526f5", 00:14:47.203 "assigned_rate_limits": { 00:14:47.203 "rw_ios_per_sec": 0, 00:14:47.203 "rw_mbytes_per_sec": 0, 00:14:47.203 "r_mbytes_per_sec": 0, 00:14:47.203 "w_mbytes_per_sec": 0 00:14:47.203 }, 00:14:47.203 "claimed": true, 00:14:47.203 "claim_type": "exclusive_write", 00:14:47.203 "zoned": false, 00:14:47.203 "supported_io_types": { 00:14:47.203 "read": true, 00:14:47.203 "write": true, 00:14:47.203 "unmap": true, 00:14:47.203 "write_zeroes": true, 00:14:47.203 "flush": true, 00:14:47.203 "reset": true, 00:14:47.203 "compare": false, 00:14:47.203 "compare_and_write": false, 00:14:47.203 "abort": true, 00:14:47.203 "nvme_admin": false, 00:14:47.203 "nvme_io": false 00:14:47.203 }, 00:14:47.203 "memory_domains": [ 00:14:47.203 { 00:14:47.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.203 "dma_device_type": 2 00:14:47.203 } 00:14:47.203 ], 00:14:47.203 "driver_specific": {} 00:14:47.203 } 00:14:47.203 ] 00:14:47.203 11:25:05 -- common/autotest_common.sh@905 -- # return 0 00:14:47.203 11:25:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.204 11:25:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.463 11:25:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.463 "name": "Existed_Raid", 00:14:47.463 "uuid": "d3c18259-aaa9-4b2a-b949-da024d641aa4", 00:14:47.463 "strip_size_kb": 64, 00:14:47.463 "state": "configuring", 00:14:47.463 "raid_level": "raid0", 00:14:47.463 "superblock": true, 00:14:47.463 "num_base_bdevs": 3, 00:14:47.463 "num_base_bdevs_discovered": 2, 00:14:47.463 "num_base_bdevs_operational": 3, 00:14:47.463 "base_bdevs_list": [ 00:14:47.463 { 00:14:47.463 "name": "BaseBdev1", 00:14:47.463 "uuid": "a2a5ba47-b657-43be-adbc-746305c1fa9d", 00:14:47.463 "is_configured": true, 00:14:47.463 "data_offset": 2048, 00:14:47.463 "data_size": 63488 00:14:47.463 }, 00:14:47.463 { 00:14:47.463 "name": "BaseBdev2", 00:14:47.463 "uuid": "025086a5-5942-4c57-9d29-0a6d0e3526f5", 00:14:47.463 "is_configured": true, 00:14:47.463 "data_offset": 2048, 00:14:47.463 "data_size": 63488 00:14:47.463 }, 00:14:47.463 { 00:14:47.463 "name": "BaseBdev3", 00:14:47.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.463 "is_configured": false, 00:14:47.463 "data_offset": 0, 00:14:47.463 "data_size": 0 00:14:47.463 
} 00:14:47.463 ] 00:14:47.463 }' 00:14:47.463 11:25:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.463 11:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:47.722 11:25:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.981 [2024-11-26 11:25:06.123163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.981 [2024-11-26 11:25:06.123649] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:14:47.981 [2024-11-26 11:25:06.123800] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:47.981 [2024-11-26 11:25:06.123981] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:14:47.981 [2024-11-26 11:25:06.124428] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:14:47.981 [2024-11-26 11:25:06.124572] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:14:47.981 BaseBdev3 00:14:47.981 [2024-11-26 11:25:06.124901] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.981 11:25:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:47.981 11:25:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:47.981 11:25:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:47.981 11:25:06 -- common/autotest_common.sh@899 -- # local i 00:14:47.981 11:25:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:47.981 11:25:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:47.981 11:25:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.240 11:25:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.498 [ 00:14:48.498 { 00:14:48.498 "name": "BaseBdev3", 00:14:48.498 "aliases": [ 00:14:48.498 "c6eba39d-ee6b-432b-991d-6a0ba6d5a214" 00:14:48.498 ], 00:14:48.498 "product_name": "Malloc disk", 00:14:48.498 "block_size": 512, 00:14:48.498 "num_blocks": 65536, 00:14:48.498 "uuid": "c6eba39d-ee6b-432b-991d-6a0ba6d5a214", 00:14:48.498 "assigned_rate_limits": { 00:14:48.498 "rw_ios_per_sec": 0, 00:14:48.498 "rw_mbytes_per_sec": 0, 00:14:48.498 "r_mbytes_per_sec": 0, 00:14:48.498 "w_mbytes_per_sec": 0 00:14:48.498 }, 00:14:48.498 "claimed": true, 00:14:48.498 "claim_type": "exclusive_write", 00:14:48.498 "zoned": false, 00:14:48.498 "supported_io_types": { 00:14:48.498 "read": true, 00:14:48.498 "write": true, 00:14:48.498 "unmap": true, 00:14:48.498 "write_zeroes": true, 00:14:48.498 "flush": true, 00:14:48.498 "reset": true, 00:14:48.498 "compare": false, 00:14:48.498 "compare_and_write": false, 00:14:48.498 "abort": true, 00:14:48.498 "nvme_admin": false, 00:14:48.498 "nvme_io": false 00:14:48.498 }, 00:14:48.498 "memory_domains": [ 00:14:48.498 { 00:14:48.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.498 "dma_device_type": 2 00:14:48.498 } 00:14:48.498 ], 00:14:48.498 "driver_specific": {} 00:14:48.498 } 00:14:48.498 ] 00:14:48.498 11:25:06 -- common/autotest_common.sh@905 -- # return 0 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@259 -- #
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.498 11:25:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.756 11:25:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.756 "name": "Existed_Raid", 00:14:48.756 "uuid": "d3c18259-aaa9-4b2a-b949-da024d641aa4", 00:14:48.756 "strip_size_kb": 64, 00:14:48.756 "state": "online", 00:14:48.756 "raid_level": "raid0", 00:14:48.756 "superblock": true, 00:14:48.756 "num_base_bdevs": 3, 00:14:48.756 "num_base_bdevs_discovered": 3, 00:14:48.756 "num_base_bdevs_operational": 3, 00:14:48.756 "base_bdevs_list": [ 00:14:48.756 { 00:14:48.756 "name": "BaseBdev1", 00:14:48.756 "uuid": "a2a5ba47-b657-43be-adbc-746305c1fa9d", 00:14:48.756 "is_configured": true, 00:14:48.756 "data_offset": 2048, 00:14:48.756 "data_size": 63488 00:14:48.756 }, 00:14:48.756 { 00:14:48.756 "name": "BaseBdev2", 00:14:48.756 "uuid": "025086a5-5942-4c57-9d29-0a6d0e3526f5", 00:14:48.756 "is_configured": true, 00:14:48.756 "data_offset": 2048, 00:14:48.756 "data_size": 63488 00:14:48.756 }, 00:14:48.756 { 00:14:48.756 "name": "BaseBdev3", 00:14:48.756 "uuid": "c6eba39d-ee6b-432b-991d-6a0ba6d5a214", 00:14:48.756 "is_configured": true, 00:14:48.756 "data_offset": 2048, 00:14:48.756 "data_size": 63488 00:14:48.756 } 00:14:48.756 ] 00:14:48.756 }' 00:14:48.756 11:25:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.756 11:25:06 -- common/autotest_common.sh@10 -- # set +x 00:14:49.015 11:25:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:49.275 [2024-11-26 11:25:07.355802] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.275 [2024-11-26 11:25:07.356119] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.275 [2024-11-26 11:25:07.356344] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.275 11:25:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.534 11:25:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.534 "name": "Existed_Raid", 00:14:49.534 "uuid": "d3c18259-aaa9-4b2a-b949-da024d641aa4", 00:14:49.534 "strip_size_kb": 64, 00:14:49.534 "state": "offline", 00:14:49.534 "raid_level": "raid0", 00:14:49.534 "superblock": true, 00:14:49.534 "num_base_bdevs": 3, 00:14:49.534 "num_base_bdevs_discovered": 2, 00:14:49.534 "num_base_bdevs_operational": 2, 00:14:49.534 "base_bdevs_list": [ 00:14:49.534 { 00:14:49.534 "name": null, 00:14:49.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.534 "is_configured": false, 00:14:49.534 "data_offset": 2048, 00:14:49.534 "data_size": 63488 00:14:49.534 }, 00:14:49.534 { 00:14:49.534 "name": "BaseBdev2", 00:14:49.534 "uuid": "025086a5-5942-4c57-9d29-0a6d0e3526f5", 00:14:49.534 "is_configured": true, 00:14:49.534 "data_offset": 2048, 00:14:49.534 "data_size": 63488 00:14:49.534 }, 00:14:49.534 { 00:14:49.534 "name": "BaseBdev3", 00:14:49.534 "uuid": "c6eba39d-ee6b-432b-991d-6a0ba6d5a214", 00:14:49.534 "is_configured": true, 00:14:49.534 "data_offset": 2048, 00:14:49.534 "data_size": 63488 00:14:49.534 } 00:14:49.534 ] 00:14:49.534 }' 00:14:49.534 11:25:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.534 11:25:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.794 11:25:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:49.794 11:25:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:49.794 11:25:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.794 11:25:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:50.053 11:25:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:50.053 11:25:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.053 11:25:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:50.312 [2024-11-26 11:25:08.352052] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.312 11:25:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:50.312 11:25:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:50.312 11:25:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.312 11:25:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:50.571 11:25:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:50.571 11:25:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.571 11:25:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:50.830 [2024-11-26 11:25:08.815554] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.830 [2024-11-26 
11:25:08.815616] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:14:50.830 11:25:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:50.830 11:25:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:50.830 11:25:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.830 11:25:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.088 11:25:09 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:51.088 11:25:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:51.088 11:25:09 -- bdev/bdev_raid.sh@287 -- # killprocess 81794 00:14:51.088 11:25:09 -- common/autotest_common.sh@936 -- # '[' -z 81794 ']' 00:14:51.088 11:25:09 -- common/autotest_common.sh@940 -- # kill -0 81794 00:14:51.088 11:25:09 -- common/autotest_common.sh@941 -- # uname 00:14:51.088 11:25:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.088 11:25:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81794 00:14:51.088 killing process with pid 81794 00:14:51.088 11:25:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.088 11:25:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.088 11:25:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81794' 00:14:51.088 11:25:09 -- common/autotest_common.sh@955 -- # kill 81794 00:14:51.088 [2024-11-26 11:25:09.141666] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.088 11:25:09 -- common/autotest_common.sh@960 -- # wait 81794 00:14:51.088 [2024-11-26 11:25:09.141748] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:51.347 00:14:51.347 real 0m10.119s 00:14:51.347 user 0m17.770s 00:14:51.347 sys 0m1.575s 00:14:51.347 11:25:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.347 ************************************ 00:14:51.347 END TEST raid_state_function_test_sb 00:14:51.347 ************************************ 00:14:51.347 11:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:51.347 11:25:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:51.347 11:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.347 11:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:51.347 ************************************ 00:14:51.347 START TEST raid_superblock_test 00:14:51.347 ************************************ 00:14:51.347 11:25:09 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@345 
-- # local strip_size_create_arg 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=82139 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 82139 /var/tmp/spdk-raid.sock 00:14:51.347 11:25:09 -- common/autotest_common.sh@829 -- # '[' -z 82139 ']' 00:14:51.347 11:25:09 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:51.347 11:25:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:51.347 11:25:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:51.348 11:25:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:51.348 11:25:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.348 11:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:51.348 [2024-11-26 11:25:09.443855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:51.348 [2024-11-26 11:25:09.444076] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82139 ] 00:14:51.606 [2024-11-26 11:25:09.599850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.606 [2024-11-26 11:25:09.636294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.606 [2024-11-26 11:25:09.669670] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.175 11:25:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.175 11:25:10 -- common/autotest_common.sh@862 -- # return 0 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.175 11:25:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:52.434 malloc1 00:14:52.434 11:25:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:52.694 [2024-11-26 11:25:10.817285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.694 [2024-11-26 11:25:10.817403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.694 [2024-11-26 
11:25:10.817446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:52.694 [2024-11-26 11:25:10.817469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.694 [2024-11-26 11:25:10.820053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.694 [2024-11-26 11:25:10.820096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.694 pt1 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.694 11:25:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:52.953 malloc2 00:14:52.953 11:25:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:53.213 [2024-11-26 11:25:11.268603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.213 [2024-11-26 11:25:11.268910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.213 [2024-11-26 11:25:11.268995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:53.213 [2024-11-26 11:25:11.269293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.213 [2024-11-26 11:25:11.271896] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.213 [2024-11-26 11:25:11.272107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.213 pt2 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.213 11:25:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:53.472 malloc3 00:14:53.472 11:25:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:53.731 [2024-11-26 11:25:11.717235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:53.731 [2024-11-26 11:25:11.717346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.731 [2024-11-26 
11:25:11.717385] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:14:53.731 [2024-11-26 11:25:11.717409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.731 [2024-11-26 11:25:11.720178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.731 [2024-11-26 11:25:11.720391] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:53.731 pt3 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:53.731 [2024-11-26 11:25:11.941485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.731 [2024-11-26 11:25:11.943958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.731 [2024-11-26 11:25:11.944067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:53.731 [2024-11-26 11:25:11.944317] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:14:53.731 [2024-11-26 11:25:11.944338] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:53.731 [2024-11-26 11:25:11.944459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:53.731 [2024-11-26 11:25:11.944830] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:14:53.731 [2024-11-26 11:25:11.944846] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:14:53.731 [2024-11-26 11:25:11.945074] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.731 11:25:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.992 11:25:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.992 11:25:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.992 "name": "raid_bdev1", 00:14:53.992 "uuid": "d8e60249-5ecf-4671-aea3-ab0622bfb40b", 00:14:53.992 "strip_size_kb": 64, 00:14:53.992 "state": "online", 00:14:53.992 "raid_level": "raid0", 00:14:53.992 "superblock": true, 00:14:53.992 "num_base_bdevs": 3, 00:14:53.992 "num_base_bdevs_discovered": 3, 00:14:53.992 "num_base_bdevs_operational": 3, 00:14:53.992 "base_bdevs_list": [ 00:14:53.992 { 00:14:53.992 "name": "pt1", 00:14:53.992 "uuid": "1fc524fd-8f9b-5473-9c13-bef23b8f1756", 
00:14:53.992 "is_configured": true, 00:14:53.992 "data_offset": 2048, 00:14:53.992 "data_size": 63488 00:14:53.992 }, 00:14:53.992 { 00:14:53.992 "name": "pt2", 00:14:53.992 "uuid": "9b6daeab-845d-5bc2-8092-267578c3c8a3", 00:14:53.992 "is_configured": true, 00:14:53.992 "data_offset": 2048, 00:14:53.992 "data_size": 63488 00:14:53.992 }, 00:14:53.992 { 00:14:53.992 "name": "pt3", 00:14:53.992 "uuid": "5419cb1d-cb9b-5fe8-8ea1-c09950b63457", 00:14:53.992 "is_configured": true, 00:14:53.992 "data_offset": 2048, 00:14:53.992 "data_size": 63488 00:14:53.992 } 00:14:53.992 ] 00:14:53.992 }' 00:14:53.992 11:25:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.992 11:25:12 -- common/autotest_common.sh@10 -- # set +x 00:14:54.652 11:25:12 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:54.652 11:25:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:54.652 [2024-11-26 11:25:12.753997] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.652 11:25:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d8e60249-5ecf-4671-aea3-ab0622bfb40b 00:14:54.652 11:25:12 -- bdev/bdev_raid.sh@380 -- # '[' -z d8e60249-5ecf-4671-aea3-ab0622bfb40b ']' 00:14:54.652 11:25:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:54.912 [2024-11-26 11:25:12.973772] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.912 [2024-11-26 11:25:12.974010] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.912 [2024-11-26 11:25:12.974145] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.912 [2024-11-26 11:25:12.974233] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.912 [2024-11-26 11:25:12.974253] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:14:54.912 11:25:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.912 11:25:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:55.171 11:25:13 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:55.171 11:25:13 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:55.171 11:25:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.171 11:25:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:55.430 11:25:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.430 11:25:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:55.430 11:25:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.689 11:25:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:55.689 11:25:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:55.689 11:25:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:55.948 11:25:14 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:55.948 11:25:14 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:55.948 11:25:14 -- common/autotest_common.sh@650 -- # local es=0 00:14:55.948 11:25:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:55.948 11:25:14 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.948 11:25:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.948 11:25:14 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.948 11:25:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.948 11:25:14 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.948 11:25:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.948 11:25:14 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.948 11:25:14 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:55.948 11:25:14 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:56.206 [2024-11-26 11:25:14.378336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:56.206 [2024-11-26 11:25:14.380624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:56.206 [2024-11-26 11:25:14.380683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:56.206 [2024-11-26 11:25:14.380745] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:56.206 [2024-11-26 11:25:14.380841] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:56.206 [2024-11-26 11:25:14.380898] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:14:56.206 [2024-11-26 11:25:14.380924] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.206 [2024-11-26 11:25:14.380950] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:14:56.206 request: 00:14:56.206 { 00:14:56.206 "name": "raid_bdev1", 00:14:56.206 "raid_level": "raid0", 00:14:56.206 "base_bdevs": [ 00:14:56.206 "malloc1", 00:14:56.206 "malloc2", 00:14:56.206 "malloc3" 00:14:56.206 ], 00:14:56.206 "superblock": false, 00:14:56.206 "strip_size_kb": 64, 00:14:56.206 "method": "bdev_raid_create", 00:14:56.206 "req_id": 1 00:14:56.206 } 00:14:56.206 Got JSON-RPC error response 00:14:56.206 response: 00:14:56.206 { 00:14:56.206 "code": -17, 00:14:56.206 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:56.206 } 00:14:56.206 11:25:14 -- common/autotest_common.sh@653 -- # es=1 00:14:56.206 11:25:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.206 11:25:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.206 11:25:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.206 11:25:14 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
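The "File exists" (-17) failure above is the expected negative path: the malloc bdevs still carry the raid superblock written through the passthru bdevs when raid_bdev1 was first assembled, so a second bdev_raid_create directly on malloc1-malloc3 is rejected. For reference, the RPC flow this test drives can be replayed by hand against a running bdev_svc app; the lines below are a minimal illustrative sketch, not the test script itself, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock and reusing the sizes and flags shown in the log (32 MiB / 512-byte-block malloc bdevs, 64 KiB strip, raid0 with an on-disk superblock):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Back each passthru bdev with a 32 MiB, 512-byte-block malloc bdev.
for i in 1 2 3; do
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
  "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
    -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble raid0 with a 64 KiB strip (-z 64) and an on-disk superblock (-s).
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1

# Confirm the array came online, as verify_raid_bdev_state does above.
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | .state'

Deleting raid_bdev1 afterwards leaves the superblock on the base bdevs, which is exactly why the subsequent create over the bare malloc bdevs fails here.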
00:14:56.206 11:25:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:56.464 11:25:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:56.464 11:25:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:56.464 11:25:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.723 [2024-11-26 11:25:14.858413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.723 [2024-11-26 11:25:14.858513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.724 [2024-11-26 11:25:14.858543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:14:56.724 [2024-11-26 11:25:14.858560] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.724 [2024-11-26 11:25:14.861658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.724 [2024-11-26 11:25:14.861830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.724 [2024-11-26 11:25:14.862062] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:56.724 [2024-11-26 11:25:14.862233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.724 pt1 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.724 11:25:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.983 11:25:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.983 "name": "raid_bdev1", 00:14:56.983 "uuid": "d8e60249-5ecf-4671-aea3-ab0622bfb40b", 00:14:56.983 "strip_size_kb": 64, 00:14:56.983 "state": "configuring", 00:14:56.983 "raid_level": "raid0", 00:14:56.983 "superblock": true, 00:14:56.983 "num_base_bdevs": 3, 00:14:56.983 "num_base_bdevs_discovered": 1, 00:14:56.983 "num_base_bdevs_operational": 3, 00:14:56.983 "base_bdevs_list": [ 00:14:56.983 { 00:14:56.983 "name": "pt1", 00:14:56.983 "uuid": "1fc524fd-8f9b-5473-9c13-bef23b8f1756", 00:14:56.983 "is_configured": true, 00:14:56.983 "data_offset": 2048, 00:14:56.983 "data_size": 63488 00:14:56.983 }, 00:14:56.983 { 00:14:56.983 "name": null, 00:14:56.983 "uuid": "9b6daeab-845d-5bc2-8092-267578c3c8a3", 00:14:56.983 "is_configured": false, 00:14:56.983 "data_offset": 2048, 00:14:56.983 "data_size": 63488 00:14:56.983 }, 00:14:56.983 { 00:14:56.983 "name": null, 00:14:56.983 "uuid": "5419cb1d-cb9b-5fe8-8ea1-c09950b63457", 00:14:56.983 "is_configured": false, 00:14:56.983 "data_offset": 2048, 00:14:56.983 "data_size": 63488 
00:14:56.983 } 00:14:56.983 ] 00:14:56.983 }' 00:14:56.983 11:25:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.983 11:25:15 -- common/autotest_common.sh@10 -- # set +x 00:14:57.242 11:25:15 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:14:57.242 11:25:15 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.501 [2024-11-26 11:25:15.626743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.501 [2024-11-26 11:25:15.627046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.501 [2024-11-26 11:25:15.627090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:14:57.501 [2024-11-26 11:25:15.627109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.501 [2024-11-26 11:25:15.627597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.501 [2024-11-26 11:25:15.627627] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.501 [2024-11-26 11:25:15.627703] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:57.501 [2024-11-26 11:25:15.627735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:57.501 pt2 00:14:57.501 11:25:15 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:57.760 [2024-11-26 11:25:15.890882] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.760 11:25:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.019 11:25:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.019 "name": "raid_bdev1", 00:14:58.019 "uuid": "d8e60249-5ecf-4671-aea3-ab0622bfb40b", 00:14:58.019 "strip_size_kb": 64, 00:14:58.019 "state": "configuring", 00:14:58.019 "raid_level": "raid0", 00:14:58.019 "superblock": true, 00:14:58.019 "num_base_bdevs": 3, 00:14:58.019 "num_base_bdevs_discovered": 1, 00:14:58.019 "num_base_bdevs_operational": 3, 00:14:58.019 "base_bdevs_list": [ 00:14:58.019 { 00:14:58.019 "name": "pt1", 00:14:58.019 "uuid": "1fc524fd-8f9b-5473-9c13-bef23b8f1756", 00:14:58.019 "is_configured": true, 00:14:58.019 "data_offset": 2048, 00:14:58.019 "data_size": 63488 00:14:58.019 }, 00:14:58.019 { 00:14:58.019 "name": null, 00:14:58.019 "uuid": "9b6daeab-845d-5bc2-8092-267578c3c8a3", 00:14:58.019 
"is_configured": false, 00:14:58.019 "data_offset": 2048, 00:14:58.019 "data_size": 63488 00:14:58.019 }, 00:14:58.019 { 00:14:58.019 "name": null, 00:14:58.019 "uuid": "5419cb1d-cb9b-5fe8-8ea1-c09950b63457", 00:14:58.019 "is_configured": false, 00:14:58.019 "data_offset": 2048, 00:14:58.019 "data_size": 63488 00:14:58.019 } 00:14:58.019 ] 00:14:58.019 }' 00:14:58.019 11:25:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.019 11:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.278 11:25:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:58.278 11:25:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:58.278 11:25:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.537 [2024-11-26 11:25:16.643123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.537 [2024-11-26 11:25:16.643200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.537 [2024-11-26 11:25:16.643264] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:14:58.537 [2024-11-26 11:25:16.643294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.537 [2024-11-26 11:25:16.643746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.537 [2024-11-26 11:25:16.643777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.537 [2024-11-26 11:25:16.643875] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:58.537 [2024-11-26 11:25:16.643922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.537 pt2 00:14:58.537 11:25:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:58.537 11:25:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:58.537 11:25:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.796 [2024-11-26 11:25:16.863215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:58.796 [2024-11-26 11:25:16.863338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.796 [2024-11-26 11:25:16.863372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:14:58.796 [2024-11-26 11:25:16.863386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.796 [2024-11-26 11:25:16.863820] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.796 [2024-11-26 11:25:16.863865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.796 [2024-11-26 11:25:16.863990] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:58.796 [2024-11-26 11:25:16.864034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.796 [2024-11-26 11:25:16.864183] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:14:58.796 [2024-11-26 11:25:16.864198] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.796 [2024-11-26 11:25:16.864300] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:14:58.796 [2024-11-26 
11:25:16.864640] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:14:58.797 [2024-11-26 11:25:16.864659] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:14:58.797 [2024-11-26 11:25:16.864769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.797 pt3 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.797 11:25:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.055 11:25:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.055 "name": "raid_bdev1", 00:14:59.055 "uuid": "d8e60249-5ecf-4671-aea3-ab0622bfb40b", 00:14:59.055 "strip_size_kb": 64, 00:14:59.055 "state": "online", 00:14:59.055 "raid_level": "raid0", 00:14:59.055 "superblock": true, 00:14:59.055 "num_base_bdevs": 3, 00:14:59.055 "num_base_bdevs_discovered": 3, 00:14:59.055 "num_base_bdevs_operational": 3, 00:14:59.055 "base_bdevs_list": [ 00:14:59.055 { 00:14:59.055 "name": "pt1", 00:14:59.055 "uuid": "1fc524fd-8f9b-5473-9c13-bef23b8f1756", 00:14:59.055 "is_configured": true, 00:14:59.055 "data_offset": 2048, 00:14:59.055 "data_size": 63488 00:14:59.055 }, 00:14:59.055 { 00:14:59.055 "name": "pt2", 00:14:59.055 "uuid": "9b6daeab-845d-5bc2-8092-267578c3c8a3", 00:14:59.055 "is_configured": true, 00:14:59.055 "data_offset": 2048, 00:14:59.055 "data_size": 63488 00:14:59.055 }, 00:14:59.055 { 00:14:59.055 "name": "pt3", 00:14:59.055 "uuid": "5419cb1d-cb9b-5fe8-8ea1-c09950b63457", 00:14:59.055 "is_configured": true, 00:14:59.055 "data_offset": 2048, 00:14:59.055 "data_size": 63488 00:14:59.055 } 00:14:59.055 ] 00:14:59.055 }' 00:14:59.055 11:25:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.055 11:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:59.314 11:25:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:59.314 11:25:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:59.573 [2024-11-26 11:25:17.671696] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.573 11:25:17 -- bdev/bdev_raid.sh@430 -- # '[' d8e60249-5ecf-4671-aea3-ab0622bfb40b '!=' d8e60249-5ecf-4671-aea3-ab0622bfb40b ']' 00:14:59.573 11:25:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:59.573 11:25:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:59.573 11:25:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:59.573 
11:25:17 -- bdev/bdev_raid.sh@511 -- # killprocess 82139 00:14:59.573 11:25:17 -- common/autotest_common.sh@936 -- # '[' -z 82139 ']' 00:14:59.573 11:25:17 -- common/autotest_common.sh@940 -- # kill -0 82139 00:14:59.573 11:25:17 -- common/autotest_common.sh@941 -- # uname 00:14:59.573 11:25:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.573 11:25:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82139 00:14:59.573 killing process with pid 82139 00:14:59.573 11:25:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.573 11:25:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.573 11:25:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82139' 00:14:59.573 11:25:17 -- common/autotest_common.sh@955 -- # kill 82139 00:14:59.573 [2024-11-26 11:25:17.724741] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.573 11:25:17 -- common/autotest_common.sh@960 -- # wait 82139 00:14:59.573 [2024-11-26 11:25:17.724827] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.573 [2024-11-26 11:25:17.724925] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.573 [2024-11-26 11:25:17.724943] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:14:59.573 [2024-11-26 11:25:17.748026] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.832 11:25:17 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:59.832 00:14:59.832 real 0m8.551s 00:14:59.832 user 0m14.911s 00:14:59.832 sys 0m1.292s 00:14:59.832 11:25:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:59.832 ************************************ 00:14:59.832 END TEST raid_superblock_test 00:14:59.832 11:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:59.832 ************************************ 00:14:59.832 11:25:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:59.832 11:25:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:14:59.832 11:25:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:59.832 11:25:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.832 11:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:59.832 ************************************ 00:14:59.832 START TEST raid_state_function_test 00:14:59.832 ************************************ 00:14:59.832 11:25:18 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@208 -- # echo 
BaseBdev3 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:59.832 Process raid pid: 82409 00:14:59.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=82409 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82409' 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82409 /var/tmp/spdk-raid.sock 00:14:59.832 11:25:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:59.832 11:25:18 -- common/autotest_common.sh@829 -- # '[' -z 82409 ']' 00:14:59.832 11:25:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.832 11:25:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.832 11:25:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.832 11:25:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.832 11:25:18 -- common/autotest_common.sh@10 -- # set +x 00:14:59.832 [2024-11-26 11:25:18.063312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
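A quick aside on the harness mechanics visible above: raid_state_function_test does not talk to a full SPDK target. It launches the minimal bdev_svc app on a private RPC socket and then drives every step through scripts/rpc.py. A condensed sketch of that bring-up, using only the paths and arguments shown in the trace — note the real waitforlisten in autotest_common.sh does more bookkeeping than the simplified polling loop here:

    # start the minimal bdev application on a dedicated RPC socket,
    # as bdev_raid.sh@225 does above
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!

    # poll until the app answers on the UNIX domain socket
    # (simplified stand-in for waitforlisten)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # from here on, every test step is an RPC against that socket
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
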
00:14:59.833 [2024-11-26 11:25:18.063657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.091 [2024-11-26 11:25:18.220500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.091 [2024-11-26 11:25:18.255852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.091 [2024-11-26 11:25:18.290139] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.027 11:25:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.027 11:25:18 -- common/autotest_common.sh@862 -- # return 0 00:15:01.027 11:25:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:01.027 [2024-11-26 11:25:19.230105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.027 [2024-11-26 11:25:19.230203] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.027 [2024-11-26 11:25:19.230246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.027 [2024-11-26 11:25:19.230261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.027 [2024-11-26 11:25:19.230287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.027 [2024-11-26 11:25:19.230300] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.027 11:25:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.593 11:25:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.593 "name": "Existed_Raid", 00:15:01.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.593 "strip_size_kb": 64, 00:15:01.593 "state": "configuring", 00:15:01.593 "raid_level": "concat", 00:15:01.593 "superblock": false, 00:15:01.593 "num_base_bdevs": 3, 00:15:01.593 "num_base_bdevs_discovered": 0, 00:15:01.593 "num_base_bdevs_operational": 3, 00:15:01.593 "base_bdevs_list": [ 00:15:01.593 { 00:15:01.593 "name": "BaseBdev1", 00:15:01.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.593 "is_configured": false, 00:15:01.593 "data_offset": 0, 00:15:01.593 "data_size": 0 00:15:01.593 }, 00:15:01.593 { 00:15:01.593 "name": "BaseBdev2", 00:15:01.593 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:01.593 "is_configured": false, 00:15:01.593 "data_offset": 0, 00:15:01.593 "data_size": 0 00:15:01.593 }, 00:15:01.593 { 00:15:01.593 "name": "BaseBdev3", 00:15:01.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.593 "is_configured": false, 00:15:01.593 "data_offset": 0, 00:15:01.593 "data_size": 0 00:15:01.593 } 00:15:01.593 ] 00:15:01.593 }' 00:15:01.593 11:25:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.593 11:25:19 -- common/autotest_common.sh@10 -- # set +x 00:15:01.852 11:25:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.852 [2024-11-26 11:25:20.062316] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.852 [2024-11-26 11:25:20.062365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:01.852 11:25:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:02.111 [2024-11-26 11:25:20.314427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.111 [2024-11-26 11:25:20.314487] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.111 [2024-11-26 11:25:20.314524] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.111 [2024-11-26 11:25:20.314537] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.111 [2024-11-26 11:25:20.314548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:02.111 [2024-11-26 11:25:20.314558] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:02.111 11:25:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.370 [2024-11-26 11:25:20.573347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.370 BaseBdev1 00:15:02.370 11:25:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:02.370 11:25:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:02.370 11:25:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.370 11:25:20 -- common/autotest_common.sh@899 -- # local i 00:15:02.370 11:25:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.370 11:25:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.370 11:25:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.630 11:25:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.890 [ 00:15:02.890 { 00:15:02.890 "name": "BaseBdev1", 00:15:02.890 "aliases": [ 00:15:02.890 "a4a6dc2b-c7c3-4821-a7bb-70a203e2dc01" 00:15:02.890 ], 00:15:02.890 "product_name": "Malloc disk", 00:15:02.890 "block_size": 512, 00:15:02.890 "num_blocks": 65536, 00:15:02.890 "uuid": "a4a6dc2b-c7c3-4821-a7bb-70a203e2dc01", 00:15:02.890 "assigned_rate_limits": { 00:15:02.890 "rw_ios_per_sec": 0, 00:15:02.890 "rw_mbytes_per_sec": 0, 00:15:02.890 "r_mbytes_per_sec": 0, 00:15:02.890 "w_mbytes_per_sec": 
0 00:15:02.890 }, 00:15:02.890 "claimed": true, 00:15:02.890 "claim_type": "exclusive_write", 00:15:02.890 "zoned": false, 00:15:02.890 "supported_io_types": { 00:15:02.890 "read": true, 00:15:02.890 "write": true, 00:15:02.890 "unmap": true, 00:15:02.890 "write_zeroes": true, 00:15:02.890 "flush": true, 00:15:02.890 "reset": true, 00:15:02.890 "compare": false, 00:15:02.890 "compare_and_write": false, 00:15:02.890 "abort": true, 00:15:02.890 "nvme_admin": false, 00:15:02.890 "nvme_io": false 00:15:02.890 }, 00:15:02.890 "memory_domains": [ 00:15:02.890 { 00:15:02.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.890 "dma_device_type": 2 00:15:02.890 } 00:15:02.890 ], 00:15:02.890 "driver_specific": {} 00:15:02.890 } 00:15:02.890 ] 00:15:02.890 11:25:21 -- common/autotest_common.sh@905 -- # return 0 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.890 11:25:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.149 11:25:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.149 "name": "Existed_Raid", 00:15:03.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.149 "strip_size_kb": 64, 00:15:03.149 "state": "configuring", 00:15:03.149 "raid_level": "concat", 00:15:03.149 "superblock": false, 00:15:03.149 "num_base_bdevs": 3, 00:15:03.149 "num_base_bdevs_discovered": 1, 00:15:03.149 "num_base_bdevs_operational": 3, 00:15:03.149 "base_bdevs_list": [ 00:15:03.149 { 00:15:03.149 "name": "BaseBdev1", 00:15:03.149 "uuid": "a4a6dc2b-c7c3-4821-a7bb-70a203e2dc01", 00:15:03.149 "is_configured": true, 00:15:03.149 "data_offset": 0, 00:15:03.149 "data_size": 65536 00:15:03.149 }, 00:15:03.149 { 00:15:03.149 "name": "BaseBdev2", 00:15:03.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.149 "is_configured": false, 00:15:03.149 "data_offset": 0, 00:15:03.149 "data_size": 0 00:15:03.149 }, 00:15:03.149 { 00:15:03.149 "name": "BaseBdev3", 00:15:03.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.149 "is_configured": false, 00:15:03.149 "data_offset": 0, 00:15:03.149 "data_size": 0 00:15:03.149 } 00:15:03.149 ] 00:15:03.149 }' 00:15:03.149 11:25:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.149 11:25:21 -- common/autotest_common.sh@10 -- # set +x 00:15:03.407 11:25:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:03.668 [2024-11-26 11:25:21.841913] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.668 [2024-11-26 11:25:21.842032] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:15:03.668 11:25:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:03.668 11:25:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:03.926 [2024-11-26 11:25:22.086070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.926 [2024-11-26 11:25:22.088491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.926 [2024-11-26 11:25:22.088661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.926 [2024-11-26 11:25:22.088695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.926 [2024-11-26 11:25:22.088711] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.926 11:25:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.184 11:25:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.184 "name": "Existed_Raid", 00:15:04.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.184 "strip_size_kb": 64, 00:15:04.184 "state": "configuring", 00:15:04.184 "raid_level": "concat", 00:15:04.184 "superblock": false, 00:15:04.184 "num_base_bdevs": 3, 00:15:04.184 "num_base_bdevs_discovered": 1, 00:15:04.184 "num_base_bdevs_operational": 3, 00:15:04.184 "base_bdevs_list": [ 00:15:04.184 { 00:15:04.184 "name": "BaseBdev1", 00:15:04.184 "uuid": "a4a6dc2b-c7c3-4821-a7bb-70a203e2dc01", 00:15:04.184 "is_configured": true, 00:15:04.184 "data_offset": 0, 00:15:04.184 "data_size": 65536 00:15:04.184 }, 00:15:04.184 { 00:15:04.184 "name": "BaseBdev2", 00:15:04.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.184 "is_configured": false, 00:15:04.184 "data_offset": 0, 00:15:04.184 "data_size": 0 00:15:04.184 }, 00:15:04.184 { 00:15:04.184 "name": "BaseBdev3", 00:15:04.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.185 "is_configured": false, 00:15:04.185 "data_offset": 0, 00:15:04.185 "data_size": 0 00:15:04.185 } 00:15:04.185 ] 00:15:04.185 }' 00:15:04.185 11:25:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.185 11:25:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.444 11:25:22 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.703 [2024-11-26 11:25:22.931368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.703 BaseBdev2 00:15:04.962 11:25:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:04.962 11:25:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:04.962 11:25:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.962 11:25:22 -- common/autotest_common.sh@899 -- # local i 00:15:04.962 11:25:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.962 11:25:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.962 11:25:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.962 11:25:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.221 [ 00:15:05.221 { 00:15:05.221 "name": "BaseBdev2", 00:15:05.221 "aliases": [ 00:15:05.221 "0d987372-02d9-4a1c-9e44-58cfeee092a1" 00:15:05.221 ], 00:15:05.221 "product_name": "Malloc disk", 00:15:05.221 "block_size": 512, 00:15:05.221 "num_blocks": 65536, 00:15:05.221 "uuid": "0d987372-02d9-4a1c-9e44-58cfeee092a1", 00:15:05.221 "assigned_rate_limits": { 00:15:05.221 "rw_ios_per_sec": 0, 00:15:05.221 "rw_mbytes_per_sec": 0, 00:15:05.221 "r_mbytes_per_sec": 0, 00:15:05.221 "w_mbytes_per_sec": 0 00:15:05.221 }, 00:15:05.221 "claimed": true, 00:15:05.221 "claim_type": "exclusive_write", 00:15:05.221 "zoned": false, 00:15:05.221 "supported_io_types": { 00:15:05.221 "read": true, 00:15:05.221 "write": true, 00:15:05.221 "unmap": true, 00:15:05.221 "write_zeroes": true, 00:15:05.221 "flush": true, 00:15:05.221 "reset": true, 00:15:05.221 "compare": false, 00:15:05.221 "compare_and_write": false, 00:15:05.221 "abort": true, 00:15:05.221 "nvme_admin": false, 00:15:05.221 "nvme_io": false 00:15:05.221 }, 00:15:05.221 "memory_domains": [ 00:15:05.221 { 00:15:05.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.221 "dma_device_type": 2 00:15:05.221 } 00:15:05.221 ], 00:15:05.221 "driver_specific": {} 00:15:05.221 } 00:15:05.221 ] 00:15:05.221 11:25:23 -- common/autotest_common.sh@905 -- # return 0 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.221 11:25:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
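The JSON object captured just below is what that jq filter returns, and it is what verify_raid_bdev_state consumes. The trace only shows the helper's locals being set (expected_state, raid_level, strip_size, num_base_bdevs_operational), not the comparisons themselves, so the checks in this sketch are an illustrative reconstruction of what gets asserted against the captured object:

    # fetch every raid bdev and keep the one under test (bdev_raid.sh@127)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($rpc bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')

    # assert the fields against the expected values declared as locals above;
    # any mismatch fails the test in the real helper
    [ "$(jq -r '.state'         <<<"$tmp")" = "configuring" ]
    [ "$(jq -r '.raid_level'    <<<"$tmp")" = "concat" ]
    [ "$(jq -r '.strip_size_kb' <<<"$tmp")" = "64" ]
    [ "$(jq -r '.num_base_bdevs_operational' <<<"$tmp")" = "3" ]
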
00:15:05.480 11:25:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.480 "name": "Existed_Raid", 00:15:05.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.480 "strip_size_kb": 64, 00:15:05.480 "state": "configuring", 00:15:05.480 "raid_level": "concat", 00:15:05.480 "superblock": false, 00:15:05.480 "num_base_bdevs": 3, 00:15:05.480 "num_base_bdevs_discovered": 2, 00:15:05.480 "num_base_bdevs_operational": 3, 00:15:05.480 "base_bdevs_list": [ 00:15:05.480 { 00:15:05.480 "name": "BaseBdev1", 00:15:05.480 "uuid": "a4a6dc2b-c7c3-4821-a7bb-70a203e2dc01", 00:15:05.480 "is_configured": true, 00:15:05.480 "data_offset": 0, 00:15:05.480 "data_size": 65536 00:15:05.480 }, 00:15:05.480 { 00:15:05.480 "name": "BaseBdev2", 00:15:05.480 "uuid": "0d987372-02d9-4a1c-9e44-58cfeee092a1", 00:15:05.480 "is_configured": true, 00:15:05.480 "data_offset": 0, 00:15:05.480 "data_size": 65536 00:15:05.480 }, 00:15:05.480 { 00:15:05.480 "name": "BaseBdev3", 00:15:05.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.480 "is_configured": false, 00:15:05.480 "data_offset": 0, 00:15:05.480 "data_size": 0 00:15:05.480 } 00:15:05.480 ] 00:15:05.480 }' 00:15:05.480 11:25:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.480 11:25:23 -- common/autotest_common.sh@10 -- # set +x 00:15:05.739 11:25:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.998 [2024-11-26 11:25:24.136829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.998 [2024-11-26 11:25:24.137151] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:05.998 [2024-11-26 11:25:24.137201] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:05.998 [2024-11-26 11:25:24.137412] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:05.998 [2024-11-26 11:25:24.138014] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:05.998 [2024-11-26 11:25:24.138183] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:05.998 [2024-11-26 11:25:24.138619] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.998 BaseBdev3 00:15:05.998 11:25:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:05.998 11:25:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:05.998 11:25:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:05.998 11:25:24 -- common/autotest_common.sh@899 -- # local i 00:15:05.998 11:25:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:05.998 11:25:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:05.998 11:25:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.256 11:25:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:06.516 [ 00:15:06.516 { 00:15:06.516 "name": "BaseBdev3", 00:15:06.516 "aliases": [ 00:15:06.516 "db64c86f-457c-4161-8be4-fa4897f73596" 00:15:06.516 ], 00:15:06.516 "product_name": "Malloc disk", 00:15:06.516 "block_size": 512, 00:15:06.516 "num_blocks": 65536, 00:15:06.516 "uuid": "db64c86f-457c-4161-8be4-fa4897f73596", 00:15:06.516 "assigned_rate_limits": { 00:15:06.516 
"rw_ios_per_sec": 0, 00:15:06.516 "rw_mbytes_per_sec": 0, 00:15:06.516 "r_mbytes_per_sec": 0, 00:15:06.516 "w_mbytes_per_sec": 0 00:15:06.516 }, 00:15:06.516 "claimed": true, 00:15:06.516 "claim_type": "exclusive_write", 00:15:06.516 "zoned": false, 00:15:06.516 "supported_io_types": { 00:15:06.516 "read": true, 00:15:06.516 "write": true, 00:15:06.516 "unmap": true, 00:15:06.516 "write_zeroes": true, 00:15:06.516 "flush": true, 00:15:06.516 "reset": true, 00:15:06.516 "compare": false, 00:15:06.516 "compare_and_write": false, 00:15:06.516 "abort": true, 00:15:06.516 "nvme_admin": false, 00:15:06.516 "nvme_io": false 00:15:06.516 }, 00:15:06.516 "memory_domains": [ 00:15:06.516 { 00:15:06.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.516 "dma_device_type": 2 00:15:06.516 } 00:15:06.516 ], 00:15:06.516 "driver_specific": {} 00:15:06.516 } 00:15:06.516 ] 00:15:06.516 11:25:24 -- common/autotest_common.sh@905 -- # return 0 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.516 11:25:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.775 11:25:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.776 "name": "Existed_Raid", 00:15:06.776 "uuid": "c52778ab-3321-4173-b708-9e0f319a5f9b", 00:15:06.776 "strip_size_kb": 64, 00:15:06.776 "state": "online", 00:15:06.776 "raid_level": "concat", 00:15:06.776 "superblock": false, 00:15:06.776 "num_base_bdevs": 3, 00:15:06.776 "num_base_bdevs_discovered": 3, 00:15:06.776 "num_base_bdevs_operational": 3, 00:15:06.776 "base_bdevs_list": [ 00:15:06.776 { 00:15:06.776 "name": "BaseBdev1", 00:15:06.776 "uuid": "a4a6dc2b-c7c3-4821-a7bb-70a203e2dc01", 00:15:06.776 "is_configured": true, 00:15:06.776 "data_offset": 0, 00:15:06.776 "data_size": 65536 00:15:06.776 }, 00:15:06.776 { 00:15:06.776 "name": "BaseBdev2", 00:15:06.776 "uuid": "0d987372-02d9-4a1c-9e44-58cfeee092a1", 00:15:06.776 "is_configured": true, 00:15:06.776 "data_offset": 0, 00:15:06.776 "data_size": 65536 00:15:06.776 }, 00:15:06.776 { 00:15:06.776 "name": "BaseBdev3", 00:15:06.776 "uuid": "db64c86f-457c-4161-8be4-fa4897f73596", 00:15:06.776 "is_configured": true, 00:15:06.776 "data_offset": 0, 00:15:06.776 "data_size": 65536 00:15:06.776 } 00:15:06.776 ] 00:15:06.776 }' 00:15:06.776 11:25:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.776 11:25:24 -- common/autotest_common.sh@10 -- # set +x 00:15:07.344 11:25:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:07.344 [2024-11-26 11:25:25.557575] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.344 [2024-11-26 11:25:25.557862] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.344 [2024-11-26 11:25:25.558082] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.603 11:25:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.862 11:25:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.862 "name": "Existed_Raid", 00:15:07.862 "uuid": "c52778ab-3321-4173-b708-9e0f319a5f9b", 00:15:07.862 "strip_size_kb": 64, 00:15:07.862 "state": "offline", 00:15:07.862 "raid_level": "concat", 00:15:07.862 "superblock": false, 00:15:07.862 "num_base_bdevs": 3, 00:15:07.862 "num_base_bdevs_discovered": 2, 00:15:07.862 "num_base_bdevs_operational": 2, 00:15:07.862 "base_bdevs_list": [ 00:15:07.862 { 00:15:07.862 "name": null, 00:15:07.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.862 "is_configured": false, 00:15:07.862 "data_offset": 0, 00:15:07.862 "data_size": 65536 00:15:07.862 }, 00:15:07.862 { 00:15:07.862 "name": "BaseBdev2", 00:15:07.862 "uuid": "0d987372-02d9-4a1c-9e44-58cfeee092a1", 00:15:07.862 "is_configured": true, 00:15:07.862 "data_offset": 0, 00:15:07.862 "data_size": 65536 00:15:07.862 }, 00:15:07.862 { 00:15:07.862 "name": "BaseBdev3", 00:15:07.862 "uuid": "db64c86f-457c-4161-8be4-fa4897f73596", 00:15:07.862 "is_configured": true, 00:15:07.862 "data_offset": 0, 00:15:07.862 "data_size": 65536 00:15:07.862 } 00:15:07.862 ] 00:15:07.862 }' 00:15:07.862 11:25:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.862 11:25:25 -- common/autotest_common.sh@10 -- # set +x 00:15:08.121 11:25:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:08.121 11:25:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:08.121 11:25:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.121 11:25:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:08.380 11:25:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:08.380 11:25:26 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.380 11:25:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:08.639 [2024-11-26 11:25:26.658842] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.639 11:25:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:08.639 11:25:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:08.639 11:25:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.639 11:25:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:08.898 11:25:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:08.898 11:25:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.898 11:25:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:09.157 [2024-11-26 11:25:27.211048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.157 [2024-11-26 11:25:27.211111] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:09.157 11:25:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:09.157 11:25:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:09.157 11:25:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.157 11:25:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.416 11:25:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:09.416 11:25:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:09.416 11:25:27 -- bdev/bdev_raid.sh@287 -- # killprocess 82409 00:15:09.416 11:25:27 -- common/autotest_common.sh@936 -- # '[' -z 82409 ']' 00:15:09.416 11:25:27 -- common/autotest_common.sh@940 -- # kill -0 82409 00:15:09.416 11:25:27 -- common/autotest_common.sh@941 -- # uname 00:15:09.416 11:25:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.416 11:25:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82409 00:15:09.416 killing process with pid 82409 00:15:09.416 11:25:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.416 11:25:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.416 11:25:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82409' 00:15:09.416 11:25:27 -- common/autotest_common.sh@955 -- # kill 82409 00:15:09.416 11:25:27 -- common/autotest_common.sh@960 -- # wait 82409 00:15:09.416 [2024-11-26 11:25:27.502452] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.416 [2024-11-26 11:25:27.502540] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:09.674 00:15:09.674 real 0m9.701s 00:15:09.674 user 0m16.954s 00:15:09.674 sys 0m1.554s 00:15:09.674 11:25:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:09.674 ************************************ 00:15:09.674 END TEST raid_state_function_test 00:15:09.674 ************************************ 00:15:09.674 11:25:27 -- common/autotest_common.sh@10 -- # set +x 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:09.674 11:25:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:09.674 
11:25:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.674 11:25:27 -- common/autotest_common.sh@10 -- # set +x 00:15:09.674 ************************************ 00:15:09.674 START TEST raid_state_function_test_sb 00:15:09.674 ************************************ 00:15:09.674 11:25:27 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:09.674 Process raid pid: 82744 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=82744 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82744' 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:09.674 11:25:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82744 /var/tmp/spdk-raid.sock 00:15:09.674 11:25:27 -- common/autotest_common.sh@829 -- # '[' -z 82744 ']' 00:15:09.674 11:25:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.674 11:25:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.674 11:25:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:09.674 11:25:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.674 11:25:27 -- common/autotest_common.sh@10 -- # set +x 00:15:09.674 [2024-11-26 11:25:27.816562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
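One note before the trace continues: raid_state_function_test_sb re-runs the same state machine with superblock=true, and the only change that reaches the target is the extra -s flag on bdev_raid_create (the superblock_create_arg set above). Side by side, from the two invocations in this log ($rpc abbreviating the scripts/rpc.py -s /var/tmp/spdk-raid.sock prefix used throughout):

    # raid_state_function_test (superblock=false, earlier above)
    $rpc bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # raid_state_function_test_sb (superblock=true, below): same call plus -s;
    # with a superblock the base bdevs report data_offset 2048 / data_size 63488
    # in bdev_raid_get_bdevs output instead of 0 / 65536
    $rpc bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
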
00:15:09.674 [2024-11-26 11:25:27.816704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.933 [2024-11-26 11:25:27.977961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.934 [2024-11-26 11:25:28.014129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.934 [2024-11-26 11:25:28.047642] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.871 11:25:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.871 11:25:28 -- common/autotest_common.sh@862 -- # return 0 00:15:10.871 11:25:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:10.871 [2024-11-26 11:25:29.012311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.871 [2024-11-26 11:25:29.012393] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.871 [2024-11-26 11:25:29.012411] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.871 [2024-11-26 11:25:29.012424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.871 [2024-11-26 11:25:29.012435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:10.871 [2024-11-26 11:25:29.012448] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.871 11:25:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.130 11:25:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.130 "name": "Existed_Raid", 00:15:11.130 "uuid": "9ac135ab-02aa-4b9b-ac94-958277c135ed", 00:15:11.130 "strip_size_kb": 64, 00:15:11.130 "state": "configuring", 00:15:11.130 "raid_level": "concat", 00:15:11.130 "superblock": true, 00:15:11.130 "num_base_bdevs": 3, 00:15:11.130 "num_base_bdevs_discovered": 0, 00:15:11.130 "num_base_bdevs_operational": 3, 00:15:11.130 "base_bdevs_list": [ 00:15:11.130 { 00:15:11.130 "name": "BaseBdev1", 00:15:11.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.130 "is_configured": false, 00:15:11.130 "data_offset": 0, 00:15:11.130 "data_size": 0 00:15:11.130 }, 00:15:11.130 { 00:15:11.130 "name": "BaseBdev2", 00:15:11.130 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:11.130 "is_configured": false, 00:15:11.130 "data_offset": 0, 00:15:11.130 "data_size": 0 00:15:11.130 }, 00:15:11.130 { 00:15:11.130 "name": "BaseBdev3", 00:15:11.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.130 "is_configured": false, 00:15:11.130 "data_offset": 0, 00:15:11.130 "data_size": 0 00:15:11.130 } 00:15:11.130 ] 00:15:11.130 }' 00:15:11.130 11:25:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.130 11:25:29 -- common/autotest_common.sh@10 -- # set +x 00:15:11.696 11:25:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.696 [2024-11-26 11:25:29.872495] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.696 [2024-11-26 11:25:29.872813] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:11.696 11:25:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:11.954 [2024-11-26 11:25:30.112670] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.954 [2024-11-26 11:25:30.112977] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.954 [2024-11-26 11:25:30.113106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.954 [2024-11-26 11:25:30.113135] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.954 [2024-11-26 11:25:30.113151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.954 [2024-11-26 11:25:30.113178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.954 11:25:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.213 [2024-11-26 11:25:30.376196] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.213 BaseBdev1 00:15:12.213 11:25:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:12.213 11:25:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.213 11:25:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.213 11:25:30 -- common/autotest_common.sh@899 -- # local i 00:15:12.213 11:25:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.213 11:25:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.213 11:25:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.472 11:25:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.731 [ 00:15:12.731 { 00:15:12.731 "name": "BaseBdev1", 00:15:12.731 "aliases": [ 00:15:12.731 "7931fb6e-5f81-41f9-9afe-5238d8c95a5e" 00:15:12.731 ], 00:15:12.731 "product_name": "Malloc disk", 00:15:12.731 "block_size": 512, 00:15:12.731 "num_blocks": 65536, 00:15:12.731 "uuid": "7931fb6e-5f81-41f9-9afe-5238d8c95a5e", 00:15:12.731 "assigned_rate_limits": { 00:15:12.731 "rw_ios_per_sec": 0, 00:15:12.731 "rw_mbytes_per_sec": 0, 00:15:12.731 "r_mbytes_per_sec": 0, 00:15:12.731 
"w_mbytes_per_sec": 0 00:15:12.731 }, 00:15:12.731 "claimed": true, 00:15:12.731 "claim_type": "exclusive_write", 00:15:12.731 "zoned": false, 00:15:12.731 "supported_io_types": { 00:15:12.731 "read": true, 00:15:12.731 "write": true, 00:15:12.731 "unmap": true, 00:15:12.731 "write_zeroes": true, 00:15:12.731 "flush": true, 00:15:12.731 "reset": true, 00:15:12.731 "compare": false, 00:15:12.731 "compare_and_write": false, 00:15:12.731 "abort": true, 00:15:12.731 "nvme_admin": false, 00:15:12.731 "nvme_io": false 00:15:12.731 }, 00:15:12.731 "memory_domains": [ 00:15:12.731 { 00:15:12.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.731 "dma_device_type": 2 00:15:12.731 } 00:15:12.731 ], 00:15:12.731 "driver_specific": {} 00:15:12.731 } 00:15:12.731 ] 00:15:12.731 11:25:30 -- common/autotest_common.sh@905 -- # return 0 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.731 11:25:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.990 11:25:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.990 "name": "Existed_Raid", 00:15:12.990 "uuid": "7c473581-05ac-4c9f-b096-564fd6f3b6be", 00:15:12.990 "strip_size_kb": 64, 00:15:12.990 "state": "configuring", 00:15:12.990 "raid_level": "concat", 00:15:12.990 "superblock": true, 00:15:12.990 "num_base_bdevs": 3, 00:15:12.990 "num_base_bdevs_discovered": 1, 00:15:12.990 "num_base_bdevs_operational": 3, 00:15:12.990 "base_bdevs_list": [ 00:15:12.990 { 00:15:12.990 "name": "BaseBdev1", 00:15:12.990 "uuid": "7931fb6e-5f81-41f9-9afe-5238d8c95a5e", 00:15:12.990 "is_configured": true, 00:15:12.990 "data_offset": 2048, 00:15:12.990 "data_size": 63488 00:15:12.990 }, 00:15:12.990 { 00:15:12.990 "name": "BaseBdev2", 00:15:12.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.990 "is_configured": false, 00:15:12.990 "data_offset": 0, 00:15:12.990 "data_size": 0 00:15:12.990 }, 00:15:12.990 { 00:15:12.990 "name": "BaseBdev3", 00:15:12.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.990 "is_configured": false, 00:15:12.990 "data_offset": 0, 00:15:12.990 "data_size": 0 00:15:12.990 } 00:15:12.990 ] 00:15:12.990 }' 00:15:12.990 11:25:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.990 11:25:31 -- common/autotest_common.sh@10 -- # set +x 00:15:13.558 11:25:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:13.558 [2024-11-26 11:25:31.704796] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.558 [2024-11-26 11:25:31.704891] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:13.558 11:25:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:13.558 11:25:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:13.829 11:25:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.161 BaseBdev1 00:15:14.161 11:25:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:14.161 11:25:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:14.161 11:25:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.161 11:25:32 -- common/autotest_common.sh@899 -- # local i 00:15:14.161 11:25:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.161 11:25:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.161 11:25:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.449 11:25:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.708 [ 00:15:14.708 { 00:15:14.708 "name": "BaseBdev1", 00:15:14.708 "aliases": [ 00:15:14.708 "81b7ba99-1717-4db2-b64b-67faf5af5156" 00:15:14.708 ], 00:15:14.708 "product_name": "Malloc disk", 00:15:14.708 "block_size": 512, 00:15:14.709 "num_blocks": 65536, 00:15:14.709 "uuid": "81b7ba99-1717-4db2-b64b-67faf5af5156", 00:15:14.709 "assigned_rate_limits": { 00:15:14.709 "rw_ios_per_sec": 0, 00:15:14.709 "rw_mbytes_per_sec": 0, 00:15:14.709 "r_mbytes_per_sec": 0, 00:15:14.709 "w_mbytes_per_sec": 0 00:15:14.709 }, 00:15:14.709 "claimed": false, 00:15:14.709 "zoned": false, 00:15:14.709 "supported_io_types": { 00:15:14.709 "read": true, 00:15:14.709 "write": true, 00:15:14.709 "unmap": true, 00:15:14.709 "write_zeroes": true, 00:15:14.709 "flush": true, 00:15:14.709 "reset": true, 00:15:14.709 "compare": false, 00:15:14.709 "compare_and_write": false, 00:15:14.709 "abort": true, 00:15:14.709 "nvme_admin": false, 00:15:14.709 "nvme_io": false 00:15:14.709 }, 00:15:14.709 "memory_domains": [ 00:15:14.709 { 00:15:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.709 "dma_device_type": 2 00:15:14.709 } 00:15:14.709 ], 00:15:14.709 "driver_specific": {} 00:15:14.709 } 00:15:14.709 ] 00:15:14.709 11:25:32 -- common/autotest_common.sh@905 -- # return 0 00:15:14.709 11:25:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:14.967 [2024-11-26 11:25:33.027679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.968 [2024-11-26 11:25:33.029954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.968 [2024-11-26 11:25:33.030004] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.968 [2024-11-26 11:25:33.030023] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.968 [2024-11-26 11:25:33.030038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:14.968 
11:25:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.968 11:25:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.227 11:25:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.227 "name": "Existed_Raid", 00:15:15.227 "uuid": "0acf60d8-beed-4a45-af3d-8ef33c139e77", 00:15:15.227 "strip_size_kb": 64, 00:15:15.227 "state": "configuring", 00:15:15.227 "raid_level": "concat", 00:15:15.227 "superblock": true, 00:15:15.227 "num_base_bdevs": 3, 00:15:15.227 "num_base_bdevs_discovered": 1, 00:15:15.227 "num_base_bdevs_operational": 3, 00:15:15.227 "base_bdevs_list": [ 00:15:15.227 { 00:15:15.227 "name": "BaseBdev1", 00:15:15.227 "uuid": "81b7ba99-1717-4db2-b64b-67faf5af5156", 00:15:15.227 "is_configured": true, 00:15:15.227 "data_offset": 2048, 00:15:15.227 "data_size": 63488 00:15:15.227 }, 00:15:15.227 { 00:15:15.227 "name": "BaseBdev2", 00:15:15.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.227 "is_configured": false, 00:15:15.227 "data_offset": 0, 00:15:15.227 "data_size": 0 00:15:15.227 }, 00:15:15.227 { 00:15:15.227 "name": "BaseBdev3", 00:15:15.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.227 "is_configured": false, 00:15:15.227 "data_offset": 0, 00:15:15.227 "data_size": 0 00:15:15.227 } 00:15:15.227 ] 00:15:15.227 }' 00:15:15.227 11:25:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.227 11:25:33 -- common/autotest_common.sh@10 -- # set +x 00:15:15.486 11:25:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.745 [2024-11-26 11:25:33.924668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.745 BaseBdev2 00:15:15.745 11:25:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:15.745 11:25:33 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:15.745 11:25:33 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.745 11:25:33 -- common/autotest_common.sh@899 -- # local i 00:15:15.745 11:25:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.745 11:25:33 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.745 11:25:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.004 11:25:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.263 [ 00:15:16.263 { 00:15:16.263 "name": "BaseBdev2", 00:15:16.263 "aliases": [ 00:15:16.263 
"29c57f24-c6c0-471f-9a7f-767cb7fcb23c" 00:15:16.263 ], 00:15:16.263 "product_name": "Malloc disk", 00:15:16.263 "block_size": 512, 00:15:16.263 "num_blocks": 65536, 00:15:16.263 "uuid": "29c57f24-c6c0-471f-9a7f-767cb7fcb23c", 00:15:16.263 "assigned_rate_limits": { 00:15:16.263 "rw_ios_per_sec": 0, 00:15:16.263 "rw_mbytes_per_sec": 0, 00:15:16.263 "r_mbytes_per_sec": 0, 00:15:16.263 "w_mbytes_per_sec": 0 00:15:16.263 }, 00:15:16.263 "claimed": true, 00:15:16.263 "claim_type": "exclusive_write", 00:15:16.263 "zoned": false, 00:15:16.263 "supported_io_types": { 00:15:16.263 "read": true, 00:15:16.263 "write": true, 00:15:16.263 "unmap": true, 00:15:16.263 "write_zeroes": true, 00:15:16.263 "flush": true, 00:15:16.263 "reset": true, 00:15:16.263 "compare": false, 00:15:16.263 "compare_and_write": false, 00:15:16.263 "abort": true, 00:15:16.263 "nvme_admin": false, 00:15:16.263 "nvme_io": false 00:15:16.263 }, 00:15:16.263 "memory_domains": [ 00:15:16.263 { 00:15:16.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.263 "dma_device_type": 2 00:15:16.263 } 00:15:16.263 ], 00:15:16.263 "driver_specific": {} 00:15:16.263 } 00:15:16.263 ] 00:15:16.263 11:25:34 -- common/autotest_common.sh@905 -- # return 0 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.263 11:25:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.523 11:25:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.523 "name": "Existed_Raid", 00:15:16.523 "uuid": "0acf60d8-beed-4a45-af3d-8ef33c139e77", 00:15:16.523 "strip_size_kb": 64, 00:15:16.523 "state": "configuring", 00:15:16.523 "raid_level": "concat", 00:15:16.523 "superblock": true, 00:15:16.523 "num_base_bdevs": 3, 00:15:16.523 "num_base_bdevs_discovered": 2, 00:15:16.523 "num_base_bdevs_operational": 3, 00:15:16.523 "base_bdevs_list": [ 00:15:16.523 { 00:15:16.523 "name": "BaseBdev1", 00:15:16.523 "uuid": "81b7ba99-1717-4db2-b64b-67faf5af5156", 00:15:16.523 "is_configured": true, 00:15:16.523 "data_offset": 2048, 00:15:16.523 "data_size": 63488 00:15:16.523 }, 00:15:16.523 { 00:15:16.523 "name": "BaseBdev2", 00:15:16.523 "uuid": "29c57f24-c6c0-471f-9a7f-767cb7fcb23c", 00:15:16.523 "is_configured": true, 00:15:16.523 "data_offset": 2048, 00:15:16.523 "data_size": 63488 00:15:16.523 }, 00:15:16.523 { 00:15:16.523 "name": "BaseBdev3", 00:15:16.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.523 "is_configured": false, 00:15:16.523 "data_offset": 0, 00:15:16.523 "data_size": 0 
00:15:16.523 } 00:15:16.523 ] 00:15:16.523 }' 00:15:16.523 11:25:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.523 11:25:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.782 11:25:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:17.041 [2024-11-26 11:25:35.105666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.041 [2024-11-26 11:25:35.106180] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:17.041 [2024-11-26 11:25:35.106332] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:17.041 [2024-11-26 11:25:35.106510] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:17.041 [2024-11-26 11:25:35.106957] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:17.041 BaseBdev3 00:15:17.041 [2024-11-26 11:25:35.107144] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:17.041 [2024-11-26 11:25:35.107421] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.041 11:25:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:17.041 11:25:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:17.041 11:25:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:17.041 11:25:35 -- common/autotest_common.sh@899 -- # local i 00:15:17.041 11:25:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.041 11:25:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.041 11:25:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.300 11:25:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:17.559 [ 00:15:17.559 { 00:15:17.559 "name": "BaseBdev3", 00:15:17.559 "aliases": [ 00:15:17.559 "6a50fb64-f4e8-43b4-8994-73eca1ee97ff" 00:15:17.559 ], 00:15:17.559 "product_name": "Malloc disk", 00:15:17.559 "block_size": 512, 00:15:17.559 "num_blocks": 65536, 00:15:17.559 "uuid": "6a50fb64-f4e8-43b4-8994-73eca1ee97ff", 00:15:17.559 "assigned_rate_limits": { 00:15:17.559 "rw_ios_per_sec": 0, 00:15:17.559 "rw_mbytes_per_sec": 0, 00:15:17.559 "r_mbytes_per_sec": 0, 00:15:17.559 "w_mbytes_per_sec": 0 00:15:17.559 }, 00:15:17.559 "claimed": true, 00:15:17.559 "claim_type": "exclusive_write", 00:15:17.559 "zoned": false, 00:15:17.559 "supported_io_types": { 00:15:17.559 "read": true, 00:15:17.559 "write": true, 00:15:17.559 "unmap": true, 00:15:17.559 "write_zeroes": true, 00:15:17.559 "flush": true, 00:15:17.559 "reset": true, 00:15:17.559 "compare": false, 00:15:17.559 "compare_and_write": false, 00:15:17.559 "abort": true, 00:15:17.559 "nvme_admin": false, 00:15:17.559 "nvme_io": false 00:15:17.559 }, 00:15:17.559 "memory_domains": [ 00:15:17.559 { 00:15:17.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.559 "dma_device_type": 2 00:15:17.559 } 00:15:17.559 ], 00:15:17.559 "driver_specific": {} 00:15:17.559 } 00:15:17.559 ] 00:15:17.559 11:25:35 -- common/autotest_common.sh@905 -- # return 0 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:17.559 11:25:35 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.559 11:25:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.559 "name": "Existed_Raid", 00:15:17.559 "uuid": "0acf60d8-beed-4a45-af3d-8ef33c139e77", 00:15:17.559 "strip_size_kb": 64, 00:15:17.559 "state": "online", 00:15:17.559 "raid_level": "concat", 00:15:17.559 "superblock": true, 00:15:17.559 "num_base_bdevs": 3, 00:15:17.559 "num_base_bdevs_discovered": 3, 00:15:17.559 "num_base_bdevs_operational": 3, 00:15:17.559 "base_bdevs_list": [ 00:15:17.559 { 00:15:17.559 "name": "BaseBdev1", 00:15:17.559 "uuid": "81b7ba99-1717-4db2-b64b-67faf5af5156", 00:15:17.559 "is_configured": true, 00:15:17.559 "data_offset": 2048, 00:15:17.559 "data_size": 63488 00:15:17.559 }, 00:15:17.559 { 00:15:17.559 "name": "BaseBdev2", 00:15:17.559 "uuid": "29c57f24-c6c0-471f-9a7f-767cb7fcb23c", 00:15:17.559 "is_configured": true, 00:15:17.559 "data_offset": 2048, 00:15:17.559 "data_size": 63488 00:15:17.559 }, 00:15:17.559 { 00:15:17.559 "name": "BaseBdev3", 00:15:17.559 "uuid": "6a50fb64-f4e8-43b4-8994-73eca1ee97ff", 00:15:17.559 "is_configured": true, 00:15:17.559 "data_offset": 2048, 00:15:17.559 "data_size": 63488 00:15:17.559 } 00:15:17.559 ] 00:15:17.559 }' 00:15:17.819 11:25:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.819 11:25:35 -- common/autotest_common.sh@10 -- # set +x 00:15:18.077 11:25:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.337 [2024-11-26 11:25:36.334367] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.337 [2024-11-26 11:25:36.334616] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.337 [2024-11-26 11:25:36.334824] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:18.337 11:25:36 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.337 11:25:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.597 11:25:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.597 "name": "Existed_Raid", 00:15:18.597 "uuid": "0acf60d8-beed-4a45-af3d-8ef33c139e77", 00:15:18.597 "strip_size_kb": 64, 00:15:18.597 "state": "offline", 00:15:18.597 "raid_level": "concat", 00:15:18.597 "superblock": true, 00:15:18.597 "num_base_bdevs": 3, 00:15:18.597 "num_base_bdevs_discovered": 2, 00:15:18.597 "num_base_bdevs_operational": 2, 00:15:18.597 "base_bdevs_list": [ 00:15:18.597 { 00:15:18.597 "name": null, 00:15:18.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.597 "is_configured": false, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev2", 00:15:18.597 "uuid": "29c57f24-c6c0-471f-9a7f-767cb7fcb23c", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev3", 00:15:18.597 "uuid": "6a50fb64-f4e8-43b4-8994-73eca1ee97ff", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 } 00:15:18.597 ] 00:15:18.597 }' 00:15:18.597 11:25:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.597 11:25:36 -- common/autotest_common.sh@10 -- # set +x 00:15:18.857 11:25:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:18.857 11:25:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:18.857 11:25:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.857 11:25:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:19.116 11:25:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:19.116 11:25:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.116 11:25:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:19.375 [2024-11-26 11:25:37.426405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.375 11:25:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:19.375 11:25:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:19.375 11:25:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.375 11:25:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:19.634 11:25:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:19.634 11:25:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.634 11:25:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:19.634 [2024-11-26 11:25:37.854023] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
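(A minimal sketch, condensed from the trace above, of the RPC sequence this phase drives; it assumes a bdev_svc app already listening on /var/tmp/spdk-raid.sock, and every command, flag, and name is taken from the log. The test itself builds the base-bdev set incrementally, re-issuing bdev_raid_create as each device appears; here the three devices are created up front.)

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Three 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each,
# matching the bdev_get_bdevs output above).
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2
$rpc bdev_malloc_create 32 512 -b BaseBdev3
# Assemble a concat array with a 64 KiB strip size and an on-disk superblock (-s).
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Inspect state the way verify_raid_bdev_state does ("configuring" -> "online").
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# concat has no redundancy, so deleting any base bdev takes the array offline.
$rpc bdev_malloc_delete BaseBdev1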
00:15:19.634 [2024-11-26 11:25:37.854275] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:19.894 11:25:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:19.894 11:25:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:19.894 11:25:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.894 11:25:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.154 11:25:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:20.154 11:25:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:20.154 11:25:38 -- bdev/bdev_raid.sh@287 -- # killprocess 82744 00:15:20.154 11:25:38 -- common/autotest_common.sh@936 -- # '[' -z 82744 ']' 00:15:20.154 11:25:38 -- common/autotest_common.sh@940 -- # kill -0 82744 00:15:20.154 11:25:38 -- common/autotest_common.sh@941 -- # uname 00:15:20.154 11:25:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.154 11:25:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82744 00:15:20.154 killing process with pid 82744 00:15:20.154 11:25:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.154 11:25:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.154 11:25:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82744' 00:15:20.154 11:25:38 -- common/autotest_common.sh@955 -- # kill 82744 00:15:20.154 11:25:38 -- common/autotest_common.sh@960 -- # wait 82744 00:15:20.154 [2024-11-26 11:25:38.167128] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.154 [2024-11-26 11:25:38.167223] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.154 ************************************ 00:15:20.154 END TEST raid_state_function_test_sb 00:15:20.154 ************************************ 00:15:20.154 11:25:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:20.154 00:15:20.154 real 0m10.598s 00:15:20.154 user 0m18.669s 00:15:20.154 sys 0m1.627s 00:15:20.154 11:25:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:20.154 11:25:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:20.414 11:25:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:20.414 11:25:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.414 11:25:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.414 ************************************ 00:15:20.414 START TEST raid_superblock_test 00:15:20.414 ************************************ 00:15:20.414 11:25:38 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:20.414 
11:25:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:20.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=83092 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 83092 /var/tmp/spdk-raid.sock 00:15:20.414 11:25:38 -- common/autotest_common.sh@829 -- # '[' -z 83092 ']' 00:15:20.414 11:25:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.414 11:25:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:20.414 11:25:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.414 11:25:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.414 11:25:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.414 11:25:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.415 [2024-11-26 11:25:38.468560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:20.415 [2024-11-26 11:25:38.468740] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83092 ] 00:15:20.415 [2024-11-26 11:25:38.632749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.674 [2024-11-26 11:25:38.667355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.674 [2024-11-26 11:25:38.699284] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.240 11:25:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.240 11:25:39 -- common/autotest_common.sh@862 -- # return 0 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.240 11:25:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:21.498 malloc1 00:15:21.498 11:25:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:21.756 [2024-11-26 11:25:39.809402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:21.756 [2024-11-26 11:25:39.809506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:21.756 [2024-11-26 11:25:39.809545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:21.756 [2024-11-26 11:25:39.809566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.756 [2024-11-26 11:25:39.812995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.756 [2024-11-26 11:25:39.813042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:21.756 pt1 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.756 11:25:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:22.014 malloc2 00:15:22.014 11:25:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.014 [2024-11-26 11:25:40.236546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.014 [2024-11-26 11:25:40.236636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.014 [2024-11-26 11:25:40.236673] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:22.014 [2024-11-26 11:25:40.236688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.014 [2024-11-26 11:25:40.239246] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.014 [2024-11-26 11:25:40.239301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.014 pt2 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:22.272 malloc3 00:15:22.272 11:25:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:22.531 [2024-11-26 11:25:40.762026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:22.531 [2024-11-26 11:25:40.762143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:22.531 [2024-11-26 11:25:40.762180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:15:22.531 [2024-11-26 11:25:40.762195] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.531 [2024-11-26 11:25:40.764945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.531 [2024-11-26 11:25:40.765020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:22.531 pt3 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:22.790 [2024-11-26 11:25:40.982078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.790 [2024-11-26 11:25:40.984558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.790 [2024-11-26 11:25:40.984637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:22.790 [2024-11-26 11:25:40.984828] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:15:22.790 [2024-11-26 11:25:40.984849] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:22.790 [2024-11-26 11:25:40.985037] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:22.790 [2024-11-26 11:25:40.985473] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:15:22.790 [2024-11-26 11:25:40.985558] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:15:22.790 [2024-11-26 11:25:40.985716] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.790 11:25:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.790 11:25:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.790 11:25:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.049 11:25:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.049 "name": "raid_bdev1", 00:15:23.049 "uuid": "b4b85e3d-e357-4aad-8b3e-ae1891c0b92d", 00:15:23.049 "strip_size_kb": 64, 00:15:23.049 "state": "online", 00:15:23.049 "raid_level": "concat", 00:15:23.049 "superblock": true, 00:15:23.049 "num_base_bdevs": 3, 00:15:23.049 "num_base_bdevs_discovered": 3, 00:15:23.049 "num_base_bdevs_operational": 3, 00:15:23.049 "base_bdevs_list": [ 00:15:23.049 { 00:15:23.049 "name": "pt1", 00:15:23.049 "uuid": 
"7366dc3e-9c16-523d-b031-6860466be388", 00:15:23.049 "is_configured": true, 00:15:23.049 "data_offset": 2048, 00:15:23.049 "data_size": 63488 00:15:23.049 }, 00:15:23.049 { 00:15:23.049 "name": "pt2", 00:15:23.049 "uuid": "fd8e60fa-1c97-515b-a8a3-d9428f54f6b7", 00:15:23.049 "is_configured": true, 00:15:23.049 "data_offset": 2048, 00:15:23.049 "data_size": 63488 00:15:23.049 }, 00:15:23.049 { 00:15:23.049 "name": "pt3", 00:15:23.049 "uuid": "bb21114d-69d0-5346-820e-19a8cc84f8b0", 00:15:23.049 "is_configured": true, 00:15:23.049 "data_offset": 2048, 00:15:23.049 "data_size": 63488 00:15:23.049 } 00:15:23.049 ] 00:15:23.049 }' 00:15:23.049 11:25:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.049 11:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:23.616 11:25:41 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:23.616 11:25:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:23.616 [2024-11-26 11:25:41.786582] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.616 11:25:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b4b85e3d-e357-4aad-8b3e-ae1891c0b92d 00:15:23.616 11:25:41 -- bdev/bdev_raid.sh@380 -- # '[' -z b4b85e3d-e357-4aad-8b3e-ae1891c0b92d ']' 00:15:23.616 11:25:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:23.875 [2024-11-26 11:25:42.006340] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.875 [2024-11-26 11:25:42.006381] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.875 [2024-11-26 11:25:42.006505] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.875 [2024-11-26 11:25:42.006589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.875 [2024-11-26 11:25:42.006609] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:15:23.875 11:25:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:23.875 11:25:42 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.133 11:25:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:24.133 11:25:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:24.133 11:25:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.133 11:25:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:24.392 11:25:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.392 11:25:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:24.650 11:25:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.650 11:25:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:24.909 11:25:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:24.909 11:25:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:25.168 11:25:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:25.168 11:25:43 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:25.168 11:25:43 -- common/autotest_common.sh@650 -- # local es=0 00:15:25.168 11:25:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:25.168 11:25:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.168 11:25:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.168 11:25:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.168 11:25:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.168 11:25:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.168 11:25:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.168 11:25:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.168 11:25:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:25.168 11:25:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:25.168 [2024-11-26 11:25:43.394688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:25.168 [2024-11-26 11:25:43.397129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:25.168 [2024-11-26 11:25:43.397181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:25.168 [2024-11-26 11:25:43.397241] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:25.168 [2024-11-26 11:25:43.397323] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:25.168 [2024-11-26 11:25:43.397359] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:25.168 [2024-11-26 11:25:43.397378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.168 [2024-11-26 11:25:43.397396] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:15:25.168 request: 00:15:25.168 { 00:15:25.168 "name": "raid_bdev1", 00:15:25.168 "raid_level": "concat", 00:15:25.168 "base_bdevs": [ 00:15:25.168 "malloc1", 00:15:25.168 "malloc2", 00:15:25.168 "malloc3" 00:15:25.168 ], 00:15:25.168 "superblock": false, 00:15:25.168 "strip_size_kb": 64, 00:15:25.168 "method": "bdev_raid_create", 00:15:25.168 "req_id": 1 00:15:25.168 } 00:15:25.168 Got JSON-RPC error response 00:15:25.168 response: 00:15:25.168 { 00:15:25.168 "code": -17, 00:15:25.168 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:25.168 } 00:15:25.427 11:25:43 -- common/autotest_common.sh@653 -- # es=1 00:15:25.427 11:25:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:25.427 11:25:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:25.427 11:25:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:25.427 11:25:43 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.427 11:25:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:25.427 11:25:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:25.427 11:25:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:25.427 11:25:43 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.687 [2024-11-26 11:25:43.882818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.687 [2024-11-26 11:25:43.883142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.687 [2024-11-26 11:25:43.883211] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:25.687 [2024-11-26 11:25:43.883487] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.687 [2024-11-26 11:25:43.886073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.687 [2024-11-26 11:25:43.886276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.687 [2024-11-26 11:25:43.886467] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:25.687 [2024-11-26 11:25:43.886665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:25.687 pt1 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.687 11:25:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.946 11:25:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.946 "name": "raid_bdev1", 00:15:25.946 "uuid": "b4b85e3d-e357-4aad-8b3e-ae1891c0b92d", 00:15:25.946 "strip_size_kb": 64, 00:15:25.946 "state": "configuring", 00:15:25.946 "raid_level": "concat", 00:15:25.946 "superblock": true, 00:15:25.946 "num_base_bdevs": 3, 00:15:25.946 "num_base_bdevs_discovered": 1, 00:15:25.946 "num_base_bdevs_operational": 3, 00:15:25.946 "base_bdevs_list": [ 00:15:25.946 { 00:15:25.946 "name": "pt1", 00:15:25.946 "uuid": "7366dc3e-9c16-523d-b031-6860466be388", 00:15:25.946 "is_configured": true, 00:15:25.946 "data_offset": 2048, 00:15:25.946 "data_size": 63488 00:15:25.946 }, 00:15:25.946 { 00:15:25.946 "name": null, 00:15:25.946 "uuid": "fd8e60fa-1c97-515b-a8a3-d9428f54f6b7", 00:15:25.946 "is_configured": false, 00:15:25.946 "data_offset": 2048, 00:15:25.946 "data_size": 63488 00:15:25.946 }, 00:15:25.946 { 00:15:25.946 "name": null, 00:15:25.946 "uuid": "bb21114d-69d0-5346-820e-19a8cc84f8b0", 00:15:25.946 "is_configured": false, 00:15:25.946 
"data_offset": 2048, 00:15:25.946 "data_size": 63488 00:15:25.946 } 00:15:25.946 ] 00:15:25.946 }' 00:15:25.946 11:25:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.946 11:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:26.205 11:25:44 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:26.205 11:25:44 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.464 [2024-11-26 11:25:44.671155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.464 [2024-11-26 11:25:44.671443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.464 [2024-11-26 11:25:44.671513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:15:26.464 [2024-11-26 11:25:44.671537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.464 [2024-11-26 11:25:44.672025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.464 [2024-11-26 11:25:44.672075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.464 [2024-11-26 11:25:44.672157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:26.464 [2024-11-26 11:25:44.672192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.464 pt2 00:15:26.464 11:25:44 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:26.722 [2024-11-26 11:25:44.887261] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.722 11:25:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.723 11:25:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.723 11:25:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.723 11:25:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.981 11:25:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.981 "name": "raid_bdev1", 00:15:26.981 "uuid": "b4b85e3d-e357-4aad-8b3e-ae1891c0b92d", 00:15:26.981 "strip_size_kb": 64, 00:15:26.981 "state": "configuring", 00:15:26.981 "raid_level": "concat", 00:15:26.981 "superblock": true, 00:15:26.981 "num_base_bdevs": 3, 00:15:26.981 "num_base_bdevs_discovered": 1, 00:15:26.981 "num_base_bdevs_operational": 3, 00:15:26.981 "base_bdevs_list": [ 00:15:26.981 { 00:15:26.981 "name": "pt1", 00:15:26.981 "uuid": "7366dc3e-9c16-523d-b031-6860466be388", 00:15:26.981 "is_configured": true, 00:15:26.981 "data_offset": 2048, 00:15:26.981 "data_size": 63488 00:15:26.981 }, 00:15:26.981 { 00:15:26.981 "name": null, 00:15:26.981 "uuid": 
"fd8e60fa-1c97-515b-a8a3-d9428f54f6b7", 00:15:26.981 "is_configured": false, 00:15:26.981 "data_offset": 2048, 00:15:26.981 "data_size": 63488 00:15:26.981 }, 00:15:26.981 { 00:15:26.981 "name": null, 00:15:26.981 "uuid": "bb21114d-69d0-5346-820e-19a8cc84f8b0", 00:15:26.981 "is_configured": false, 00:15:26.981 "data_offset": 2048, 00:15:26.981 "data_size": 63488 00:15:26.981 } 00:15:26.981 ] 00:15:26.981 }' 00:15:26.981 11:25:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.981 11:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:27.240 11:25:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:27.240 11:25:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:27.240 11:25:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.499 [2024-11-26 11:25:45.683551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.499 [2024-11-26 11:25:45.683630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.499 [2024-11-26 11:25:45.683662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:15:27.499 [2024-11-26 11:25:45.683677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.499 [2024-11-26 11:25:45.684164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.499 [2024-11-26 11:25:45.684190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.499 [2024-11-26 11:25:45.684273] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:27.499 [2024-11-26 11:25:45.684309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.499 pt2 00:15:27.499 11:25:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:27.499 11:25:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:27.499 11:25:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:27.758 [2024-11-26 11:25:45.895600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:27.758 [2024-11-26 11:25:45.895701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.758 [2024-11-26 11:25:45.895731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:15:27.758 [2024-11-26 11:25:45.895744] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.758 [2024-11-26 11:25:45.896262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.758 [2024-11-26 11:25:45.896287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:27.758 [2024-11-26 11:25:45.896383] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:27.758 [2024-11-26 11:25:45.896411] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:27.758 [2024-11-26 11:25:45.896557] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:15:27.758 [2024-11-26 11:25:45.896580] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:27.758 [2024-11-26 11:25:45.896670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:15:27.758 [2024-11-26 11:25:45.897064] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:15:27.758 [2024-11-26 11:25:45.897084] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:15:27.758 [2024-11-26 11:25:45.897217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.758 pt3 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.758 11:25:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.017 11:25:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.017 "name": "raid_bdev1", 00:15:28.017 "uuid": "b4b85e3d-e357-4aad-8b3e-ae1891c0b92d", 00:15:28.017 "strip_size_kb": 64, 00:15:28.017 "state": "online", 00:15:28.017 "raid_level": "concat", 00:15:28.017 "superblock": true, 00:15:28.017 "num_base_bdevs": 3, 00:15:28.017 "num_base_bdevs_discovered": 3, 00:15:28.017 "num_base_bdevs_operational": 3, 00:15:28.017 "base_bdevs_list": [ 00:15:28.017 { 00:15:28.017 "name": "pt1", 00:15:28.017 "uuid": "7366dc3e-9c16-523d-b031-6860466be388", 00:15:28.017 "is_configured": true, 00:15:28.017 "data_offset": 2048, 00:15:28.017 "data_size": 63488 00:15:28.017 }, 00:15:28.017 { 00:15:28.017 "name": "pt2", 00:15:28.017 "uuid": "fd8e60fa-1c97-515b-a8a3-d9428f54f6b7", 00:15:28.017 "is_configured": true, 00:15:28.017 "data_offset": 2048, 00:15:28.017 "data_size": 63488 00:15:28.017 }, 00:15:28.017 { 00:15:28.017 "name": "pt3", 00:15:28.017 "uuid": "bb21114d-69d0-5346-820e-19a8cc84f8b0", 00:15:28.017 "is_configured": true, 00:15:28.017 "data_offset": 2048, 00:15:28.017 "data_size": 63488 00:15:28.017 } 00:15:28.017 ] 00:15:28.017 }' 00:15:28.017 11:25:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.017 11:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:28.275 11:25:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:28.275 11:25:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:28.534 [2024-11-26 11:25:46.660196] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.534 11:25:46 -- bdev/bdev_raid.sh@430 -- # '[' b4b85e3d-e357-4aad-8b3e-ae1891c0b92d '!=' b4b85e3d-e357-4aad-8b3e-ae1891c0b92d ']' 00:15:28.534 11:25:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:28.534 11:25:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:28.534 
11:25:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:28.534 11:25:46 -- bdev/bdev_raid.sh@511 -- # killprocess 83092 00:15:28.534 11:25:46 -- common/autotest_common.sh@936 -- # '[' -z 83092 ']' 00:15:28.534 11:25:46 -- common/autotest_common.sh@940 -- # kill -0 83092 00:15:28.534 11:25:46 -- common/autotest_common.sh@941 -- # uname 00:15:28.534 11:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.534 11:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83092 00:15:28.534 killing process with pid 83092 00:15:28.534 11:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:28.534 11:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:28.534 11:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83092' 00:15:28.534 11:25:46 -- common/autotest_common.sh@955 -- # kill 83092 00:15:28.534 [2024-11-26 11:25:46.707215] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.534 11:25:46 -- common/autotest_common.sh@960 -- # wait 83092 00:15:28.534 [2024-11-26 11:25:46.707327] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.534 [2024-11-26 11:25:46.707393] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.534 [2024-11-26 11:25:46.707410] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:15:28.534 [2024-11-26 11:25:46.729916] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.795 ************************************ 00:15:28.795 END TEST raid_superblock_test 00:15:28.795 ************************************ 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:28.795 00:15:28.795 real 0m8.521s 00:15:28.795 user 0m14.826s 00:15:28.795 sys 0m1.297s 00:15:28.795 11:25:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:28.795 11:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:15:28.795 11:25:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:28.795 11:25:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.795 11:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:28.795 ************************************ 00:15:28.795 START TEST raid_state_function_test 00:15:28.795 ************************************ 00:15:28.795 11:25:46 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs 
)) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.795 Process raid pid: 83364 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=83364 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 83364' 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 83364 /var/tmp/spdk-raid.sock 00:15:28.795 11:25:46 -- common/autotest_common.sh@829 -- # '[' -z 83364 ']' 00:15:28.795 11:25:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.795 11:25:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.795 11:25:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.795 11:25:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.795 11:25:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.795 11:25:46 -- common/autotest_common.sh@10 -- # set +x 00:15:29.063 [2024-11-26 11:25:47.040799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:29.063 [2024-11-26 11:25:47.040979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.063 [2024-11-26 11:25:47.211967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.063 [2024-11-26 11:25:47.254567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.063 [2024-11-26 11:25:47.293256] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.008 11:25:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.008 11:25:48 -- common/autotest_common.sh@862 -- # return 0 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:30.008 [2024-11-26 11:25:48.212437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.008 [2024-11-26 11:25:48.212509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.008 [2024-11-26 11:25:48.212529] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.008 [2024-11-26 11:25:48.212542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.008 [2024-11-26 11:25:48.212554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.008 [2024-11-26 11:25:48.212564] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.008 11:25:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.267 11:25:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.267 "name": "Existed_Raid", 00:15:30.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.267 "strip_size_kb": 0, 00:15:30.267 "state": "configuring", 00:15:30.267 "raid_level": "raid1", 00:15:30.267 "superblock": false, 00:15:30.267 "num_base_bdevs": 3, 00:15:30.267 "num_base_bdevs_discovered": 0, 00:15:30.267 "num_base_bdevs_operational": 3, 00:15:30.267 "base_bdevs_list": [ 00:15:30.267 { 00:15:30.267 "name": "BaseBdev1", 00:15:30.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.267 "is_configured": false, 00:15:30.267 "data_offset": 0, 00:15:30.267 "data_size": 0 00:15:30.267 }, 00:15:30.267 { 00:15:30.267 "name": "BaseBdev2", 00:15:30.267 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:30.267 "is_configured": false, 00:15:30.267 "data_offset": 0, 00:15:30.267 "data_size": 0 00:15:30.267 }, 00:15:30.267 { 00:15:30.267 "name": "BaseBdev3", 00:15:30.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.267 "is_configured": false, 00:15:30.267 "data_offset": 0, 00:15:30.267 "data_size": 0 00:15:30.267 } 00:15:30.267 ] 00:15:30.267 }' 00:15:30.267 11:25:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.267 11:25:48 -- common/autotest_common.sh@10 -- # set +x 00:15:30.834 11:25:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.834 [2024-11-26 11:25:49.060646] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.834 [2024-11-26 11:25:49.060719] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:31.095 11:25:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:31.095 [2024-11-26 11:25:49.308744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.095 [2024-11-26 11:25:49.308823] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.095 [2024-11-26 11:25:49.308856] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.095 [2024-11-26 11:25:49.308887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.095 [2024-11-26 11:25:49.308898] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.095 [2024-11-26 11:25:49.308923] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.095 11:25:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.353 [2024-11-26 11:25:49.525882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.353 BaseBdev1 00:15:31.353 11:25:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:31.353 11:25:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:31.353 11:25:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:31.353 11:25:49 -- common/autotest_common.sh@899 -- # local i 00:15:31.353 11:25:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:31.353 11:25:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:31.353 11:25:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.612 11:25:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.872 [ 00:15:31.872 { 00:15:31.872 "name": "BaseBdev1", 00:15:31.872 "aliases": [ 00:15:31.872 "a6478ff3-c0fb-4d71-b941-a2b8e2f7176b" 00:15:31.872 ], 00:15:31.872 "product_name": "Malloc disk", 00:15:31.872 "block_size": 512, 00:15:31.872 "num_blocks": 65536, 00:15:31.872 "uuid": "a6478ff3-c0fb-4d71-b941-a2b8e2f7176b", 00:15:31.872 "assigned_rate_limits": { 00:15:31.872 "rw_ios_per_sec": 0, 00:15:31.872 "rw_mbytes_per_sec": 0, 00:15:31.872 "r_mbytes_per_sec": 0, 00:15:31.872 "w_mbytes_per_sec": 0 
00:15:31.872 }, 00:15:31.872 "claimed": true, 00:15:31.872 "claim_type": "exclusive_write", 00:15:31.872 "zoned": false, 00:15:31.872 "supported_io_types": { 00:15:31.872 "read": true, 00:15:31.872 "write": true, 00:15:31.872 "unmap": true, 00:15:31.872 "write_zeroes": true, 00:15:31.872 "flush": true, 00:15:31.872 "reset": true, 00:15:31.872 "compare": false, 00:15:31.872 "compare_and_write": false, 00:15:31.872 "abort": true, 00:15:31.872 "nvme_admin": false, 00:15:31.872 "nvme_io": false 00:15:31.872 }, 00:15:31.872 "memory_domains": [ 00:15:31.872 { 00:15:31.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.872 "dma_device_type": 2 00:15:31.872 } 00:15:31.872 ], 00:15:31.872 "driver_specific": {} 00:15:31.872 } 00:15:31.872 ] 00:15:31.872 11:25:50 -- common/autotest_common.sh@905 -- # return 0 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.872 11:25:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.132 11:25:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.132 "name": "Existed_Raid", 00:15:32.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.132 "strip_size_kb": 0, 00:15:32.132 "state": "configuring", 00:15:32.132 "raid_level": "raid1", 00:15:32.132 "superblock": false, 00:15:32.132 "num_base_bdevs": 3, 00:15:32.132 "num_base_bdevs_discovered": 1, 00:15:32.132 "num_base_bdevs_operational": 3, 00:15:32.132 "base_bdevs_list": [ 00:15:32.132 { 00:15:32.132 "name": "BaseBdev1", 00:15:32.132 "uuid": "a6478ff3-c0fb-4d71-b941-a2b8e2f7176b", 00:15:32.132 "is_configured": true, 00:15:32.132 "data_offset": 0, 00:15:32.132 "data_size": 65536 00:15:32.132 }, 00:15:32.132 { 00:15:32.132 "name": "BaseBdev2", 00:15:32.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.132 "is_configured": false, 00:15:32.132 "data_offset": 0, 00:15:32.132 "data_size": 0 00:15:32.132 }, 00:15:32.132 { 00:15:32.132 "name": "BaseBdev3", 00:15:32.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.132 "is_configured": false, 00:15:32.132 "data_offset": 0, 00:15:32.132 "data_size": 0 00:15:32.132 } 00:15:32.132 ] 00:15:32.132 }' 00:15:32.132 11:25:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.132 11:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:32.391 11:25:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.650 [2024-11-26 11:25:50.750336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.650 [2024-11-26 11:25:50.750447] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 
name Existed_Raid, state configuring 00:15:32.650 11:25:50 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:32.651 11:25:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:32.910 [2024-11-26 11:25:50.974520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.910 [2024-11-26 11:25:50.976991] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.910 [2024-11-26 11:25:50.977045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.910 [2024-11-26 11:25:50.977069] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.910 [2024-11-26 11:25:50.977083] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.910 11:25:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.168 11:25:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.168 "name": "Existed_Raid", 00:15:33.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.168 "strip_size_kb": 0, 00:15:33.169 "state": "configuring", 00:15:33.169 "raid_level": "raid1", 00:15:33.169 "superblock": false, 00:15:33.169 "num_base_bdevs": 3, 00:15:33.169 "num_base_bdevs_discovered": 1, 00:15:33.169 "num_base_bdevs_operational": 3, 00:15:33.169 "base_bdevs_list": [ 00:15:33.169 { 00:15:33.169 "name": "BaseBdev1", 00:15:33.169 "uuid": "a6478ff3-c0fb-4d71-b941-a2b8e2f7176b", 00:15:33.169 "is_configured": true, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 65536 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "name": "BaseBdev2", 00:15:33.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.169 "is_configured": false, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 0 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "name": "BaseBdev3", 00:15:33.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.169 "is_configured": false, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 0 00:15:33.169 } 00:15:33.169 ] 00:15:33.169 }' 00:15:33.169 11:25:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.169 11:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.427 11:25:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.685 [2024-11-26 11:25:51.801314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.685 BaseBdev2 00:15:33.685 11:25:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:33.685 11:25:51 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:33.685 11:25:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.685 11:25:51 -- common/autotest_common.sh@899 -- # local i 00:15:33.685 11:25:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.685 11:25:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.685 11:25:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.944 11:25:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.203 [ 00:15:34.203 { 00:15:34.203 "name": "BaseBdev2", 00:15:34.203 "aliases": [ 00:15:34.203 "4f1efbf2-732d-468a-ae15-e0e4b91537b0" 00:15:34.203 ], 00:15:34.203 "product_name": "Malloc disk", 00:15:34.203 "block_size": 512, 00:15:34.203 "num_blocks": 65536, 00:15:34.203 "uuid": "4f1efbf2-732d-468a-ae15-e0e4b91537b0", 00:15:34.203 "assigned_rate_limits": { 00:15:34.203 "rw_ios_per_sec": 0, 00:15:34.203 "rw_mbytes_per_sec": 0, 00:15:34.203 "r_mbytes_per_sec": 0, 00:15:34.203 "w_mbytes_per_sec": 0 00:15:34.203 }, 00:15:34.203 "claimed": true, 00:15:34.203 "claim_type": "exclusive_write", 00:15:34.203 "zoned": false, 00:15:34.203 "supported_io_types": { 00:15:34.203 "read": true, 00:15:34.203 "write": true, 00:15:34.203 "unmap": true, 00:15:34.203 "write_zeroes": true, 00:15:34.203 "flush": true, 00:15:34.203 "reset": true, 00:15:34.203 "compare": false, 00:15:34.203 "compare_and_write": false, 00:15:34.203 "abort": true, 00:15:34.203 "nvme_admin": false, 00:15:34.203 "nvme_io": false 00:15:34.203 }, 00:15:34.203 "memory_domains": [ 00:15:34.203 { 00:15:34.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.203 "dma_device_type": 2 00:15:34.203 } 00:15:34.203 ], 00:15:34.203 "driver_specific": {} 00:15:34.203 } 00:15:34.203 ] 00:15:34.203 11:25:52 -- common/autotest_common.sh@905 -- # return 0 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.203 11:25:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.461 11:25:52 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:15:34.462 "name": "Existed_Raid", 00:15:34.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.462 "strip_size_kb": 0, 00:15:34.462 "state": "configuring", 00:15:34.462 "raid_level": "raid1", 00:15:34.462 "superblock": false, 00:15:34.462 "num_base_bdevs": 3, 00:15:34.462 "num_base_bdevs_discovered": 2, 00:15:34.462 "num_base_bdevs_operational": 3, 00:15:34.462 "base_bdevs_list": [ 00:15:34.462 { 00:15:34.462 "name": "BaseBdev1", 00:15:34.462 "uuid": "a6478ff3-c0fb-4d71-b941-a2b8e2f7176b", 00:15:34.462 "is_configured": true, 00:15:34.462 "data_offset": 0, 00:15:34.462 "data_size": 65536 00:15:34.462 }, 00:15:34.462 { 00:15:34.462 "name": "BaseBdev2", 00:15:34.462 "uuid": "4f1efbf2-732d-468a-ae15-e0e4b91537b0", 00:15:34.462 "is_configured": true, 00:15:34.462 "data_offset": 0, 00:15:34.462 "data_size": 65536 00:15:34.462 }, 00:15:34.462 { 00:15:34.462 "name": "BaseBdev3", 00:15:34.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.462 "is_configured": false, 00:15:34.462 "data_offset": 0, 00:15:34.462 "data_size": 0 00:15:34.462 } 00:15:34.462 ] 00:15:34.462 }' 00:15:34.462 11:25:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.462 11:25:52 -- common/autotest_common.sh@10 -- # set +x 00:15:34.720 11:25:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:34.979 [2024-11-26 11:25:53.162976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.979 [2024-11-26 11:25:53.163188] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:34.979 [2024-11-26 11:25:53.163330] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:34.979 [2024-11-26 11:25:53.163531] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:34.979 [2024-11-26 11:25:53.164066] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:34.979 [2024-11-26 11:25:53.164219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:34.979 [2024-11-26 11:25:53.164620] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.979 BaseBdev3 00:15:34.979 11:25:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:34.979 11:25:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:34.979 11:25:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:34.979 11:25:53 -- common/autotest_common.sh@899 -- # local i 00:15:34.979 11:25:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:34.979 11:25:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:34.979 11:25:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.238 11:25:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.496 [ 00:15:35.496 { 00:15:35.496 "name": "BaseBdev3", 00:15:35.496 "aliases": [ 00:15:35.496 "963d6cc8-07d7-4ae7-9ddf-485dbfca698e" 00:15:35.496 ], 00:15:35.496 "product_name": "Malloc disk", 00:15:35.496 "block_size": 512, 00:15:35.496 "num_blocks": 65536, 00:15:35.496 "uuid": "963d6cc8-07d7-4ae7-9ddf-485dbfca698e", 00:15:35.496 "assigned_rate_limits": { 00:15:35.496 "rw_ios_per_sec": 0, 00:15:35.496 "rw_mbytes_per_sec": 0, 
00:15:35.496 "r_mbytes_per_sec": 0, 00:15:35.496 "w_mbytes_per_sec": 0 00:15:35.496 }, 00:15:35.496 "claimed": true, 00:15:35.496 "claim_type": "exclusive_write", 00:15:35.496 "zoned": false, 00:15:35.496 "supported_io_types": { 00:15:35.496 "read": true, 00:15:35.496 "write": true, 00:15:35.496 "unmap": true, 00:15:35.496 "write_zeroes": true, 00:15:35.496 "flush": true, 00:15:35.496 "reset": true, 00:15:35.496 "compare": false, 00:15:35.496 "compare_and_write": false, 00:15:35.496 "abort": true, 00:15:35.496 "nvme_admin": false, 00:15:35.496 "nvme_io": false 00:15:35.496 }, 00:15:35.496 "memory_domains": [ 00:15:35.496 { 00:15:35.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.496 "dma_device_type": 2 00:15:35.496 } 00:15:35.496 ], 00:15:35.496 "driver_specific": {} 00:15:35.496 } 00:15:35.496 ] 00:15:35.496 11:25:53 -- common/autotest_common.sh@905 -- # return 0 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.496 11:25:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.754 11:25:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.754 "name": "Existed_Raid", 00:15:35.754 "uuid": "f01add62-3edb-44b5-898e-2e179849e244", 00:15:35.754 "strip_size_kb": 0, 00:15:35.754 "state": "online", 00:15:35.754 "raid_level": "raid1", 00:15:35.754 "superblock": false, 00:15:35.754 "num_base_bdevs": 3, 00:15:35.754 "num_base_bdevs_discovered": 3, 00:15:35.754 "num_base_bdevs_operational": 3, 00:15:35.754 "base_bdevs_list": [ 00:15:35.754 { 00:15:35.754 "name": "BaseBdev1", 00:15:35.754 "uuid": "a6478ff3-c0fb-4d71-b941-a2b8e2f7176b", 00:15:35.754 "is_configured": true, 00:15:35.754 "data_offset": 0, 00:15:35.754 "data_size": 65536 00:15:35.754 }, 00:15:35.754 { 00:15:35.754 "name": "BaseBdev2", 00:15:35.754 "uuid": "4f1efbf2-732d-468a-ae15-e0e4b91537b0", 00:15:35.754 "is_configured": true, 00:15:35.754 "data_offset": 0, 00:15:35.754 "data_size": 65536 00:15:35.754 }, 00:15:35.754 { 00:15:35.754 "name": "BaseBdev3", 00:15:35.754 "uuid": "963d6cc8-07d7-4ae7-9ddf-485dbfca698e", 00:15:35.754 "is_configured": true, 00:15:35.754 "data_offset": 0, 00:15:35.754 "data_size": 65536 00:15:35.754 } 00:15:35.754 ] 00:15:35.754 }' 00:15:35.754 11:25:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.754 11:25:53 -- common/autotest_common.sh@10 -- # set +x 00:15:36.012 11:25:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:36.270 [2024-11-26 
11:25:54.447535] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.270 11:25:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.529 11:25:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.529 "name": "Existed_Raid", 00:15:36.529 "uuid": "f01add62-3edb-44b5-898e-2e179849e244", 00:15:36.529 "strip_size_kb": 0, 00:15:36.529 "state": "online", 00:15:36.529 "raid_level": "raid1", 00:15:36.529 "superblock": false, 00:15:36.529 "num_base_bdevs": 3, 00:15:36.529 "num_base_bdevs_discovered": 2, 00:15:36.529 "num_base_bdevs_operational": 2, 00:15:36.529 "base_bdevs_list": [ 00:15:36.529 { 00:15:36.529 "name": null, 00:15:36.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.529 "is_configured": false, 00:15:36.529 "data_offset": 0, 00:15:36.529 "data_size": 65536 00:15:36.529 }, 00:15:36.529 { 00:15:36.529 "name": "BaseBdev2", 00:15:36.529 "uuid": "4f1efbf2-732d-468a-ae15-e0e4b91537b0", 00:15:36.529 "is_configured": true, 00:15:36.529 "data_offset": 0, 00:15:36.529 "data_size": 65536 00:15:36.529 }, 00:15:36.529 { 00:15:36.529 "name": "BaseBdev3", 00:15:36.529 "uuid": "963d6cc8-07d7-4ae7-9ddf-485dbfca698e", 00:15:36.529 "is_configured": true, 00:15:36.529 "data_offset": 0, 00:15:36.529 "data_size": 65536 00:15:36.529 } 00:15:36.529 ] 00:15:36.529 }' 00:15:36.529 11:25:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.529 11:25:54 -- common/autotest_common.sh@10 -- # set +x 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.096 11:25:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.354 [2024-11-26 11:25:55.567073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
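The sequence above is the redundancy check for raid1: BaseBdev1 was deleted out from under the online array, and the re-queried state stayed "online" with num_base_bdevs_discovered dropping from 3 to 2. A minimal sketch of that probe, assuming the same socket and bdev names as the log:

    # remove one mirror leg, then confirm the array degraded but stayed online
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'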
00:15:37.612 11:25:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:37.612 11:25:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:37.612 11:25:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.612 11:25:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:37.871 11:25:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:37.871 11:25:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.871 11:25:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:37.871 [2024-11-26 11:25:56.090624] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.871 [2024-11-26 11:25:56.090901] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.871 [2024-11-26 11:25:56.091106] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.871 [2024-11-26 11:25:56.097448] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.871 [2024-11-26 11:25:56.097668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:38.129 11:25:56 -- bdev/bdev_raid.sh@287 -- # killprocess 83364 00:15:38.129 11:25:56 -- common/autotest_common.sh@936 -- # '[' -z 83364 ']' 00:15:38.129 11:25:56 -- common/autotest_common.sh@940 -- # kill -0 83364 00:15:38.129 11:25:56 -- common/autotest_common.sh@941 -- # uname 00:15:38.129 11:25:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.129 11:25:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83364 00:15:38.129 killing process with pid 83364 00:15:38.129 11:25:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.129 11:25:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.129 11:25:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83364' 00:15:38.129 11:25:56 -- common/autotest_common.sh@955 -- # kill 83364 00:15:38.129 [2024-11-26 11:25:56.356547] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.130 11:25:56 -- common/autotest_common.sh@960 -- # wait 83364 00:15:38.130 [2024-11-26 11:25:56.356618] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.387 ************************************ 00:15:38.387 END TEST raid_state_function_test 00:15:38.387 ************************************ 00:15:38.387 11:25:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:38.387 00:15:38.387 real 0m9.563s 00:15:38.387 user 0m16.756s 00:15:38.387 sys 0m1.536s 00:15:38.387 11:25:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:38.387 11:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:38.387 11:25:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:38.387 
11:25:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:38.387 11:25:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.387 11:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:38.387 ************************************ 00:15:38.387 START TEST raid_state_function_test_sb 00:15:38.387 ************************************ 00:15:38.387 11:25:56 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:15:38.387 11:25:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:38.387 11:25:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:38.387 11:25:56 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.388 Process raid pid: 83700 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=83700 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 83700' 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 83700 /var/tmp/spdk-raid.sock 00:15:38.388 11:25:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:38.388 11:25:56 -- common/autotest_common.sh@829 -- # '[' -z 83700 ']' 00:15:38.388 11:25:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.388 11:25:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.388 11:25:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:38.388 11:25:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.388 11:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:38.645 [2024-11-26 11:25:56.650356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
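The _sb variant starting here repeats the same state machine with superblocks enabled: superblock_create_arg becomes -s, so every bdev_raid_create call below carries that flag, the resulting arrays report "superblock": true, and each base bdev reserves metadata space (data_offset 2048 and data_size 63488 in the JSON that follows, instead of 0 and 65536). A minimal sketch of the create call, assuming the same running target as above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid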
00:15:38.645 [2024-11-26 11:25:56.650713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.645 [2024-11-26 11:25:56.808362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.645 [2024-11-26 11:25:56.849287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.902 [2024-11-26 11:25:56.882956] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.465 11:25:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.465 11:25:57 -- common/autotest_common.sh@862 -- # return 0 00:15:39.465 11:25:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:39.721 [2024-11-26 11:25:57.807497] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.721 [2024-11-26 11:25:57.807730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.721 [2024-11-26 11:25:57.807858] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.721 [2024-11-26 11:25:57.807902] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.721 [2024-11-26 11:25:57.807920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.721 [2024-11-26 11:25:57.807933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.721 11:25:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.977 11:25:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.977 "name": "Existed_Raid", 00:15:39.977 "uuid": "5d7b7967-f98b-4dfd-83a6-3a5cfd54a948", 00:15:39.977 "strip_size_kb": 0, 00:15:39.977 "state": "configuring", 00:15:39.977 "raid_level": "raid1", 00:15:39.977 "superblock": true, 00:15:39.977 "num_base_bdevs": 3, 00:15:39.977 "num_base_bdevs_discovered": 0, 00:15:39.977 "num_base_bdevs_operational": 3, 00:15:39.977 "base_bdevs_list": [ 00:15:39.977 { 00:15:39.977 "name": "BaseBdev1", 00:15:39.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.977 "is_configured": false, 00:15:39.977 "data_offset": 0, 00:15:39.977 "data_size": 0 00:15:39.977 }, 00:15:39.977 { 00:15:39.977 "name": "BaseBdev2", 00:15:39.977 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:39.977 "is_configured": false, 00:15:39.977 "data_offset": 0, 00:15:39.977 "data_size": 0 00:15:39.977 }, 00:15:39.977 { 00:15:39.977 "name": "BaseBdev3", 00:15:39.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.977 "is_configured": false, 00:15:39.977 "data_offset": 0, 00:15:39.977 "data_size": 0 00:15:39.977 } 00:15:39.977 ] 00:15:39.977 }' 00:15:39.977 11:25:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.977 11:25:58 -- common/autotest_common.sh@10 -- # set +x 00:15:40.234 11:25:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.491 [2024-11-26 11:25:58.527618] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.491 [2024-11-26 11:25:58.527884] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:40.491 11:25:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:40.749 [2024-11-26 11:25:58.739703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.749 [2024-11-26 11:25:58.740020] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.749 [2024-11-26 11:25:58.740055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.749 [2024-11-26 11:25:58.740073] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.749 [2024-11-26 11:25:58.740086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:40.749 [2024-11-26 11:25:58.740098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:40.749 11:25:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.749 [2024-11-26 11:25:58.966378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.749 BaseBdev1 00:15:40.749 11:25:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:40.749 11:25:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:40.749 11:25:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:40.749 11:25:58 -- common/autotest_common.sh@899 -- # local i 00:15:40.749 11:25:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:40.749 11:25:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:41.006 11:25:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.262 11:25:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.262 [ 00:15:41.262 { 00:15:41.262 "name": "BaseBdev1", 00:15:41.262 "aliases": [ 00:15:41.262 "7cf7d3d9-f500-4e12-9192-f8a2058f409c" 00:15:41.262 ], 00:15:41.262 "product_name": "Malloc disk", 00:15:41.262 "block_size": 512, 00:15:41.262 "num_blocks": 65536, 00:15:41.262 "uuid": "7cf7d3d9-f500-4e12-9192-f8a2058f409c", 00:15:41.262 "assigned_rate_limits": { 00:15:41.262 "rw_ios_per_sec": 0, 00:15:41.262 "rw_mbytes_per_sec": 0, 00:15:41.262 "r_mbytes_per_sec": 0, 00:15:41.262 "w_mbytes_per_sec": 0 
00:15:41.262 }, 00:15:41.262 "claimed": true, 00:15:41.262 "claim_type": "exclusive_write", 00:15:41.262 "zoned": false, 00:15:41.262 "supported_io_types": { 00:15:41.262 "read": true, 00:15:41.262 "write": true, 00:15:41.262 "unmap": true, 00:15:41.262 "write_zeroes": true, 00:15:41.262 "flush": true, 00:15:41.262 "reset": true, 00:15:41.262 "compare": false, 00:15:41.262 "compare_and_write": false, 00:15:41.262 "abort": true, 00:15:41.262 "nvme_admin": false, 00:15:41.262 "nvme_io": false 00:15:41.262 }, 00:15:41.262 "memory_domains": [ 00:15:41.262 { 00:15:41.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.262 "dma_device_type": 2 00:15:41.262 } 00:15:41.262 ], 00:15:41.262 "driver_specific": {} 00:15:41.262 } 00:15:41.262 ] 00:15:41.262 11:25:59 -- common/autotest_common.sh@905 -- # return 0 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.262 11:25:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.519 11:25:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.520 "name": "Existed_Raid", 00:15:41.520 "uuid": "e28929c2-9d7f-407f-8384-dc40cb401f65", 00:15:41.520 "strip_size_kb": 0, 00:15:41.520 "state": "configuring", 00:15:41.520 "raid_level": "raid1", 00:15:41.520 "superblock": true, 00:15:41.520 "num_base_bdevs": 3, 00:15:41.520 "num_base_bdevs_discovered": 1, 00:15:41.520 "num_base_bdevs_operational": 3, 00:15:41.520 "base_bdevs_list": [ 00:15:41.520 { 00:15:41.520 "name": "BaseBdev1", 00:15:41.520 "uuid": "7cf7d3d9-f500-4e12-9192-f8a2058f409c", 00:15:41.520 "is_configured": true, 00:15:41.520 "data_offset": 2048, 00:15:41.520 "data_size": 63488 00:15:41.520 }, 00:15:41.520 { 00:15:41.520 "name": "BaseBdev2", 00:15:41.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.520 "is_configured": false, 00:15:41.520 "data_offset": 0, 00:15:41.520 "data_size": 0 00:15:41.520 }, 00:15:41.520 { 00:15:41.520 "name": "BaseBdev3", 00:15:41.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.520 "is_configured": false, 00:15:41.520 "data_offset": 0, 00:15:41.520 "data_size": 0 00:15:41.520 } 00:15:41.520 ] 00:15:41.520 }' 00:15:41.520 11:25:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.520 11:25:59 -- common/autotest_common.sh@10 -- # set +x 00:15:41.777 11:25:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:42.035 [2024-11-26 11:26:00.198873] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.035 [2024-11-26 11:26:00.199202] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:15:42.035 11:26:00 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:42.035 11:26:00 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:42.292 11:26:00 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.550 BaseBdev1 00:15:42.550 11:26:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:42.550 11:26:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:42.550 11:26:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.550 11:26:00 -- common/autotest_common.sh@899 -- # local i 00:15:42.550 11:26:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.550 11:26:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.550 11:26:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:42.810 11:26:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.810 [ 00:15:42.810 { 00:15:42.810 "name": "BaseBdev1", 00:15:42.810 "aliases": [ 00:15:42.810 "0c8c6833-503b-45ab-a03c-87028b2f1cd8" 00:15:42.810 ], 00:15:42.810 "product_name": "Malloc disk", 00:15:42.810 "block_size": 512, 00:15:42.810 "num_blocks": 65536, 00:15:42.810 "uuid": "0c8c6833-503b-45ab-a03c-87028b2f1cd8", 00:15:42.810 "assigned_rate_limits": { 00:15:42.810 "rw_ios_per_sec": 0, 00:15:42.810 "rw_mbytes_per_sec": 0, 00:15:42.810 "r_mbytes_per_sec": 0, 00:15:42.810 "w_mbytes_per_sec": 0 00:15:42.810 }, 00:15:42.810 "claimed": false, 00:15:42.810 "zoned": false, 00:15:42.810 "supported_io_types": { 00:15:42.810 "read": true, 00:15:42.810 "write": true, 00:15:42.810 "unmap": true, 00:15:42.810 "write_zeroes": true, 00:15:42.810 "flush": true, 00:15:42.810 "reset": true, 00:15:42.810 "compare": false, 00:15:42.810 "compare_and_write": false, 00:15:42.810 "abort": true, 00:15:42.810 "nvme_admin": false, 00:15:42.810 "nvme_io": false 00:15:42.810 }, 00:15:42.810 "memory_domains": [ 00:15:42.810 { 00:15:42.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.810 "dma_device_type": 2 00:15:42.810 } 00:15:42.810 ], 00:15:42.810 "driver_specific": {} 00:15:42.810 } 00:15:42.810 ] 00:15:43.079 11:26:01 -- common/autotest_common.sh@905 -- # return 0 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:43.079 [2024-11-26 11:26:01.232417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.079 [2024-11-26 11:26:01.234694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.079 [2024-11-26 11:26:01.234770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.079 [2024-11-26 11:26:01.234791] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.079 [2024-11-26 11:26:01.234805] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:43.079 11:26:01 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.079 11:26:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.351 11:26:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.351 "name": "Existed_Raid", 00:15:43.351 "uuid": "6de659ed-9caf-4f16-ad10-cb752ac55a18", 00:15:43.352 "strip_size_kb": 0, 00:15:43.352 "state": "configuring", 00:15:43.352 "raid_level": "raid1", 00:15:43.352 "superblock": true, 00:15:43.352 "num_base_bdevs": 3, 00:15:43.352 "num_base_bdevs_discovered": 1, 00:15:43.352 "num_base_bdevs_operational": 3, 00:15:43.352 "base_bdevs_list": [ 00:15:43.352 { 00:15:43.352 "name": "BaseBdev1", 00:15:43.352 "uuid": "0c8c6833-503b-45ab-a03c-87028b2f1cd8", 00:15:43.352 "is_configured": true, 00:15:43.352 "data_offset": 2048, 00:15:43.352 "data_size": 63488 00:15:43.352 }, 00:15:43.352 { 00:15:43.352 "name": "BaseBdev2", 00:15:43.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.352 "is_configured": false, 00:15:43.352 "data_offset": 0, 00:15:43.352 "data_size": 0 00:15:43.352 }, 00:15:43.352 { 00:15:43.352 "name": "BaseBdev3", 00:15:43.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.352 "is_configured": false, 00:15:43.352 "data_offset": 0, 00:15:43.352 "data_size": 0 00:15:43.352 } 00:15:43.352 ] 00:15:43.352 }' 00:15:43.352 11:26:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.352 11:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:43.609 11:26:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.867 [2024-11-26 11:26:01.983793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.867 BaseBdev2 00:15:43.867 11:26:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:43.867 11:26:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:43.867 11:26:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:43.868 11:26:02 -- common/autotest_common.sh@899 -- # local i 00:15:43.868 11:26:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:43.868 11:26:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:43.868 11:26:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.126 11:26:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.386 [ 00:15:44.386 { 00:15:44.386 "name": "BaseBdev2", 00:15:44.386 "aliases": [ 00:15:44.386 
"0d040464-f308-4907-80a5-1cdca445695b" 00:15:44.386 ], 00:15:44.386 "product_name": "Malloc disk", 00:15:44.386 "block_size": 512, 00:15:44.386 "num_blocks": 65536, 00:15:44.386 "uuid": "0d040464-f308-4907-80a5-1cdca445695b", 00:15:44.386 "assigned_rate_limits": { 00:15:44.386 "rw_ios_per_sec": 0, 00:15:44.386 "rw_mbytes_per_sec": 0, 00:15:44.386 "r_mbytes_per_sec": 0, 00:15:44.386 "w_mbytes_per_sec": 0 00:15:44.386 }, 00:15:44.386 "claimed": true, 00:15:44.386 "claim_type": "exclusive_write", 00:15:44.386 "zoned": false, 00:15:44.386 "supported_io_types": { 00:15:44.386 "read": true, 00:15:44.386 "write": true, 00:15:44.386 "unmap": true, 00:15:44.386 "write_zeroes": true, 00:15:44.386 "flush": true, 00:15:44.386 "reset": true, 00:15:44.386 "compare": false, 00:15:44.386 "compare_and_write": false, 00:15:44.386 "abort": true, 00:15:44.386 "nvme_admin": false, 00:15:44.386 "nvme_io": false 00:15:44.386 }, 00:15:44.386 "memory_domains": [ 00:15:44.386 { 00:15:44.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.386 "dma_device_type": 2 00:15:44.386 } 00:15:44.386 ], 00:15:44.386 "driver_specific": {} 00:15:44.386 } 00:15:44.386 ] 00:15:44.386 11:26:02 -- common/autotest_common.sh@905 -- # return 0 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.386 "name": "Existed_Raid", 00:15:44.386 "uuid": "6de659ed-9caf-4f16-ad10-cb752ac55a18", 00:15:44.386 "strip_size_kb": 0, 00:15:44.386 "state": "configuring", 00:15:44.386 "raid_level": "raid1", 00:15:44.386 "superblock": true, 00:15:44.386 "num_base_bdevs": 3, 00:15:44.386 "num_base_bdevs_discovered": 2, 00:15:44.386 "num_base_bdevs_operational": 3, 00:15:44.386 "base_bdevs_list": [ 00:15:44.386 { 00:15:44.386 "name": "BaseBdev1", 00:15:44.386 "uuid": "0c8c6833-503b-45ab-a03c-87028b2f1cd8", 00:15:44.386 "is_configured": true, 00:15:44.386 "data_offset": 2048, 00:15:44.386 "data_size": 63488 00:15:44.386 }, 00:15:44.386 { 00:15:44.386 "name": "BaseBdev2", 00:15:44.386 "uuid": "0d040464-f308-4907-80a5-1cdca445695b", 00:15:44.386 "is_configured": true, 00:15:44.386 "data_offset": 2048, 00:15:44.386 "data_size": 63488 00:15:44.386 }, 00:15:44.386 { 00:15:44.386 "name": "BaseBdev3", 00:15:44.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.386 "is_configured": false, 00:15:44.386 "data_offset": 0, 00:15:44.386 "data_size": 0 00:15:44.386 } 
00:15:44.386 ] 00:15:44.386 }' 00:15:44.386 11:26:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.386 11:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:44.955 11:26:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.955 [2024-11-26 11:26:03.132821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.955 [2024-11-26 11:26:03.133394] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:44.955 [2024-11-26 11:26:03.133566] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:44.955 [2024-11-26 11:26:03.133736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:44.955 BaseBdev3 00:15:44.955 [2024-11-26 11:26:03.134262] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:44.956 [2024-11-26 11:26:03.134284] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:44.956 [2024-11-26 11:26:03.134463] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.956 11:26:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:44.956 11:26:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:44.956 11:26:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:44.956 11:26:03 -- common/autotest_common.sh@899 -- # local i 00:15:44.956 11:26:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:44.956 11:26:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:44.956 11:26:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.215 11:26:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:45.473 [ 00:15:45.473 { 00:15:45.473 "name": "BaseBdev3", 00:15:45.473 "aliases": [ 00:15:45.473 "e6218b23-3afa-4770-9f96-1d003c922d31" 00:15:45.473 ], 00:15:45.473 "product_name": "Malloc disk", 00:15:45.473 "block_size": 512, 00:15:45.473 "num_blocks": 65536, 00:15:45.473 "uuid": "e6218b23-3afa-4770-9f96-1d003c922d31", 00:15:45.473 "assigned_rate_limits": { 00:15:45.473 "rw_ios_per_sec": 0, 00:15:45.473 "rw_mbytes_per_sec": 0, 00:15:45.473 "r_mbytes_per_sec": 0, 00:15:45.473 "w_mbytes_per_sec": 0 00:15:45.473 }, 00:15:45.473 "claimed": true, 00:15:45.473 "claim_type": "exclusive_write", 00:15:45.473 "zoned": false, 00:15:45.473 "supported_io_types": { 00:15:45.473 "read": true, 00:15:45.473 "write": true, 00:15:45.473 "unmap": true, 00:15:45.473 "write_zeroes": true, 00:15:45.473 "flush": true, 00:15:45.473 "reset": true, 00:15:45.473 "compare": false, 00:15:45.473 "compare_and_write": false, 00:15:45.473 "abort": true, 00:15:45.473 "nvme_admin": false, 00:15:45.473 "nvme_io": false 00:15:45.473 }, 00:15:45.473 "memory_domains": [ 00:15:45.473 { 00:15:45.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.473 "dma_device_type": 2 00:15:45.473 } 00:15:45.473 ], 00:15:45.473 "driver_specific": {} 00:15:45.473 } 00:15:45.473 ] 00:15:45.473 11:26:03 -- common/autotest_common.sh@905 -- # return 0 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.473 11:26:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.732 11:26:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.732 "name": "Existed_Raid", 00:15:45.732 "uuid": "6de659ed-9caf-4f16-ad10-cb752ac55a18", 00:15:45.732 "strip_size_kb": 0, 00:15:45.732 "state": "online", 00:15:45.732 "raid_level": "raid1", 00:15:45.732 "superblock": true, 00:15:45.732 "num_base_bdevs": 3, 00:15:45.732 "num_base_bdevs_discovered": 3, 00:15:45.732 "num_base_bdevs_operational": 3, 00:15:45.732 "base_bdevs_list": [ 00:15:45.732 { 00:15:45.733 "name": "BaseBdev1", 00:15:45.733 "uuid": "0c8c6833-503b-45ab-a03c-87028b2f1cd8", 00:15:45.733 "is_configured": true, 00:15:45.733 "data_offset": 2048, 00:15:45.733 "data_size": 63488 00:15:45.733 }, 00:15:45.733 { 00:15:45.733 "name": "BaseBdev2", 00:15:45.733 "uuid": "0d040464-f308-4907-80a5-1cdca445695b", 00:15:45.733 "is_configured": true, 00:15:45.733 "data_offset": 2048, 00:15:45.733 "data_size": 63488 00:15:45.733 }, 00:15:45.733 { 00:15:45.733 "name": "BaseBdev3", 00:15:45.733 "uuid": "e6218b23-3afa-4770-9f96-1d003c922d31", 00:15:45.733 "is_configured": true, 00:15:45.733 "data_offset": 2048, 00:15:45.733 "data_size": 63488 00:15:45.733 } 00:15:45.733 ] 00:15:45.733 }' 00:15:45.733 11:26:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.733 11:26:03 -- common/autotest_common.sh@10 -- # set +x 00:15:45.991 11:26:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.250 [2024-11-26 11:26:04.317365] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.250 11:26:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.508 11:26:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.508 "name": "Existed_Raid", 00:15:46.508 "uuid": "6de659ed-9caf-4f16-ad10-cb752ac55a18", 00:15:46.508 "strip_size_kb": 0, 00:15:46.508 "state": "online", 00:15:46.508 "raid_level": "raid1", 00:15:46.508 "superblock": true, 00:15:46.508 "num_base_bdevs": 3, 00:15:46.508 "num_base_bdevs_discovered": 2, 00:15:46.509 "num_base_bdevs_operational": 2, 00:15:46.509 "base_bdevs_list": [ 00:15:46.509 { 00:15:46.509 "name": null, 00:15:46.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.509 "is_configured": false, 00:15:46.509 "data_offset": 2048, 00:15:46.509 "data_size": 63488 00:15:46.509 }, 00:15:46.509 { 00:15:46.509 "name": "BaseBdev2", 00:15:46.509 "uuid": "0d040464-f308-4907-80a5-1cdca445695b", 00:15:46.509 "is_configured": true, 00:15:46.509 "data_offset": 2048, 00:15:46.509 "data_size": 63488 00:15:46.509 }, 00:15:46.509 { 00:15:46.509 "name": "BaseBdev3", 00:15:46.509 "uuid": "e6218b23-3afa-4770-9f96-1d003c922d31", 00:15:46.509 "is_configured": true, 00:15:46.509 "data_offset": 2048, 00:15:46.509 "data_size": 63488 00:15:46.509 } 00:15:46.509 ] 00:15:46.509 }' 00:15:46.509 11:26:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.509 11:26:04 -- common/autotest_common.sh@10 -- # set +x 00:15:46.767 11:26:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:46.767 11:26:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:46.767 11:26:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.767 11:26:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:47.025 11:26:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:47.025 11:26:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.025 11:26:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:47.284 [2024-11-26 11:26:05.308783] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.284 11:26:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:47.284 11:26:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.284 11:26:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.284 11:26:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:47.542 11:26:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:47.542 11:26:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.542 11:26:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:47.800 [2024-11-26 11:26:05.808190] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.800 [2024-11-26 11:26:05.808260] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.800 [2024-11-26 11:26:05.808345] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.800 [2024-11-26 11:26:05.815732] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.800 [2024-11-26 11:26:05.816040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:47.800 11:26:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:47.800 11:26:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.800 11:26:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:47.800 11:26:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.058 11:26:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:48.058 11:26:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:48.058 11:26:06 -- bdev/bdev_raid.sh@287 -- # killprocess 83700 00:15:48.058 11:26:06 -- common/autotest_common.sh@936 -- # '[' -z 83700 ']' 00:15:48.058 11:26:06 -- common/autotest_common.sh@940 -- # kill -0 83700 00:15:48.058 11:26:06 -- common/autotest_common.sh@941 -- # uname 00:15:48.058 11:26:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.058 11:26:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83700 00:15:48.058 killing process with pid 83700 00:15:48.058 11:26:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.058 11:26:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.058 11:26:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83700' 00:15:48.059 11:26:06 -- common/autotest_common.sh@955 -- # kill 83700 00:15:48.059 [2024-11-26 11:26:06.121469] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.059 11:26:06 -- common/autotest_common.sh@960 -- # wait 83700 00:15:48.059 [2024-11-26 11:26:06.121549] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:48.317 00:15:48.317 real 0m9.710s 00:15:48.317 user 0m17.033s 00:15:48.317 sys 0m1.494s 00:15:48.317 11:26:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:48.317 11:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:48.317 ************************************ 00:15:48.317 END TEST raid_state_function_test_sb 00:15:48.317 ************************************ 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:15:48.317 11:26:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:48.317 11:26:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.317 11:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:48.317 ************************************ 00:15:48.317 START TEST raid_superblock_test 00:15:48.317 ************************************ 00:15:48.317 11:26:06 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@343 -- # local 
raid_bdev_name=raid_bdev1 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@357 -- # raid_pid=84040 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:48.317 11:26:06 -- bdev/bdev_raid.sh@358 -- # waitforlisten 84040 /var/tmp/spdk-raid.sock 00:15:48.317 11:26:06 -- common/autotest_common.sh@829 -- # '[' -z 84040 ']' 00:15:48.317 11:26:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:48.317 11:26:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.317 11:26:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:48.317 11:26:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.318 11:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 [2024-11-26 11:26:06.415298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:48.318 [2024-11-26 11:26:06.415423] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84040 ] 00:15:48.576 [2024-11-26 11:26:06.570829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.576 [2024-11-26 11:26:06.604655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.576 [2024-11-26 11:26:06.636323] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.511 11:26:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.511 11:26:07 -- common/autotest_common.sh@862 -- # return 0 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:49.511 malloc1 00:15:49.511 11:26:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.770 [2024-11-26 11:26:07.827007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.770 [2024-11-26 11:26:07.827356] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:49.770 [2024-11-26 11:26:07.827410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:49.770 [2024-11-26 11:26:07.827433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.770 [2024-11-26 11:26:07.830168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.770 [2024-11-26 11:26:07.830211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.770 pt1 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.770 11:26:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:50.029 malloc2 00:15:50.029 11:26:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.287 [2024-11-26 11:26:08.338131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.287 [2024-11-26 11:26:08.338237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.287 [2024-11-26 11:26:08.338287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:50.287 [2024-11-26 11:26:08.338301] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.287 [2024-11-26 11:26:08.340734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.287 [2024-11-26 11:26:08.340776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.287 pt2 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.287 11:26:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:50.545 malloc3 00:15:50.545 11:26:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.803 [2024-11-26 11:26:08.799486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.804 [2024-11-26 11:26:08.799825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:50.804 [2024-11-26 11:26:08.799894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:15:50.804 [2024-11-26 11:26:08.799914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.804 [2024-11-26 11:26:08.802535] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.804 [2024-11-26 11:26:08.802576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.804 pt3 00:15:50.804 11:26:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:50.804 11:26:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.804 11:26:08 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:50.804 [2024-11-26 11:26:09.011626] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.804 [2024-11-26 11:26:09.014450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.804 [2024-11-26 11:26:09.014530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.804 [2024-11-26 11:26:09.014734] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:15:50.804 [2024-11-26 11:26:09.014758] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:50.804 [2024-11-26 11:26:09.014870] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:50.804 [2024-11-26 11:26:09.015567] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:15:50.804 [2024-11-26 11:26:09.015703] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:15:50.804 [2024-11-26 11:26:09.015977] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.804 11:26:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.370 11:26:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.370 "name": "raid_bdev1", 00:15:51.370 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:51.370 "strip_size_kb": 0, 00:15:51.370 "state": "online", 00:15:51.370 "raid_level": "raid1", 00:15:51.370 "superblock": true, 00:15:51.370 "num_base_bdevs": 3, 00:15:51.370 "num_base_bdevs_discovered": 3, 00:15:51.370 "num_base_bdevs_operational": 3, 00:15:51.370 "base_bdevs_list": [ 00:15:51.370 { 00:15:51.370 "name": "pt1", 00:15:51.370 "uuid": 
"2ed309f6-e0eb-5064-8bfe-3b89f2bc3a7d", 00:15:51.370 "is_configured": true, 00:15:51.370 "data_offset": 2048, 00:15:51.370 "data_size": 63488 00:15:51.370 }, 00:15:51.370 { 00:15:51.370 "name": "pt2", 00:15:51.370 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:51.370 "is_configured": true, 00:15:51.370 "data_offset": 2048, 00:15:51.370 "data_size": 63488 00:15:51.370 }, 00:15:51.370 { 00:15:51.370 "name": "pt3", 00:15:51.370 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:51.370 "is_configured": true, 00:15:51.370 "data_offset": 2048, 00:15:51.370 "data_size": 63488 00:15:51.370 } 00:15:51.370 ] 00:15:51.370 }' 00:15:51.370 11:26:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.370 11:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:51.628 11:26:09 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:51.628 11:26:09 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:51.628 [2024-11-26 11:26:09.824396] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.628 11:26:09 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=df2c3af6-8bb7-45b2-b2e1-695efed2bd00 00:15:51.628 11:26:09 -- bdev/bdev_raid.sh@380 -- # '[' -z df2c3af6-8bb7-45b2-b2e1-695efed2bd00 ']' 00:15:51.628 11:26:09 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:51.886 [2024-11-26 11:26:10.084231] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.886 [2024-11-26 11:26:10.084267] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.886 [2024-11-26 11:26:10.084395] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.886 [2024-11-26 11:26:10.084491] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.886 [2024-11-26 11:26:10.084512] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:15:51.886 11:26:10 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.886 11:26:10 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:52.144 11:26:10 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:52.144 11:26:10 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:52.144 11:26:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.144 11:26:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:52.402 11:26:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.402 11:26:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:52.660 11:26:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:52.660 11:26:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:52.918 11:26:10 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:52.918 11:26:10 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:53.176 11:26:11 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:53.176 11:26:11 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:53.176 11:26:11 -- common/autotest_common.sh@650 -- # local es=0 00:15:53.176 11:26:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:53.176 11:26:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.176 11:26:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.176 11:26:11 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.176 11:26:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.176 11:26:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.176 11:26:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.176 11:26:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.176 11:26:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:53.176 11:26:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:53.434 [2024-11-26 11:26:11.420611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:53.434 [2024-11-26 11:26:11.422833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:53.434 [2024-11-26 11:26:11.422919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:53.434 [2024-11-26 11:26:11.422984] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:53.434 [2024-11-26 11:26:11.423096] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:53.434 [2024-11-26 11:26:11.423134] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:53.434 [2024-11-26 11:26:11.423155] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.434 [2024-11-26 11:26:11.423170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:15:53.434 request: 00:15:53.434 { 00:15:53.434 "name": "raid_bdev1", 00:15:53.434 "raid_level": "raid1", 00:15:53.434 "base_bdevs": [ 00:15:53.434 "malloc1", 00:15:53.434 "malloc2", 00:15:53.434 "malloc3" 00:15:53.434 ], 00:15:53.434 "superblock": false, 00:15:53.434 "method": "bdev_raid_create", 00:15:53.434 "req_id": 1 00:15:53.434 } 00:15:53.434 Got JSON-RPC error response 00:15:53.434 response: 00:15:53.434 { 00:15:53.434 "code": -17, 00:15:53.434 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:53.434 } 00:15:53.434 11:26:11 -- common/autotest_common.sh@653 -- # es=1 00:15:53.434 11:26:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:53.434 11:26:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:53.434 11:26:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:53.434 11:26:11 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.434 
11:26:11 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.693 [2024-11-26 11:26:11.884699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.693 [2024-11-26 11:26:11.884789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.693 [2024-11-26 11:26:11.884832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:53.693 [2024-11-26 11:26:11.884848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.693 [2024-11-26 11:26:11.887414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.693 [2024-11-26 11:26:11.887474] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.693 [2024-11-26 11:26:11.887566] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:53.693 [2024-11-26 11:26:11.887630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.693 pt1 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.693 11:26:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.951 11:26:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.951 "name": "raid_bdev1", 00:15:53.951 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:53.951 "strip_size_kb": 0, 00:15:53.951 "state": "configuring", 00:15:53.951 "raid_level": "raid1", 00:15:53.951 "superblock": true, 00:15:53.951 "num_base_bdevs": 3, 00:15:53.951 "num_base_bdevs_discovered": 1, 00:15:53.951 "num_base_bdevs_operational": 3, 00:15:53.951 "base_bdevs_list": [ 00:15:53.951 { 00:15:53.951 "name": "pt1", 00:15:53.951 "uuid": "2ed309f6-e0eb-5064-8bfe-3b89f2bc3a7d", 00:15:53.951 "is_configured": true, 00:15:53.951 "data_offset": 2048, 00:15:53.951 "data_size": 63488 00:15:53.951 }, 00:15:53.951 { 00:15:53.951 "name": null, 00:15:53.951 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:53.951 "is_configured": false, 00:15:53.951 "data_offset": 2048, 00:15:53.951 "data_size": 63488 00:15:53.951 }, 00:15:53.951 { 00:15:53.951 "name": null, 00:15:53.951 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:53.951 "is_configured": false, 00:15:53.951 "data_offset": 2048, 00:15:53.951 "data_size": 63488 00:15:53.951 } 
00:15:53.951 ] 00:15:53.951 }' 00:15:53.951 11:26:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.951 11:26:12 -- common/autotest_common.sh@10 -- # set +x 00:15:54.519 11:26:12 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:54.519 11:26:12 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.519 [2024-11-26 11:26:12.684999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.519 [2024-11-26 11:26:12.685089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.519 [2024-11-26 11:26:12.685119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:15:54.519 [2024-11-26 11:26:12.685135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.519 [2024-11-26 11:26:12.685595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.519 [2024-11-26 11:26:12.685626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.519 [2024-11-26 11:26:12.685704] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:54.519 [2024-11-26 11:26:12.685745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.519 pt2 00:15:54.519 11:26:12 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:54.777 [2024-11-26 11:26:12.893056] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.777 11:26:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.036 11:26:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.036 "name": "raid_bdev1", 00:15:55.036 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:55.036 "strip_size_kb": 0, 00:15:55.036 "state": "configuring", 00:15:55.036 "raid_level": "raid1", 00:15:55.036 "superblock": true, 00:15:55.036 "num_base_bdevs": 3, 00:15:55.036 "num_base_bdevs_discovered": 1, 00:15:55.036 "num_base_bdevs_operational": 3, 00:15:55.036 "base_bdevs_list": [ 00:15:55.036 { 00:15:55.036 "name": "pt1", 00:15:55.036 "uuid": "2ed309f6-e0eb-5064-8bfe-3b89f2bc3a7d", 00:15:55.036 "is_configured": true, 00:15:55.036 "data_offset": 2048, 00:15:55.036 "data_size": 63488 00:15:55.036 }, 00:15:55.036 { 00:15:55.036 "name": null, 00:15:55.036 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:55.036 "is_configured": false, 
00:15:55.036 "data_offset": 2048, 00:15:55.036 "data_size": 63488 00:15:55.036 }, 00:15:55.036 { 00:15:55.036 "name": null, 00:15:55.036 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:55.036 "is_configured": false, 00:15:55.036 "data_offset": 2048, 00:15:55.036 "data_size": 63488 00:15:55.036 } 00:15:55.036 ] 00:15:55.036 }' 00:15:55.036 11:26:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.036 11:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:55.294 11:26:13 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:55.294 11:26:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.294 11:26:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.552 [2024-11-26 11:26:13.661299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.552 [2024-11-26 11:26:13.661385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.552 [2024-11-26 11:26:13.661417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:15:55.552 [2024-11-26 11:26:13.661430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.552 [2024-11-26 11:26:13.661856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.552 [2024-11-26 11:26:13.661935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.552 [2024-11-26 11:26:13.662051] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:55.552 [2024-11-26 11:26:13.662079] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.552 pt2 00:15:55.552 11:26:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:55.552 11:26:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.552 11:26:13 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.811 [2024-11-26 11:26:13.917359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.811 [2024-11-26 11:26:13.917448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.811 [2024-11-26 11:26:13.917481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:15:55.811 [2024-11-26 11:26:13.917511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.811 [2024-11-26 11:26:13.917989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.811 [2024-11-26 11:26:13.918014] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.811 [2024-11-26 11:26:13.918091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:55.811 [2024-11-26 11:26:13.918118] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.811 [2024-11-26 11:26:13.918268] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:15:55.811 [2024-11-26 11:26:13.918284] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.811 [2024-11-26 11:26:13.918364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:55.811 [2024-11-26 11:26:13.918706] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:15:55.811 [2024-11-26 11:26:13.918726] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:15:55.811 [2024-11-26 11:26:13.918841] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.811 pt3 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.812 11:26:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.070 11:26:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.070 "name": "raid_bdev1", 00:15:56.070 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:56.070 "strip_size_kb": 0, 00:15:56.070 "state": "online", 00:15:56.070 "raid_level": "raid1", 00:15:56.070 "superblock": true, 00:15:56.070 "num_base_bdevs": 3, 00:15:56.070 "num_base_bdevs_discovered": 3, 00:15:56.070 "num_base_bdevs_operational": 3, 00:15:56.070 "base_bdevs_list": [ 00:15:56.070 { 00:15:56.070 "name": "pt1", 00:15:56.070 "uuid": "2ed309f6-e0eb-5064-8bfe-3b89f2bc3a7d", 00:15:56.070 "is_configured": true, 00:15:56.070 "data_offset": 2048, 00:15:56.070 "data_size": 63488 00:15:56.070 }, 00:15:56.070 { 00:15:56.070 "name": "pt2", 00:15:56.070 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:56.070 "is_configured": true, 00:15:56.070 "data_offset": 2048, 00:15:56.070 "data_size": 63488 00:15:56.070 }, 00:15:56.070 { 00:15:56.070 "name": "pt3", 00:15:56.070 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:56.070 "is_configured": true, 00:15:56.070 "data_offset": 2048, 00:15:56.070 "data_size": 63488 00:15:56.070 } 00:15:56.070 ] 00:15:56.070 }' 00:15:56.070 11:26:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.070 11:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:56.329 11:26:14 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.329 11:26:14 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:56.587 [2024-11-26 11:26:14.709849] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.587 11:26:14 -- bdev/bdev_raid.sh@430 -- # '[' df2c3af6-8bb7-45b2-b2e1-695efed2bd00 '!=' df2c3af6-8bb7-45b2-b2e1-695efed2bd00 ']' 00:15:56.587 11:26:14 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:56.587 11:26:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:56.587 11:26:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:56.587 11:26:14 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:56.845 [2024-11-26 11:26:14.969708] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.845 11:26:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.104 11:26:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.104 "name": "raid_bdev1", 00:15:57.104 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:57.104 "strip_size_kb": 0, 00:15:57.104 "state": "online", 00:15:57.104 "raid_level": "raid1", 00:15:57.104 "superblock": true, 00:15:57.104 "num_base_bdevs": 3, 00:15:57.104 "num_base_bdevs_discovered": 2, 00:15:57.104 "num_base_bdevs_operational": 2, 00:15:57.104 "base_bdevs_list": [ 00:15:57.104 { 00:15:57.104 "name": null, 00:15:57.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.104 "is_configured": false, 00:15:57.104 "data_offset": 2048, 00:15:57.104 "data_size": 63488 00:15:57.104 }, 00:15:57.104 { 00:15:57.104 "name": "pt2", 00:15:57.104 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:57.104 "is_configured": true, 00:15:57.104 "data_offset": 2048, 00:15:57.104 "data_size": 63488 00:15:57.104 }, 00:15:57.104 { 00:15:57.104 "name": "pt3", 00:15:57.104 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:57.104 "is_configured": true, 00:15:57.104 "data_offset": 2048, 00:15:57.104 "data_size": 63488 00:15:57.104 } 00:15:57.104 ] 00:15:57.104 }' 00:15:57.104 11:26:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.104 11:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:57.361 11:26:15 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:57.619 [2024-11-26 11:26:15.673886] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.619 [2024-11-26 11:26:15.673947] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.619 [2024-11-26 11:26:15.674025] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.619 [2024-11-26 11:26:15.674102] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.619 [2024-11-26 11:26:15.674121] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:15:57.619 11:26:15 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.619 11:26:15 -- 
bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:57.877 11:26:15 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:57.878 11:26:15 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:57.878 11:26:15 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:57.878 11:26:15 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:57.878 11:26:15 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:58.135 11:26:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:58.135 11:26:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:58.135 11:26:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:58.393 11:26:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.394 [2024-11-26 11:26:16.579190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.394 [2024-11-26 11:26:16.579479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.394 [2024-11-26 11:26:16.579521] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:15:58.394 [2024-11-26 11:26:16.579539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.394 [2024-11-26 11:26:16.582037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.394 [2024-11-26 11:26:16.582083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.394 [2024-11-26 11:26:16.582161] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:58.394 [2024-11-26 11:26:16.582210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.394 pt2 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.394 11:26:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.652 11:26:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.652 "name": "raid_bdev1", 00:15:58.652 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:58.652 "strip_size_kb": 0, 00:15:58.652 "state": "configuring", 00:15:58.652 "raid_level": "raid1", 00:15:58.652 
"superblock": true, 00:15:58.652 "num_base_bdevs": 3, 00:15:58.652 "num_base_bdevs_discovered": 1, 00:15:58.652 "num_base_bdevs_operational": 2, 00:15:58.652 "base_bdevs_list": [ 00:15:58.652 { 00:15:58.652 "name": null, 00:15:58.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.652 "is_configured": false, 00:15:58.652 "data_offset": 2048, 00:15:58.652 "data_size": 63488 00:15:58.652 }, 00:15:58.652 { 00:15:58.652 "name": "pt2", 00:15:58.652 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:58.652 "is_configured": true, 00:15:58.652 "data_offset": 2048, 00:15:58.652 "data_size": 63488 00:15:58.652 }, 00:15:58.652 { 00:15:58.652 "name": null, 00:15:58.652 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:58.652 "is_configured": false, 00:15:58.652 "data_offset": 2048, 00:15:58.652 "data_size": 63488 00:15:58.652 } 00:15:58.652 ] 00:15:58.652 }' 00:15:58.652 11:26:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.652 11:26:16 -- common/autotest_common.sh@10 -- # set +x 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@462 -- # i=2 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.266 [2024-11-26 11:26:17.395492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:59.266 [2024-11-26 11:26:17.395608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.266 [2024-11-26 11:26:17.395653] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:15:59.266 [2024-11-26 11:26:17.395668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.266 [2024-11-26 11:26:17.396366] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.266 [2024-11-26 11:26:17.396531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.266 [2024-11-26 11:26:17.396731] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:59.266 [2024-11-26 11:26:17.396910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.266 [2024-11-26 11:26:17.397055] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:15:59.266 [2024-11-26 11:26:17.397077] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.266 [2024-11-26 11:26:17.397168] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:15:59.266 [2024-11-26 11:26:17.397544] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:15:59.266 [2024-11-26 11:26:17.397560] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:15:59.266 [2024-11-26 11:26:17.397671] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.266 pt3 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:59.266 11:26:17 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.266 11:26:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.528 11:26:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.528 "name": "raid_bdev1", 00:15:59.528 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:15:59.528 "strip_size_kb": 0, 00:15:59.528 "state": "online", 00:15:59.528 "raid_level": "raid1", 00:15:59.528 "superblock": true, 00:15:59.528 "num_base_bdevs": 3, 00:15:59.528 "num_base_bdevs_discovered": 2, 00:15:59.528 "num_base_bdevs_operational": 2, 00:15:59.528 "base_bdevs_list": [ 00:15:59.528 { 00:15:59.528 "name": null, 00:15:59.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.528 "is_configured": false, 00:15:59.528 "data_offset": 2048, 00:15:59.528 "data_size": 63488 00:15:59.528 }, 00:15:59.528 { 00:15:59.528 "name": "pt2", 00:15:59.528 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:15:59.528 "is_configured": true, 00:15:59.528 "data_offset": 2048, 00:15:59.528 "data_size": 63488 00:15:59.528 }, 00:15:59.528 { 00:15:59.528 "name": "pt3", 00:15:59.528 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:15:59.528 "is_configured": true, 00:15:59.528 "data_offset": 2048, 00:15:59.528 "data_size": 63488 00:15:59.528 } 00:15:59.528 ] 00:15:59.528 }' 00:15:59.528 11:26:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.528 11:26:17 -- common/autotest_common.sh@10 -- # set +x 00:15:59.785 11:26:18 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:15:59.785 11:26:18 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:00.043 [2024-11-26 11:26:18.207734] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.043 [2024-11-26 11:26:18.207773] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.043 [2024-11-26 11:26:18.207848] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.043 [2024-11-26 11:26:18.207978] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.043 [2024-11-26 11:26:18.207996] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:16:00.043 11:26:18 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:16:00.043 11:26:18 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.301 11:26:18 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:16:00.301 11:26:18 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:16:00.301 11:26:18 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:00.560 [2024-11-26 11:26:18.715953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:00.560 [2024-11-26 11:26:18.716058] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.560 [2024-11-26 11:26:18.716089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:16:00.560 [2024-11-26 11:26:18.716102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.560 [2024-11-26 11:26:18.718599] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.560 [2024-11-26 11:26:18.718643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:00.560 [2024-11-26 11:26:18.718746] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:00.560 [2024-11-26 11:26:18.718796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:00.560 pt1 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.560 11:26:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.818 11:26:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.818 "name": "raid_bdev1", 00:16:00.818 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:16:00.818 "strip_size_kb": 0, 00:16:00.818 "state": "configuring", 00:16:00.818 "raid_level": "raid1", 00:16:00.818 "superblock": true, 00:16:00.818 "num_base_bdevs": 3, 00:16:00.818 "num_base_bdevs_discovered": 1, 00:16:00.818 "num_base_bdevs_operational": 3, 00:16:00.818 "base_bdevs_list": [ 00:16:00.818 { 00:16:00.818 "name": "pt1", 00:16:00.818 "uuid": "2ed309f6-e0eb-5064-8bfe-3b89f2bc3a7d", 00:16:00.818 "is_configured": true, 00:16:00.818 "data_offset": 2048, 00:16:00.818 "data_size": 63488 00:16:00.818 }, 00:16:00.818 { 00:16:00.818 "name": null, 00:16:00.818 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:16:00.818 "is_configured": false, 00:16:00.818 "data_offset": 2048, 00:16:00.818 "data_size": 63488 00:16:00.818 }, 00:16:00.818 { 00:16:00.818 "name": null, 00:16:00.818 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:16:00.818 "is_configured": false, 00:16:00.818 "data_offset": 2048, 00:16:00.818 "data_size": 63488 00:16:00.818 } 00:16:00.818 ] 00:16:00.818 }' 00:16:00.818 11:26:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.818 11:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:01.077 11:26:19 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:16:01.077 11:26:19 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:01.077 11:26:19 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:01.338 11:26:19 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:01.338 11:26:19 -- 
bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:01.338 11:26:19 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:01.596 11:26:19 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:01.596 11:26:19 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:01.596 11:26:19 -- bdev/bdev_raid.sh@489 -- # i=2 00:16:01.596 11:26:19 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:01.854 [2024-11-26 11:26:20.011586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:01.854 [2024-11-26 11:26:20.011923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.854 [2024-11-26 11:26:20.011969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:16:01.855 [2024-11-26 11:26:20.011985] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.855 [2024-11-26 11:26:20.012476] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.855 [2024-11-26 11:26:20.012498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:01.855 [2024-11-26 11:26:20.012571] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:01.855 [2024-11-26 11:26:20.012587] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:01.855 [2024-11-26 11:26:20.012600] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.855 [2024-11-26 11:26:20.012624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:16:01.855 [2024-11-26 11:26:20.012670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.855 pt3 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.855 11:26:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.112 11:26:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.112 "name": "raid_bdev1", 00:16:02.112 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:16:02.112 "strip_size_kb": 0, 00:16:02.112 "state": "configuring", 00:16:02.112 "raid_level": "raid1", 00:16:02.112 "superblock": true, 00:16:02.112 "num_base_bdevs": 3, 00:16:02.112 "num_base_bdevs_discovered": 1, 00:16:02.112 "num_base_bdevs_operational": 2, 00:16:02.112 "base_bdevs_list": [ 
00:16:02.112 { 00:16:02.112 "name": null, 00:16:02.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.112 "is_configured": false, 00:16:02.112 "data_offset": 2048, 00:16:02.112 "data_size": 63488 00:16:02.112 }, 00:16:02.112 { 00:16:02.112 "name": null, 00:16:02.112 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:16:02.112 "is_configured": false, 00:16:02.112 "data_offset": 2048, 00:16:02.112 "data_size": 63488 00:16:02.112 }, 00:16:02.112 { 00:16:02.112 "name": "pt3", 00:16:02.112 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:16:02.112 "is_configured": true, 00:16:02.112 "data_offset": 2048, 00:16:02.112 "data_size": 63488 00:16:02.112 } 00:16:02.112 ] 00:16:02.112 }' 00:16:02.112 11:26:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.112 11:26:20 -- common/autotest_common.sh@10 -- # set +x 00:16:02.370 11:26:20 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:16:02.370 11:26:20 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:02.370 11:26:20 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.628 [2024-11-26 11:26:20.791844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.628 [2024-11-26 11:26:20.792145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.628 [2024-11-26 11:26:20.792189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:16:02.628 [2024-11-26 11:26:20.792208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.628 [2024-11-26 11:26:20.792654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.628 [2024-11-26 11:26:20.792695] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.628 [2024-11-26 11:26:20.792780] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:02.628 [2024-11-26 11:26:20.792811] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.628 [2024-11-26 11:26:20.792949] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:16:02.628 [2024-11-26 11:26:20.792976] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:02.628 [2024-11-26 11:26:20.793085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:16:02.628 [2024-11-26 11:26:20.793463] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:16:02.628 [2024-11-26 11:26:20.793479] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:16:02.628 [2024-11-26 11:26:20.793593] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.628 pt2 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.628 11:26:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.887 11:26:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.887 "name": "raid_bdev1", 00:16:02.887 "uuid": "df2c3af6-8bb7-45b2-b2e1-695efed2bd00", 00:16:02.887 "strip_size_kb": 0, 00:16:02.887 "state": "online", 00:16:02.887 "raid_level": "raid1", 00:16:02.887 "superblock": true, 00:16:02.887 "num_base_bdevs": 3, 00:16:02.887 "num_base_bdevs_discovered": 2, 00:16:02.887 "num_base_bdevs_operational": 2, 00:16:02.887 "base_bdevs_list": [ 00:16:02.887 { 00:16:02.887 "name": null, 00:16:02.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.887 "is_configured": false, 00:16:02.887 "data_offset": 2048, 00:16:02.887 "data_size": 63488 00:16:02.887 }, 00:16:02.887 { 00:16:02.887 "name": "pt2", 00:16:02.887 "uuid": "5797b020-43ab-533e-9635-ac0a48f44b12", 00:16:02.887 "is_configured": true, 00:16:02.887 "data_offset": 2048, 00:16:02.887 "data_size": 63488 00:16:02.887 }, 00:16:02.887 { 00:16:02.887 "name": "pt3", 00:16:02.887 "uuid": "99184607-50ba-5867-96b1-842893ec0885", 00:16:02.887 "is_configured": true, 00:16:02.887 "data_offset": 2048, 00:16:02.887 "data_size": 63488 00:16:02.887 } 00:16:02.887 ] 00:16:02.887 }' 00:16:02.887 11:26:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.887 11:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:03.145 11:26:21 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:03.145 11:26:21 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:03.403 [2024-11-26 11:26:21.568403] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.403 11:26:21 -- bdev/bdev_raid.sh@506 -- # '[' df2c3af6-8bb7-45b2-b2e1-695efed2bd00 '!=' df2c3af6-8bb7-45b2-b2e1-695efed2bd00 ']' 00:16:03.403 11:26:21 -- bdev/bdev_raid.sh@511 -- # killprocess 84040 00:16:03.403 11:26:21 -- common/autotest_common.sh@936 -- # '[' -z 84040 ']' 00:16:03.403 11:26:21 -- common/autotest_common.sh@940 -- # kill -0 84040 00:16:03.403 11:26:21 -- common/autotest_common.sh@941 -- # uname 00:16:03.403 11:26:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:03.403 11:26:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84040 00:16:03.403 11:26:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:03.403 11:26:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:03.403 killing process with pid 84040 00:16:03.403 11:26:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84040' 00:16:03.403 11:26:21 -- common/autotest_common.sh@955 -- # kill 84040 00:16:03.403 11:26:21 -- common/autotest_common.sh@960 -- # wait 84040 00:16:03.403 [2024-11-26 11:26:21.624429] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.403 [2024-11-26 11:26:21.624513] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.403 [2024-11-26 11:26:21.624607] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.403 [2024-11-26 11:26:21.624622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:16:03.662 [2024-11-26 11:26:21.646800] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:03.662 00:16:03.662 real 0m15.461s 00:16:03.662 user 0m27.661s 00:16:03.662 sys 0m2.365s 00:16:03.662 11:26:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:03.662 ************************************ 00:16:03.662 END TEST raid_superblock_test 00:16:03.662 ************************************ 00:16:03.662 11:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:03.662 11:26:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:03.662 11:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.662 11:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:03.662 ************************************ 00:16:03.662 START TEST raid_state_function_test 00:16:03.662 ************************************ 00:16:03.662 11:26:21 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:03.662 11:26:21 -- 
bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=84583 00:16:03.662 Process raid pid: 84583 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84583' 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84583 /var/tmp/spdk-raid.sock 00:16:03.662 11:26:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:03.662 11:26:21 -- common/autotest_common.sh@829 -- # '[' -z 84583 ']' 00:16:03.662 11:26:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:03.662 11:26:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:03.662 11:26:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:03.662 11:26:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.662 11:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:03.921 [2024-11-26 11:26:21.936691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:03.921 [2024-11-26 11:26:21.936869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.921 [2024-11-26 11:26:22.088966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.921 [2024-11-26 11:26:22.123572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.921 [2024-11-26 11:26:22.156322] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.858 11:26:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.858 11:26:22 -- common/autotest_common.sh@862 -- # return 0 00:16:04.858 11:26:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:04.858 [2024-11-26 11:26:23.085599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.858 [2024-11-26 11:26:23.085685] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.858 [2024-11-26 11:26:23.085719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.858 [2024-11-26 11:26:23.085732] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.858 [2024-11-26 11:26:23.085744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.858 [2024-11-26 11:26:23.085756] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.858 [2024-11-26 11:26:23.085769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.858 [2024-11-26 11:26:23.085780] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.116 
11:26:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.116 11:26:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.117 11:26:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.117 11:26:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.117 11:26:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.117 11:26:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.117 "name": "Existed_Raid", 00:16:05.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.117 "strip_size_kb": 64, 00:16:05.117 "state": "configuring", 00:16:05.117 "raid_level": "raid0", 00:16:05.117 "superblock": false, 00:16:05.117 "num_base_bdevs": 4, 00:16:05.117 "num_base_bdevs_discovered": 0, 00:16:05.117 "num_base_bdevs_operational": 4, 00:16:05.117 "base_bdevs_list": [ 00:16:05.117 { 00:16:05.117 "name": "BaseBdev1", 00:16:05.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.117 "is_configured": false, 00:16:05.117 "data_offset": 0, 00:16:05.117 "data_size": 0 00:16:05.117 }, 00:16:05.117 { 00:16:05.117 "name": "BaseBdev2", 00:16:05.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.117 "is_configured": false, 00:16:05.117 "data_offset": 0, 00:16:05.117 "data_size": 0 00:16:05.117 }, 00:16:05.117 { 00:16:05.117 "name": "BaseBdev3", 00:16:05.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.117 "is_configured": false, 00:16:05.117 "data_offset": 0, 00:16:05.117 "data_size": 0 00:16:05.117 }, 00:16:05.117 { 00:16:05.117 "name": "BaseBdev4", 00:16:05.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.117 "is_configured": false, 00:16:05.117 "data_offset": 0, 00:16:05.117 "data_size": 0 00:16:05.117 } 00:16:05.117 ] 00:16:05.117 }' 00:16:05.117 11:26:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.117 11:26:23 -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 11:26:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:05.682 [2024-11-26 11:26:23.857675] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.682 [2024-11-26 11:26:23.857742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:05.682 11:26:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:05.941 [2024-11-26 11:26:24.073771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.941 [2024-11-26 11:26:24.073846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.941 [2024-11-26 11:26:24.073880] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.941 [2024-11-26 11:26:24.073905] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.941 [2024-11-26 
11:26:24.073919] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.941 [2024-11-26 11:26:24.073930] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.941 [2024-11-26 11:26:24.073943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.941 [2024-11-26 11:26:24.073954] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.941 11:26:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.199 [2024-11-26 11:26:24.296753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.199 BaseBdev1 00:16:06.199 11:26:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:06.199 11:26:24 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:06.199 11:26:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.199 11:26:24 -- common/autotest_common.sh@899 -- # local i 00:16:06.199 11:26:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.199 11:26:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.199 11:26:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.457 11:26:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.715 [ 00:16:06.715 { 00:16:06.715 "name": "BaseBdev1", 00:16:06.715 "aliases": [ 00:16:06.715 "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd" 00:16:06.715 ], 00:16:06.715 "product_name": "Malloc disk", 00:16:06.715 "block_size": 512, 00:16:06.715 "num_blocks": 65536, 00:16:06.715 "uuid": "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd", 00:16:06.715 "assigned_rate_limits": { 00:16:06.715 "rw_ios_per_sec": 0, 00:16:06.715 "rw_mbytes_per_sec": 0, 00:16:06.715 "r_mbytes_per_sec": 0, 00:16:06.715 "w_mbytes_per_sec": 0 00:16:06.715 }, 00:16:06.715 "claimed": true, 00:16:06.715 "claim_type": "exclusive_write", 00:16:06.715 "zoned": false, 00:16:06.715 "supported_io_types": { 00:16:06.715 "read": true, 00:16:06.715 "write": true, 00:16:06.715 "unmap": true, 00:16:06.715 "write_zeroes": true, 00:16:06.715 "flush": true, 00:16:06.715 "reset": true, 00:16:06.715 "compare": false, 00:16:06.715 "compare_and_write": false, 00:16:06.715 "abort": true, 00:16:06.715 "nvme_admin": false, 00:16:06.715 "nvme_io": false 00:16:06.715 }, 00:16:06.715 "memory_domains": [ 00:16:06.715 { 00:16:06.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.715 "dma_device_type": 2 00:16:06.715 } 00:16:06.715 ], 00:16:06.715 "driver_specific": {} 00:16:06.715 } 00:16:06.715 ] 00:16:06.715 11:26:24 -- common/autotest_common.sh@905 -- # return 0 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.715 "name": "Existed_Raid", 00:16:06.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.715 "strip_size_kb": 64, 00:16:06.715 "state": "configuring", 00:16:06.715 "raid_level": "raid0", 00:16:06.715 "superblock": false, 00:16:06.715 "num_base_bdevs": 4, 00:16:06.715 "num_base_bdevs_discovered": 1, 00:16:06.715 "num_base_bdevs_operational": 4, 00:16:06.715 "base_bdevs_list": [ 00:16:06.715 { 00:16:06.715 "name": "BaseBdev1", 00:16:06.715 "uuid": "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd", 00:16:06.715 "is_configured": true, 00:16:06.715 "data_offset": 0, 00:16:06.715 "data_size": 65536 00:16:06.715 }, 00:16:06.715 { 00:16:06.715 "name": "BaseBdev2", 00:16:06.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.715 "is_configured": false, 00:16:06.715 "data_offset": 0, 00:16:06.715 "data_size": 0 00:16:06.715 }, 00:16:06.715 { 00:16:06.715 "name": "BaseBdev3", 00:16:06.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.715 "is_configured": false, 00:16:06.715 "data_offset": 0, 00:16:06.715 "data_size": 0 00:16:06.715 }, 00:16:06.715 { 00:16:06.715 "name": "BaseBdev4", 00:16:06.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.715 "is_configured": false, 00:16:06.715 "data_offset": 0, 00:16:06.715 "data_size": 0 00:16:06.715 } 00:16:06.715 ] 00:16:06.715 }' 00:16:06.715 11:26:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.715 11:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 11:26:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.231 [2024-11-26 11:26:25.397157] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.231 [2024-11-26 11:26:25.397240] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:07.231 11:26:25 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:07.231 11:26:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:07.490 [2024-11-26 11:26:25.597241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.490 [2024-11-26 11:26:25.599410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.490 [2024-11-26 11:26:25.599473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.490 [2024-11-26 11:26:25.599531] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:07.490 [2024-11-26 11:26:25.599547] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:07.490 [2024-11-26 11:26:25.599559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:07.490 [2024-11-26 11:26:25.599570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.490 11:26:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.748 11:26:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.748 "name": "Existed_Raid", 00:16:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.748 "strip_size_kb": 64, 00:16:07.748 "state": "configuring", 00:16:07.748 "raid_level": "raid0", 00:16:07.748 "superblock": false, 00:16:07.748 "num_base_bdevs": 4, 00:16:07.748 "num_base_bdevs_discovered": 1, 00:16:07.748 "num_base_bdevs_operational": 4, 00:16:07.748 "base_bdevs_list": [ 00:16:07.748 { 00:16:07.748 "name": "BaseBdev1", 00:16:07.748 "uuid": "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd", 00:16:07.748 "is_configured": true, 00:16:07.748 "data_offset": 0, 00:16:07.748 "data_size": 65536 00:16:07.748 }, 00:16:07.748 { 00:16:07.748 "name": "BaseBdev2", 00:16:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.748 "is_configured": false, 00:16:07.748 "data_offset": 0, 00:16:07.748 "data_size": 0 00:16:07.748 }, 00:16:07.748 { 00:16:07.748 "name": "BaseBdev3", 00:16:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.748 "is_configured": false, 00:16:07.748 "data_offset": 0, 00:16:07.748 "data_size": 0 00:16:07.748 }, 00:16:07.748 { 00:16:07.748 "name": "BaseBdev4", 00:16:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.748 "is_configured": false, 00:16:07.748 "data_offset": 0, 00:16:07.748 "data_size": 0 00:16:07.748 } 00:16:07.748 ] 00:16:07.748 }' 00:16:07.748 11:26:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.748 11:26:25 -- common/autotest_common.sh@10 -- # set +x 00:16:08.006 11:26:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.264 [2024-11-26 11:26:26.329690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.264 BaseBdev2 00:16:08.264 11:26:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:08.264 11:26:26 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:08.264 11:26:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:08.264 11:26:26 -- common/autotest_common.sh@899 -- # local i 00:16:08.264 11:26:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:08.264 11:26:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:08.264 11:26:26 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.521 11:26:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.779 [ 00:16:08.779 { 00:16:08.779 "name": "BaseBdev2", 00:16:08.779 "aliases": [ 00:16:08.779 "7381676d-31f7-47ec-9cb1-4e5a7eb63bb3" 00:16:08.779 ], 00:16:08.779 "product_name": "Malloc disk", 00:16:08.779 "block_size": 512, 00:16:08.779 "num_blocks": 65536, 00:16:08.779 "uuid": "7381676d-31f7-47ec-9cb1-4e5a7eb63bb3", 00:16:08.779 "assigned_rate_limits": { 00:16:08.779 "rw_ios_per_sec": 0, 00:16:08.779 "rw_mbytes_per_sec": 0, 00:16:08.779 "r_mbytes_per_sec": 0, 00:16:08.779 "w_mbytes_per_sec": 0 00:16:08.779 }, 00:16:08.779 "claimed": true, 00:16:08.779 "claim_type": "exclusive_write", 00:16:08.779 "zoned": false, 00:16:08.779 "supported_io_types": { 00:16:08.779 "read": true, 00:16:08.779 "write": true, 00:16:08.779 "unmap": true, 00:16:08.779 "write_zeroes": true, 00:16:08.779 "flush": true, 00:16:08.779 "reset": true, 00:16:08.779 "compare": false, 00:16:08.779 "compare_and_write": false, 00:16:08.779 "abort": true, 00:16:08.779 "nvme_admin": false, 00:16:08.779 "nvme_io": false 00:16:08.779 }, 00:16:08.779 "memory_domains": [ 00:16:08.779 { 00:16:08.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.780 "dma_device_type": 2 00:16:08.780 } 00:16:08.780 ], 00:16:08.780 "driver_specific": {} 00:16:08.780 } 00:16:08.780 ] 00:16:08.780 11:26:26 -- common/autotest_common.sh@905 -- # return 0 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.780 11:26:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.780 11:26:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.780 "name": "Existed_Raid", 00:16:08.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.780 "strip_size_kb": 64, 00:16:08.780 "state": "configuring", 00:16:08.780 "raid_level": "raid0", 00:16:08.780 "superblock": false, 00:16:08.780 "num_base_bdevs": 4, 00:16:08.780 "num_base_bdevs_discovered": 2, 00:16:08.780 "num_base_bdevs_operational": 4, 00:16:08.780 "base_bdevs_list": [ 00:16:08.780 { 00:16:08.780 "name": "BaseBdev1", 00:16:08.780 "uuid": "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd", 00:16:08.780 "is_configured": true, 00:16:08.780 "data_offset": 0, 00:16:08.780 "data_size": 65536 00:16:08.780 }, 00:16:08.780 { 00:16:08.780 "name": "BaseBdev2", 00:16:08.780 "uuid": 
"7381676d-31f7-47ec-9cb1-4e5a7eb63bb3", 00:16:08.780 "is_configured": true, 00:16:08.780 "data_offset": 0, 00:16:08.780 "data_size": 65536 00:16:08.780 }, 00:16:08.780 { 00:16:08.780 "name": "BaseBdev3", 00:16:08.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.780 "is_configured": false, 00:16:08.780 "data_offset": 0, 00:16:08.780 "data_size": 0 00:16:08.780 }, 00:16:08.780 { 00:16:08.780 "name": "BaseBdev4", 00:16:08.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.780 "is_configured": false, 00:16:08.780 "data_offset": 0, 00:16:08.780 "data_size": 0 00:16:08.780 } 00:16:08.780 ] 00:16:08.780 }' 00:16:08.780 11:26:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.780 11:26:27 -- common/autotest_common.sh@10 -- # set +x 00:16:09.344 11:26:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.344 [2024-11-26 11:26:27.550713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.344 BaseBdev3 00:16:09.344 11:26:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:09.344 11:26:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:09.344 11:26:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:09.344 11:26:27 -- common/autotest_common.sh@899 -- # local i 00:16:09.344 11:26:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:09.344 11:26:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:09.344 11:26:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.602 11:26:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.859 [ 00:16:09.860 { 00:16:09.860 "name": "BaseBdev3", 00:16:09.860 "aliases": [ 00:16:09.860 "c0a8043d-7c34-4211-b81a-492ab61296a8" 00:16:09.860 ], 00:16:09.860 "product_name": "Malloc disk", 00:16:09.860 "block_size": 512, 00:16:09.860 "num_blocks": 65536, 00:16:09.860 "uuid": "c0a8043d-7c34-4211-b81a-492ab61296a8", 00:16:09.860 "assigned_rate_limits": { 00:16:09.860 "rw_ios_per_sec": 0, 00:16:09.860 "rw_mbytes_per_sec": 0, 00:16:09.860 "r_mbytes_per_sec": 0, 00:16:09.860 "w_mbytes_per_sec": 0 00:16:09.860 }, 00:16:09.860 "claimed": true, 00:16:09.860 "claim_type": "exclusive_write", 00:16:09.860 "zoned": false, 00:16:09.860 "supported_io_types": { 00:16:09.860 "read": true, 00:16:09.860 "write": true, 00:16:09.860 "unmap": true, 00:16:09.860 "write_zeroes": true, 00:16:09.860 "flush": true, 00:16:09.860 "reset": true, 00:16:09.860 "compare": false, 00:16:09.860 "compare_and_write": false, 00:16:09.860 "abort": true, 00:16:09.860 "nvme_admin": false, 00:16:09.860 "nvme_io": false 00:16:09.860 }, 00:16:09.860 "memory_domains": [ 00:16:09.860 { 00:16:09.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.860 "dma_device_type": 2 00:16:09.860 } 00:16:09.860 ], 00:16:09.860 "driver_specific": {} 00:16:09.860 } 00:16:09.860 ] 00:16:09.860 11:26:28 -- common/autotest_common.sh@905 -- # return 0 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.860 11:26:28 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.860 11:26:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.118 11:26:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.118 "name": "Existed_Raid", 00:16:10.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.118 "strip_size_kb": 64, 00:16:10.118 "state": "configuring", 00:16:10.118 "raid_level": "raid0", 00:16:10.118 "superblock": false, 00:16:10.118 "num_base_bdevs": 4, 00:16:10.118 "num_base_bdevs_discovered": 3, 00:16:10.118 "num_base_bdevs_operational": 4, 00:16:10.118 "base_bdevs_list": [ 00:16:10.118 { 00:16:10.118 "name": "BaseBdev1", 00:16:10.118 "uuid": "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd", 00:16:10.118 "is_configured": true, 00:16:10.118 "data_offset": 0, 00:16:10.118 "data_size": 65536 00:16:10.118 }, 00:16:10.118 { 00:16:10.118 "name": "BaseBdev2", 00:16:10.118 "uuid": "7381676d-31f7-47ec-9cb1-4e5a7eb63bb3", 00:16:10.118 "is_configured": true, 00:16:10.118 "data_offset": 0, 00:16:10.118 "data_size": 65536 00:16:10.118 }, 00:16:10.118 { 00:16:10.118 "name": "BaseBdev3", 00:16:10.118 "uuid": "c0a8043d-7c34-4211-b81a-492ab61296a8", 00:16:10.118 "is_configured": true, 00:16:10.118 "data_offset": 0, 00:16:10.118 "data_size": 65536 00:16:10.118 }, 00:16:10.118 { 00:16:10.118 "name": "BaseBdev4", 00:16:10.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.118 "is_configured": false, 00:16:10.118 "data_offset": 0, 00:16:10.118 "data_size": 0 00:16:10.118 } 00:16:10.118 ] 00:16:10.118 }' 00:16:10.118 11:26:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.118 11:26:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.376 11:26:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:10.635 [2024-11-26 11:26:28.844827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:10.635 [2024-11-26 11:26:28.844877] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:10.635 [2024-11-26 11:26:28.844906] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:10.635 [2024-11-26 11:26:28.845075] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:10.635 [2024-11-26 11:26:28.845498] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:10.635 [2024-11-26 11:26:28.845517] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:10.635 [2024-11-26 11:26:28.845767] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.635 BaseBdev4 00:16:10.635 11:26:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:10.635 11:26:28 -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:10.635 11:26:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.635 11:26:28 -- common/autotest_common.sh@899 -- # local i 00:16:10.635 11:26:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.635 11:26:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.635 11:26:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.893 11:26:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:11.163 [ 00:16:11.163 { 00:16:11.163 "name": "BaseBdev4", 00:16:11.163 "aliases": [ 00:16:11.163 "e7b38263-09f3-4745-8a12-d820497da9ee" 00:16:11.163 ], 00:16:11.163 "product_name": "Malloc disk", 00:16:11.163 "block_size": 512, 00:16:11.163 "num_blocks": 65536, 00:16:11.163 "uuid": "e7b38263-09f3-4745-8a12-d820497da9ee", 00:16:11.163 "assigned_rate_limits": { 00:16:11.163 "rw_ios_per_sec": 0, 00:16:11.163 "rw_mbytes_per_sec": 0, 00:16:11.163 "r_mbytes_per_sec": 0, 00:16:11.163 "w_mbytes_per_sec": 0 00:16:11.163 }, 00:16:11.163 "claimed": true, 00:16:11.163 "claim_type": "exclusive_write", 00:16:11.163 "zoned": false, 00:16:11.163 "supported_io_types": { 00:16:11.163 "read": true, 00:16:11.163 "write": true, 00:16:11.163 "unmap": true, 00:16:11.163 "write_zeroes": true, 00:16:11.163 "flush": true, 00:16:11.163 "reset": true, 00:16:11.163 "compare": false, 00:16:11.163 "compare_and_write": false, 00:16:11.163 "abort": true, 00:16:11.163 "nvme_admin": false, 00:16:11.163 "nvme_io": false 00:16:11.163 }, 00:16:11.163 "memory_domains": [ 00:16:11.163 { 00:16:11.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.163 "dma_device_type": 2 00:16:11.163 } 00:16:11.163 ], 00:16:11.163 "driver_specific": {} 00:16:11.163 } 00:16:11.163 ] 00:16:11.163 11:26:29 -- common/autotest_common.sh@905 -- # return 0 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:11.163 11:26:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.164 11:26:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.164 11:26:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.164 11:26:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.164 11:26:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.164 11:26:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.427 11:26:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.427 "name": "Existed_Raid", 00:16:11.427 "uuid": "c55f3564-5e07-49d6-bf36-200251a95406", 00:16:11.427 "strip_size_kb": 64, 00:16:11.427 "state": "online", 00:16:11.427 "raid_level": "raid0", 00:16:11.427 "superblock": false, 00:16:11.427 "num_base_bdevs": 4, 00:16:11.427 
"num_base_bdevs_discovered": 4, 00:16:11.427 "num_base_bdevs_operational": 4, 00:16:11.427 "base_bdevs_list": [ 00:16:11.427 { 00:16:11.427 "name": "BaseBdev1", 00:16:11.427 "uuid": "f2a697e8-1f43-4ae5-b85c-e09c0a06fdcd", 00:16:11.427 "is_configured": true, 00:16:11.427 "data_offset": 0, 00:16:11.427 "data_size": 65536 00:16:11.427 }, 00:16:11.427 { 00:16:11.427 "name": "BaseBdev2", 00:16:11.427 "uuid": "7381676d-31f7-47ec-9cb1-4e5a7eb63bb3", 00:16:11.427 "is_configured": true, 00:16:11.427 "data_offset": 0, 00:16:11.427 "data_size": 65536 00:16:11.427 }, 00:16:11.427 { 00:16:11.427 "name": "BaseBdev3", 00:16:11.427 "uuid": "c0a8043d-7c34-4211-b81a-492ab61296a8", 00:16:11.427 "is_configured": true, 00:16:11.427 "data_offset": 0, 00:16:11.427 "data_size": 65536 00:16:11.427 }, 00:16:11.427 { 00:16:11.427 "name": "BaseBdev4", 00:16:11.427 "uuid": "e7b38263-09f3-4745-8a12-d820497da9ee", 00:16:11.427 "is_configured": true, 00:16:11.428 "data_offset": 0, 00:16:11.428 "data_size": 65536 00:16:11.428 } 00:16:11.428 ] 00:16:11.428 }' 00:16:11.428 11:26:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.428 11:26:29 -- common/autotest_common.sh@10 -- # set +x 00:16:11.686 11:26:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:11.945 [2024-11-26 11:26:30.129477] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.945 [2024-11-26 11:26:30.129520] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.945 [2024-11-26 11:26:30.129575] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.945 11:26:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.204 11:26:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.204 "name": "Existed_Raid", 00:16:12.204 "uuid": "c55f3564-5e07-49d6-bf36-200251a95406", 00:16:12.204 "strip_size_kb": 64, 00:16:12.204 "state": "offline", 00:16:12.204 "raid_level": "raid0", 00:16:12.204 "superblock": false, 00:16:12.204 "num_base_bdevs": 4, 00:16:12.204 "num_base_bdevs_discovered": 3, 00:16:12.204 "num_base_bdevs_operational": 3, 00:16:12.204 "base_bdevs_list": [ 00:16:12.204 { 
00:16:12.204 "name": null, 00:16:12.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.204 "is_configured": false, 00:16:12.204 "data_offset": 0, 00:16:12.204 "data_size": 65536 00:16:12.204 }, 00:16:12.204 { 00:16:12.204 "name": "BaseBdev2", 00:16:12.204 "uuid": "7381676d-31f7-47ec-9cb1-4e5a7eb63bb3", 00:16:12.204 "is_configured": true, 00:16:12.204 "data_offset": 0, 00:16:12.204 "data_size": 65536 00:16:12.204 }, 00:16:12.204 { 00:16:12.204 "name": "BaseBdev3", 00:16:12.204 "uuid": "c0a8043d-7c34-4211-b81a-492ab61296a8", 00:16:12.204 "is_configured": true, 00:16:12.204 "data_offset": 0, 00:16:12.204 "data_size": 65536 00:16:12.204 }, 00:16:12.204 { 00:16:12.204 "name": "BaseBdev4", 00:16:12.204 "uuid": "e7b38263-09f3-4745-8a12-d820497da9ee", 00:16:12.204 "is_configured": true, 00:16:12.204 "data_offset": 0, 00:16:12.204 "data_size": 65536 00:16:12.204 } 00:16:12.204 ] 00:16:12.204 }' 00:16:12.204 11:26:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.204 11:26:30 -- common/autotest_common.sh@10 -- # set +x 00:16:12.463 11:26:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:12.463 11:26:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:12.463 11:26:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.463 11:26:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:12.721 11:26:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:12.721 11:26:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.721 11:26:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:12.980 [2024-11-26 11:26:31.157303] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.980 11:26:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:12.980 11:26:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:12.980 11:26:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.980 11:26:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:13.238 11:26:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:13.238 11:26:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.238 11:26:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:13.497 [2024-11-26 11:26:31.616862] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:13.497 11:26:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:13.497 11:26:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:13.497 11:26:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.497 11:26:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:13.790 11:26:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:13.790 11:26:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.790 11:26:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:14.049 [2024-11-26 11:26:32.071964] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:14.049 [2024-11-26 11:26:32.072043] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 
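Note on the teardown just traced: raid0 carries no redundancy (has_redundancy raid0 returns 1 above), so deleting a single base bdev (BaseBdev1) drops Existed_Raid from online to offline, and the test then walks the remaining members with bdev_malloc_delete until raid_bdev_cleanup frees the array. A minimal hand-driven sketch of the same sequence follows; it assumes a bdev_svc target already listening on /var/tmp/spdk-raid.sock with the four-member raid0 Existed_Raid configured, and the rpc wrapper function is introduced here purely for brevity (it is not part of the test scripts):

# sketch only; reproduces the teardown traced above under the stated assumptions
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# raid0 has no redundancy, so losing any one base bdev takes the whole array offline
rpc bdev_malloc_delete BaseBdev1
rpc bdev_raid_get_bdevs all | jq -r '.[0]["state"]'   # expected to print: offline

# remove the surviving members; once the last one is gone the raid bdev
# cleans itself up (the "raid_bdev_cleanup ... state offline" message above)
for b in BaseBdev2 BaseBdev3 BaseBdev4; do
  rpc bdev_malloc_delete "$b"
done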
00:16:14.049 11:26:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:14.049 11:26:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:14.049 11:26:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:14.049 11:26:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.307 11:26:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:14.307 11:26:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:14.307 11:26:32 -- bdev/bdev_raid.sh@287 -- # killprocess 84583 00:16:14.307 11:26:32 -- common/autotest_common.sh@936 -- # '[' -z 84583 ']' 00:16:14.307 11:26:32 -- common/autotest_common.sh@940 -- # kill -0 84583 00:16:14.307 11:26:32 -- common/autotest_common.sh@941 -- # uname 00:16:14.307 11:26:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.307 11:26:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84583 00:16:14.307 killing process with pid 84583 00:16:14.307 11:26:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:14.307 11:26:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:14.307 11:26:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84583' 00:16:14.307 11:26:32 -- common/autotest_common.sh@955 -- # kill 84583 00:16:14.307 [2024-11-26 11:26:32.331596] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.307 11:26:32 -- common/autotest_common.sh@960 -- # wait 84583 00:16:14.307 [2024-11-26 11:26:32.331674] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.307 11:26:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:14.307 00:16:14.307 real 0m10.636s 00:16:14.307 user 0m18.694s 00:16:14.307 sys 0m1.718s 00:16:14.307 ************************************ 00:16:14.307 END TEST raid_state_function_test 00:16:14.307 ************************************ 00:16:14.307 11:26:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:14.308 11:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:16:14.566 11:26:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:14.566 11:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.566 11:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:14.566 ************************************ 00:16:14.566 START TEST raid_state_function_test_sb 00:16:14.566 ************************************ 00:16:14.566 11:26:32 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:14.566 11:26:32 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:14.566 Process raid pid: 84966 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=84966 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84966' 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84966 /var/tmp/spdk-raid.sock 00:16:14.566 11:26:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:14.566 11:26:32 -- common/autotest_common.sh@829 -- # '[' -z 84966 ']' 00:16:14.566 11:26:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:14.566 11:26:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.566 11:26:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:14.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:14.566 11:26:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.566 11:26:32 -- common/autotest_common.sh@10 -- # set +x 00:16:14.566 [2024-11-26 11:26:32.633679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
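Before the trace continues: this second pass (raid_state_function_test_sb) repeats the raid0 state checks with superblock=true, so the helper passes -s to bdev_raid_create, reserving room on each base bdev for a raid superblock. The visible consequence in the descriptors that follow is data_offset 2048 / data_size 63488 on the 65536-block (512-byte-block) malloc bdevs, versus data_offset 0 / data_size 65536 in the run that just ended. A hedged side-by-side of the two invocations, assuming the same rpc.py socket used throughout (each would be issued against a fresh target, since the name Existed_Raid can exist only once):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# no superblock: every malloc block is data (data_offset 0, data_size 65536)
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# with -s: a raid superblock occupies the first 2048 blocks of each base bdev,
# leaving 65536 - 2048 = 63488 data blocks per member
rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid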
00:16:14.566 [2024-11-26 11:26:32.634091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.566 [2024-11-26 11:26:32.798157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.824 [2024-11-26 11:26:32.835662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.824 [2024-11-26 11:26:32.869315] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.392 11:26:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.392 11:26:33 -- common/autotest_common.sh@862 -- # return 0 00:16:15.392 11:26:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:15.652 [2024-11-26 11:26:33.818600] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.652 [2024-11-26 11:26:33.818848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.652 [2024-11-26 11:26:33.818925] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.652 [2024-11-26 11:26:33.818943] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.652 [2024-11-26 11:26:33.818970] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:15.652 [2024-11-26 11:26:33.818984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.652 [2024-11-26 11:26:33.818997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:15.652 [2024-11-26 11:26:33.819007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.652 11:26:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.911 11:26:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.911 "name": "Existed_Raid", 00:16:15.911 "uuid": "83d77cf6-2f86-49d9-b41b-637be3d5506a", 00:16:15.911 "strip_size_kb": 64, 00:16:15.911 "state": "configuring", 00:16:15.911 "raid_level": "raid0", 00:16:15.911 "superblock": true, 00:16:15.911 "num_base_bdevs": 4, 00:16:15.911 "num_base_bdevs_discovered": 0, 00:16:15.911 "num_base_bdevs_operational": 4, 00:16:15.911 "base_bdevs_list": [ 00:16:15.911 { 00:16:15.911 
"name": "BaseBdev1", 00:16:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.911 "is_configured": false, 00:16:15.911 "data_offset": 0, 00:16:15.911 "data_size": 0 00:16:15.911 }, 00:16:15.911 { 00:16:15.911 "name": "BaseBdev2", 00:16:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.911 "is_configured": false, 00:16:15.911 "data_offset": 0, 00:16:15.911 "data_size": 0 00:16:15.911 }, 00:16:15.911 { 00:16:15.911 "name": "BaseBdev3", 00:16:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.911 "is_configured": false, 00:16:15.911 "data_offset": 0, 00:16:15.911 "data_size": 0 00:16:15.911 }, 00:16:15.911 { 00:16:15.911 "name": "BaseBdev4", 00:16:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.911 "is_configured": false, 00:16:15.911 "data_offset": 0, 00:16:15.911 "data_size": 0 00:16:15.911 } 00:16:15.911 ] 00:16:15.911 }' 00:16:15.911 11:26:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.911 11:26:34 -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 11:26:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:16.478 [2024-11-26 11:26:34.590733] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.478 [2024-11-26 11:26:34.590790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:16.478 11:26:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:16.738 [2024-11-26 11:26:34.834812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.738 [2024-11-26 11:26:34.834857] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.738 [2024-11-26 11:26:34.835099] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.738 [2024-11-26 11:26:34.835131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.738 [2024-11-26 11:26:34.835149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.738 [2024-11-26 11:26:34.835161] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.738 [2024-11-26 11:26:34.835173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.738 [2024-11-26 11:26:34.835184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.738 11:26:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.997 BaseBdev1 00:16:16.997 [2024-11-26 11:26:35.032930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.997 11:26:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:16.997 11:26:35 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:16.997 11:26:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:16.997 11:26:35 -- common/autotest_common.sh@899 -- # local i 00:16:16.997 11:26:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:16.997 11:26:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:16.997 11:26:35 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.256 11:26:35 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.515 [ 00:16:17.515 { 00:16:17.515 "name": "BaseBdev1", 00:16:17.515 "aliases": [ 00:16:17.515 "5e96cfbd-e685-45af-a0a3-8dec1e080309" 00:16:17.515 ], 00:16:17.515 "product_name": "Malloc disk", 00:16:17.515 "block_size": 512, 00:16:17.515 "num_blocks": 65536, 00:16:17.515 "uuid": "5e96cfbd-e685-45af-a0a3-8dec1e080309", 00:16:17.515 "assigned_rate_limits": { 00:16:17.515 "rw_ios_per_sec": 0, 00:16:17.515 "rw_mbytes_per_sec": 0, 00:16:17.515 "r_mbytes_per_sec": 0, 00:16:17.515 "w_mbytes_per_sec": 0 00:16:17.515 }, 00:16:17.515 "claimed": true, 00:16:17.515 "claim_type": "exclusive_write", 00:16:17.515 "zoned": false, 00:16:17.515 "supported_io_types": { 00:16:17.515 "read": true, 00:16:17.515 "write": true, 00:16:17.515 "unmap": true, 00:16:17.515 "write_zeroes": true, 00:16:17.515 "flush": true, 00:16:17.515 "reset": true, 00:16:17.515 "compare": false, 00:16:17.515 "compare_and_write": false, 00:16:17.515 "abort": true, 00:16:17.515 "nvme_admin": false, 00:16:17.515 "nvme_io": false 00:16:17.515 }, 00:16:17.515 "memory_domains": [ 00:16:17.515 { 00:16:17.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.515 "dma_device_type": 2 00:16:17.515 } 00:16:17.515 ], 00:16:17.515 "driver_specific": {} 00:16:17.515 } 00:16:17.515 ] 00:16:17.515 11:26:35 -- common/autotest_common.sh@905 -- # return 0 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.515 11:26:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.774 11:26:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.774 "name": "Existed_Raid", 00:16:17.774 "uuid": "c22be2a4-b8d6-45d8-b86c-edf47cb892b7", 00:16:17.774 "strip_size_kb": 64, 00:16:17.774 "state": "configuring", 00:16:17.774 "raid_level": "raid0", 00:16:17.774 "superblock": true, 00:16:17.774 "num_base_bdevs": 4, 00:16:17.774 "num_base_bdevs_discovered": 1, 00:16:17.774 "num_base_bdevs_operational": 4, 00:16:17.774 "base_bdevs_list": [ 00:16:17.774 { 00:16:17.774 "name": "BaseBdev1", 00:16:17.774 "uuid": "5e96cfbd-e685-45af-a0a3-8dec1e080309", 00:16:17.774 "is_configured": true, 00:16:17.774 "data_offset": 2048, 00:16:17.774 "data_size": 63488 00:16:17.774 }, 00:16:17.774 { 00:16:17.774 "name": "BaseBdev2", 00:16:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.774 "is_configured": false, 00:16:17.774 "data_offset": 0, 00:16:17.774 "data_size": 0 00:16:17.774 }, 
00:16:17.774 { 00:16:17.774 "name": "BaseBdev3", 00:16:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.774 "is_configured": false, 00:16:17.774 "data_offset": 0, 00:16:17.774 "data_size": 0 00:16:17.774 }, 00:16:17.774 { 00:16:17.774 "name": "BaseBdev4", 00:16:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.774 "is_configured": false, 00:16:17.774 "data_offset": 0, 00:16:17.774 "data_size": 0 00:16:17.774 } 00:16:17.774 ] 00:16:17.774 }' 00:16:17.774 11:26:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.774 11:26:35 -- common/autotest_common.sh@10 -- # set +x 00:16:18.033 11:26:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:18.292 [2024-11-26 11:26:36.321503] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.292 [2024-11-26 11:26:36.321565] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:18.292 11:26:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:18.292 11:26:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:18.551 11:26:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.551 BaseBdev1 00:16:18.810 11:26:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:18.810 11:26:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:18.810 11:26:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:18.810 11:26:36 -- common/autotest_common.sh@899 -- # local i 00:16:18.810 11:26:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:18.810 11:26:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:18.810 11:26:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.810 11:26:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.068 [ 00:16:19.068 { 00:16:19.068 "name": "BaseBdev1", 00:16:19.068 "aliases": [ 00:16:19.068 "4508f7cf-de86-435c-a622-04c909cebc55" 00:16:19.068 ], 00:16:19.068 "product_name": "Malloc disk", 00:16:19.068 "block_size": 512, 00:16:19.068 "num_blocks": 65536, 00:16:19.068 "uuid": "4508f7cf-de86-435c-a622-04c909cebc55", 00:16:19.068 "assigned_rate_limits": { 00:16:19.068 "rw_ios_per_sec": 0, 00:16:19.068 "rw_mbytes_per_sec": 0, 00:16:19.068 "r_mbytes_per_sec": 0, 00:16:19.068 "w_mbytes_per_sec": 0 00:16:19.068 }, 00:16:19.068 "claimed": false, 00:16:19.068 "zoned": false, 00:16:19.068 "supported_io_types": { 00:16:19.068 "read": true, 00:16:19.068 "write": true, 00:16:19.068 "unmap": true, 00:16:19.068 "write_zeroes": true, 00:16:19.068 "flush": true, 00:16:19.068 "reset": true, 00:16:19.068 "compare": false, 00:16:19.068 "compare_and_write": false, 00:16:19.068 "abort": true, 00:16:19.068 "nvme_admin": false, 00:16:19.068 "nvme_io": false 00:16:19.068 }, 00:16:19.068 "memory_domains": [ 00:16:19.068 { 00:16:19.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.068 "dma_device_type": 2 00:16:19.068 } 00:16:19.068 ], 00:16:19.068 "driver_specific": {} 00:16:19.068 } 00:16:19.068 ] 00:16:19.068 11:26:37 -- common/autotest_common.sh@905 -- # return 0 00:16:19.068 11:26:37 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:19.327 [2024-11-26 11:26:37.346161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.327 [2024-11-26 11:26:37.348296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.327 [2024-11-26 11:26:37.348501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.327 [2024-11-26 11:26:37.348532] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.327 [2024-11-26 11:26:37.348546] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.327 [2024-11-26 11:26:37.348557] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:19.327 [2024-11-26 11:26:37.348568] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.327 11:26:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.586 11:26:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.586 "name": "Existed_Raid", 00:16:19.586 "uuid": "6defc7db-81e9-434a-96f0-7d0857658b3f", 00:16:19.586 "strip_size_kb": 64, 00:16:19.586 "state": "configuring", 00:16:19.586 "raid_level": "raid0", 00:16:19.586 "superblock": true, 00:16:19.586 "num_base_bdevs": 4, 00:16:19.586 "num_base_bdevs_discovered": 1, 00:16:19.586 "num_base_bdevs_operational": 4, 00:16:19.586 "base_bdevs_list": [ 00:16:19.586 { 00:16:19.586 "name": "BaseBdev1", 00:16:19.586 "uuid": "4508f7cf-de86-435c-a622-04c909cebc55", 00:16:19.586 "is_configured": true, 00:16:19.586 "data_offset": 2048, 00:16:19.586 "data_size": 63488 00:16:19.586 }, 00:16:19.586 { 00:16:19.586 "name": "BaseBdev2", 00:16:19.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.586 "is_configured": false, 00:16:19.586 "data_offset": 0, 00:16:19.586 "data_size": 0 00:16:19.586 }, 00:16:19.586 { 00:16:19.586 "name": "BaseBdev3", 00:16:19.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.586 "is_configured": false, 00:16:19.586 "data_offset": 0, 00:16:19.586 "data_size": 0 00:16:19.586 }, 00:16:19.586 { 00:16:19.586 "name": "BaseBdev4", 00:16:19.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.586 "is_configured": 
false, 00:16:19.586 "data_offset": 0, 00:16:19.586 "data_size": 0 00:16:19.586 } 00:16:19.586 ] 00:16:19.586 }' 00:16:19.586 11:26:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.586 11:26:37 -- common/autotest_common.sh@10 -- # set +x 00:16:19.845 11:26:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:20.103 [2024-11-26 11:26:38.169780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.103 BaseBdev2 00:16:20.103 11:26:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:20.103 11:26:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:20.103 11:26:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:20.103 11:26:38 -- common/autotest_common.sh@899 -- # local i 00:16:20.103 11:26:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:20.103 11:26:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:20.103 11:26:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.362 11:26:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.362 [ 00:16:20.362 { 00:16:20.362 "name": "BaseBdev2", 00:16:20.362 "aliases": [ 00:16:20.362 "c98103c2-ea3a-43c6-b086-d7c925b3e0c5" 00:16:20.362 ], 00:16:20.362 "product_name": "Malloc disk", 00:16:20.362 "block_size": 512, 00:16:20.362 "num_blocks": 65536, 00:16:20.362 "uuid": "c98103c2-ea3a-43c6-b086-d7c925b3e0c5", 00:16:20.362 "assigned_rate_limits": { 00:16:20.362 "rw_ios_per_sec": 0, 00:16:20.362 "rw_mbytes_per_sec": 0, 00:16:20.362 "r_mbytes_per_sec": 0, 00:16:20.362 "w_mbytes_per_sec": 0 00:16:20.362 }, 00:16:20.362 "claimed": true, 00:16:20.362 "claim_type": "exclusive_write", 00:16:20.362 "zoned": false, 00:16:20.362 "supported_io_types": { 00:16:20.362 "read": true, 00:16:20.362 "write": true, 00:16:20.362 "unmap": true, 00:16:20.362 "write_zeroes": true, 00:16:20.362 "flush": true, 00:16:20.362 "reset": true, 00:16:20.362 "compare": false, 00:16:20.362 "compare_and_write": false, 00:16:20.362 "abort": true, 00:16:20.362 "nvme_admin": false, 00:16:20.362 "nvme_io": false 00:16:20.362 }, 00:16:20.363 "memory_domains": [ 00:16:20.363 { 00:16:20.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.363 "dma_device_type": 2 00:16:20.363 } 00:16:20.363 ], 00:16:20.363 "driver_specific": {} 00:16:20.363 } 00:16:20.363 ] 00:16:20.363 11:26:38 -- common/autotest_common.sh@905 -- # return 0 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.363 
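Each base bdev is attached through the same three-step pattern traced here: create a malloc bdev under the name the raid was configured with, let the examine path run so bdev_raid can claim it, then confirm the raid bdev is still in the configuring state with one more base discovered. A hedged sketch using the same RPCs as the trace (the loop form and the rpc_py shorthand are editorial, not part of the test script):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc_py bdev_malloc_create 32 512 -b "$bdev"   # 32 MB of 512-byte blocks
      $rpc_py bdev_wait_for_examine                  # lets bdev_raid claim it
      # The raid stays "configuring" until all four bases are discovered.
      $rpc_py bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
  done

With fewer than four bases the state stays configuring; the JSON dumps on either side of this trace show num_base_bdevs_discovered stepping from 1 to 2.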
11:26:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.363 11:26:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.622 11:26:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.622 "name": "Existed_Raid", 00:16:20.622 "uuid": "6defc7db-81e9-434a-96f0-7d0857658b3f", 00:16:20.622 "strip_size_kb": 64, 00:16:20.622 "state": "configuring", 00:16:20.622 "raid_level": "raid0", 00:16:20.622 "superblock": true, 00:16:20.622 "num_base_bdevs": 4, 00:16:20.622 "num_base_bdevs_discovered": 2, 00:16:20.622 "num_base_bdevs_operational": 4, 00:16:20.622 "base_bdevs_list": [ 00:16:20.622 { 00:16:20.622 "name": "BaseBdev1", 00:16:20.622 "uuid": "4508f7cf-de86-435c-a622-04c909cebc55", 00:16:20.622 "is_configured": true, 00:16:20.622 "data_offset": 2048, 00:16:20.622 "data_size": 63488 00:16:20.622 }, 00:16:20.622 { 00:16:20.622 "name": "BaseBdev2", 00:16:20.622 "uuid": "c98103c2-ea3a-43c6-b086-d7c925b3e0c5", 00:16:20.622 "is_configured": true, 00:16:20.622 "data_offset": 2048, 00:16:20.622 "data_size": 63488 00:16:20.622 }, 00:16:20.622 { 00:16:20.622 "name": "BaseBdev3", 00:16:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.622 "is_configured": false, 00:16:20.622 "data_offset": 0, 00:16:20.622 "data_size": 0 00:16:20.622 }, 00:16:20.622 { 00:16:20.622 "name": "BaseBdev4", 00:16:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.622 "is_configured": false, 00:16:20.622 "data_offset": 0, 00:16:20.622 "data_size": 0 00:16:20.622 } 00:16:20.622 ] 00:16:20.622 }' 00:16:20.622 11:26:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.622 11:26:38 -- common/autotest_common.sh@10 -- # set +x 00:16:21.190 11:26:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.190 [2024-11-26 11:26:39.378401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.190 BaseBdev3 00:16:21.190 11:26:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:21.190 11:26:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:21.190 11:26:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:21.190 11:26:39 -- common/autotest_common.sh@899 -- # local i 00:16:21.190 11:26:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:21.190 11:26:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:21.190 11:26:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.447 11:26:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.706 [ 00:16:21.706 { 00:16:21.706 "name": "BaseBdev3", 00:16:21.706 "aliases": [ 00:16:21.706 "50a4c53b-d167-4057-8aa9-bf7742646ac9" 00:16:21.706 ], 00:16:21.706 "product_name": "Malloc disk", 00:16:21.706 "block_size": 512, 00:16:21.706 "num_blocks": 65536, 00:16:21.706 "uuid": "50a4c53b-d167-4057-8aa9-bf7742646ac9", 00:16:21.706 "assigned_rate_limits": { 00:16:21.706 "rw_ios_per_sec": 0, 00:16:21.706 "rw_mbytes_per_sec": 0, 00:16:21.706 "r_mbytes_per_sec": 0, 00:16:21.706 "w_mbytes_per_sec": 0 00:16:21.706 }, 00:16:21.706 "claimed": true, 00:16:21.706 "claim_type": "exclusive_write", 00:16:21.706 "zoned": false, 
00:16:21.706 "supported_io_types": { 00:16:21.706 "read": true, 00:16:21.706 "write": true, 00:16:21.706 "unmap": true, 00:16:21.706 "write_zeroes": true, 00:16:21.706 "flush": true, 00:16:21.706 "reset": true, 00:16:21.706 "compare": false, 00:16:21.706 "compare_and_write": false, 00:16:21.706 "abort": true, 00:16:21.706 "nvme_admin": false, 00:16:21.706 "nvme_io": false 00:16:21.706 }, 00:16:21.706 "memory_domains": [ 00:16:21.706 { 00:16:21.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.706 "dma_device_type": 2 00:16:21.706 } 00:16:21.706 ], 00:16:21.706 "driver_specific": {} 00:16:21.706 } 00:16:21.706 ] 00:16:21.706 11:26:39 -- common/autotest_common.sh@905 -- # return 0 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.706 11:26:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.964 11:26:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.964 "name": "Existed_Raid", 00:16:21.964 "uuid": "6defc7db-81e9-434a-96f0-7d0857658b3f", 00:16:21.964 "strip_size_kb": 64, 00:16:21.964 "state": "configuring", 00:16:21.964 "raid_level": "raid0", 00:16:21.964 "superblock": true, 00:16:21.964 "num_base_bdevs": 4, 00:16:21.964 "num_base_bdevs_discovered": 3, 00:16:21.964 "num_base_bdevs_operational": 4, 00:16:21.964 "base_bdevs_list": [ 00:16:21.964 { 00:16:21.964 "name": "BaseBdev1", 00:16:21.964 "uuid": "4508f7cf-de86-435c-a622-04c909cebc55", 00:16:21.965 "is_configured": true, 00:16:21.965 "data_offset": 2048, 00:16:21.965 "data_size": 63488 00:16:21.965 }, 00:16:21.965 { 00:16:21.965 "name": "BaseBdev2", 00:16:21.965 "uuid": "c98103c2-ea3a-43c6-b086-d7c925b3e0c5", 00:16:21.965 "is_configured": true, 00:16:21.965 "data_offset": 2048, 00:16:21.965 "data_size": 63488 00:16:21.965 }, 00:16:21.965 { 00:16:21.965 "name": "BaseBdev3", 00:16:21.965 "uuid": "50a4c53b-d167-4057-8aa9-bf7742646ac9", 00:16:21.965 "is_configured": true, 00:16:21.965 "data_offset": 2048, 00:16:21.965 "data_size": 63488 00:16:21.965 }, 00:16:21.965 { 00:16:21.965 "name": "BaseBdev4", 00:16:21.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.965 "is_configured": false, 00:16:21.965 "data_offset": 0, 00:16:21.965 "data_size": 0 00:16:21.965 } 00:16:21.965 ] 00:16:21.965 }' 00:16:21.965 11:26:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.965 11:26:40 -- common/autotest_common.sh@10 -- # set +x 00:16:22.223 11:26:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:22.482 [2024-11-26 11:26:40.511567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.482 [2024-11-26 11:26:40.512092] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:22.482 BaseBdev4 00:16:22.482 [2024-11-26 11:26:40.513232] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:22.482 [2024-11-26 11:26:40.513818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:16:22.482 [2024-11-26 11:26:40.514983] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:22.482 [2024-11-26 11:26:40.515266] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:22.482 [2024-11-26 11:26:40.515764] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.482 11:26:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:22.482 11:26:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:22.482 11:26:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:22.482 11:26:40 -- common/autotest_common.sh@899 -- # local i 00:16:22.482 11:26:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:22.482 11:26:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:22.482 11:26:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.741 11:26:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:22.741 [ 00:16:22.741 { 00:16:22.741 "name": "BaseBdev4", 00:16:22.741 "aliases": [ 00:16:22.741 "ce9054b6-d040-43de-8603-a3b0f7808edc" 00:16:22.741 ], 00:16:22.741 "product_name": "Malloc disk", 00:16:22.741 "block_size": 512, 00:16:22.741 "num_blocks": 65536, 00:16:22.741 "uuid": "ce9054b6-d040-43de-8603-a3b0f7808edc", 00:16:22.741 "assigned_rate_limits": { 00:16:22.741 "rw_ios_per_sec": 0, 00:16:22.741 "rw_mbytes_per_sec": 0, 00:16:22.741 "r_mbytes_per_sec": 0, 00:16:22.741 "w_mbytes_per_sec": 0 00:16:22.741 }, 00:16:22.741 "claimed": true, 00:16:22.741 "claim_type": "exclusive_write", 00:16:22.741 "zoned": false, 00:16:22.741 "supported_io_types": { 00:16:22.741 "read": true, 00:16:22.741 "write": true, 00:16:22.741 "unmap": true, 00:16:22.741 "write_zeroes": true, 00:16:22.741 "flush": true, 00:16:22.741 "reset": true, 00:16:22.741 "compare": false, 00:16:22.741 "compare_and_write": false, 00:16:22.741 "abort": true, 00:16:22.741 "nvme_admin": false, 00:16:22.741 "nvme_io": false 00:16:22.741 }, 00:16:22.741 "memory_domains": [ 00:16:22.741 { 00:16:22.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.741 "dma_device_type": 2 00:16:22.741 } 00:16:22.741 ], 00:16:22.741 "driver_specific": {} 00:16:22.741 } 00:16:22.741 ] 00:16:22.741 11:26:40 -- common/autotest_common.sh@905 -- # return 0 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.741 11:26:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.999 11:26:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.999 "name": "Existed_Raid", 00:16:22.999 "uuid": "6defc7db-81e9-434a-96f0-7d0857658b3f", 00:16:22.999 "strip_size_kb": 64, 00:16:22.999 "state": "online", 00:16:22.999 "raid_level": "raid0", 00:16:22.999 "superblock": true, 00:16:22.999 "num_base_bdevs": 4, 00:16:22.999 "num_base_bdevs_discovered": 4, 00:16:22.999 "num_base_bdevs_operational": 4, 00:16:22.999 "base_bdevs_list": [ 00:16:22.999 { 00:16:22.999 "name": "BaseBdev1", 00:16:22.999 "uuid": "4508f7cf-de86-435c-a622-04c909cebc55", 00:16:22.999 "is_configured": true, 00:16:22.999 "data_offset": 2048, 00:16:22.999 "data_size": 63488 00:16:22.999 }, 00:16:22.999 { 00:16:22.999 "name": "BaseBdev2", 00:16:22.999 "uuid": "c98103c2-ea3a-43c6-b086-d7c925b3e0c5", 00:16:22.999 "is_configured": true, 00:16:22.999 "data_offset": 2048, 00:16:22.999 "data_size": 63488 00:16:22.999 }, 00:16:22.999 { 00:16:22.999 "name": "BaseBdev3", 00:16:22.999 "uuid": "50a4c53b-d167-4057-8aa9-bf7742646ac9", 00:16:22.999 "is_configured": true, 00:16:22.999 "data_offset": 2048, 00:16:22.999 "data_size": 63488 00:16:22.999 }, 00:16:22.999 { 00:16:22.999 "name": "BaseBdev4", 00:16:22.999 "uuid": "ce9054b6-d040-43de-8603-a3b0f7808edc", 00:16:22.999 "is_configured": true, 00:16:22.999 "data_offset": 2048, 00:16:22.999 "data_size": 63488 00:16:22.999 } 00:16:22.999 ] 00:16:22.999 }' 00:16:22.999 11:26:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.999 11:26:41 -- common/autotest_common.sh@10 -- # set +x 00:16:23.258 11:26:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:23.516 [2024-11-26 11:26:41.572160] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.516 [2024-11-26 11:26:41.572211] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.516 [2024-11-26 11:26:41.572303] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.516 11:26:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.775 11:26:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.775 "name": "Existed_Raid", 00:16:23.775 "uuid": "6defc7db-81e9-434a-96f0-7d0857658b3f", 00:16:23.775 "strip_size_kb": 64, 00:16:23.775 "state": "offline", 00:16:23.775 "raid_level": "raid0", 00:16:23.775 "superblock": true, 00:16:23.775 "num_base_bdevs": 4, 00:16:23.775 "num_base_bdevs_discovered": 3, 00:16:23.775 "num_base_bdevs_operational": 3, 00:16:23.775 "base_bdevs_list": [ 00:16:23.775 { 00:16:23.775 "name": null, 00:16:23.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.775 "is_configured": false, 00:16:23.775 "data_offset": 2048, 00:16:23.775 "data_size": 63488 00:16:23.775 }, 00:16:23.775 { 00:16:23.775 "name": "BaseBdev2", 00:16:23.775 "uuid": "c98103c2-ea3a-43c6-b086-d7c925b3e0c5", 00:16:23.775 "is_configured": true, 00:16:23.775 "data_offset": 2048, 00:16:23.775 "data_size": 63488 00:16:23.775 }, 00:16:23.775 { 00:16:23.775 "name": "BaseBdev3", 00:16:23.775 "uuid": "50a4c53b-d167-4057-8aa9-bf7742646ac9", 00:16:23.775 "is_configured": true, 00:16:23.775 "data_offset": 2048, 00:16:23.775 "data_size": 63488 00:16:23.775 }, 00:16:23.775 { 00:16:23.775 "name": "BaseBdev4", 00:16:23.775 "uuid": "ce9054b6-d040-43de-8603-a3b0f7808edc", 00:16:23.775 "is_configured": true, 00:16:23.775 "data_offset": 2048, 00:16:23.775 "data_size": 63488 00:16:23.775 } 00:16:23.775 ] 00:16:23.775 }' 00:16:23.775 11:26:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.775 11:26:41 -- common/autotest_common.sh@10 -- # set +x 00:16:24.034 11:26:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:24.034 11:26:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.034 11:26:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.034 11:26:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:24.292 11:26:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.292 11:26:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.292 11:26:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:24.292 [2024-11-26 11:26:42.519600] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.551 11:26:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:24.551 11:26:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.551 11:26:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.551 11:26:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:24.809 11:26:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.809 11:26:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.809 11:26:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:16:25.067 [2024-11-26 11:26:43.060811] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.067 11:26:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.067 11:26:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.067 11:26:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.068 11:26:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:25.326 11:26:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:25.326 11:26:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.326 11:26:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:25.326 [2024-11-26 11:26:43.558319] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:25.326 [2024-11-26 11:26:43.558382] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:25.585 11:26:43 -- bdev/bdev_raid.sh@287 -- # killprocess 84966 00:16:25.585 11:26:43 -- common/autotest_common.sh@936 -- # '[' -z 84966 ']' 00:16:25.585 11:26:43 -- common/autotest_common.sh@940 -- # kill -0 84966 00:16:25.585 11:26:43 -- common/autotest_common.sh@941 -- # uname 00:16:25.585 11:26:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.585 11:26:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84966 00:16:25.844 killing process with pid 84966 00:16:25.844 11:26:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.844 11:26:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.844 11:26:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84966' 00:16:25.844 11:26:43 -- common/autotest_common.sh@955 -- # kill 84966 00:16:25.844 11:26:43 -- common/autotest_common.sh@960 -- # wait 84966 00:16:25.844 [2024-11-26 11:26:43.831596] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.844 [2024-11-26 11:26:43.831686] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:25.844 00:16:25.844 real 0m11.437s 00:16:25.844 user 0m20.146s 00:16:25.844 sys 0m1.809s 00:16:25.844 11:26:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:25.844 ************************************ 00:16:25.844 END TEST raid_state_function_test_sb 00:16:25.844 ************************************ 00:16:25.844 11:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:25.844 11:26:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:25.844 11:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.844 11:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:25.844 ************************************ 00:16:25.844 START TEST 
raid_superblock_test 00:16:25.844 ************************************ 00:16:25.844 11:26:44 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=85360 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:25.844 11:26:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 85360 /var/tmp/spdk-raid.sock 00:16:25.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:25.844 11:26:44 -- common/autotest_common.sh@829 -- # '[' -z 85360 ']' 00:16:25.844 11:26:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:25.844 11:26:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.844 11:26:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:25.844 11:26:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.844 11:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:26.103 [2024-11-26 11:26:44.119311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
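raid_superblock_test exercises the same raid0 assembly, but through passthru bdevs layered on malloc disks, so the superblock written through each pt* device lands on the underlying malloc bdev. The RPCs traced below reduce to roughly this sketch (commands, sizes, and UUIDs as in the trace; the loop is editorial shorthand):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
      # Passthru bdev with a fixed UUID on top of each malloc disk.
      $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
              -u "00000000-0000-0000-0000-00000000000$i"
  done
  # Assemble raid0 across the passthru bdevs, persisting a superblock (-s).
  $rpc_py bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

Deleting the raid and the pt bdevs later leaves those superblocks behind on the malloc disks, which is why the re-create attempt from the raw malloc bdevs further down reports "Existing raid superblock found" and fails with -17 "File exists".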
00:16:26.103 [2024-11-26 11:26:44.119436] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85360 ] 00:16:26.103 [2024-11-26 11:26:44.276891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.103 [2024-11-26 11:26:44.318657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.362 [2024-11-26 11:26:44.355991] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.930 11:26:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.930 11:26:45 -- common/autotest_common.sh@862 -- # return 0 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.930 11:26:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:27.189 malloc1 00:16:27.189 11:26:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.448 [2024-11-26 11:26:45.499784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.448 [2024-11-26 11:26:45.499885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.448 [2024-11-26 11:26:45.499937] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:27.448 [2024-11-26 11:26:45.499957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.448 [2024-11-26 11:26:45.502318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.448 [2024-11-26 11:26:45.502372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.448 pt1 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.448 11:26:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:27.740 malloc2 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:27.740 [2024-11-26 11:26:45.906130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.740 [2024-11-26 11:26:45.906227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.740 [2024-11-26 11:26:45.906258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:27.740 [2024-11-26 11:26:45.906272] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.740 [2024-11-26 11:26:45.908723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.740 [2024-11-26 11:26:45.908779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.740 pt2 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.740 11:26:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:28.011 malloc3 00:16:28.011 11:26:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.270 [2024-11-26 11:26:46.348936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.270 [2024-11-26 11:26:46.349046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.270 [2024-11-26 11:26:46.349079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:28.270 [2024-11-26 11:26:46.349093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.270 [2024-11-26 11:26:46.351330] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.270 [2024-11-26 11:26:46.351384] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.270 pt3 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.270 11:26:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:28.530 malloc4 00:16:28.530 11:26:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:16:28.530 [2024-11-26 11:26:46.751614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:28.530 [2024-11-26 11:26:46.751709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.530 [2024-11-26 11:26:46.751747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:16:28.530 [2024-11-26 11:26:46.751762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.530 [2024-11-26 11:26:46.754426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.530 [2024-11-26 11:26:46.754481] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:28.530 pt4 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:28.789 [2024-11-26 11:26:46.943657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.789 [2024-11-26 11:26:46.945680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.789 [2024-11-26 11:26:46.945796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.789 [2024-11-26 11:26:46.945857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:28.789 [2024-11-26 11:26:46.946097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:16:28.789 [2024-11-26 11:26:46.946113] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:28.789 [2024-11-26 11:26:46.946229] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:28.789 [2024-11-26 11:26:46.946576] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:16:28.789 [2024-11-26 11:26:46.946601] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:16:28.789 [2024-11-26 11:26:46.946732] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.789 11:26:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.047 11:26:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.047 "name": "raid_bdev1", 00:16:29.047 "uuid": 
"6c6a630e-f92a-4797-8f0f-9b32f905c15a", 00:16:29.047 "strip_size_kb": 64, 00:16:29.047 "state": "online", 00:16:29.047 "raid_level": "raid0", 00:16:29.047 "superblock": true, 00:16:29.047 "num_base_bdevs": 4, 00:16:29.047 "num_base_bdevs_discovered": 4, 00:16:29.047 "num_base_bdevs_operational": 4, 00:16:29.047 "base_bdevs_list": [ 00:16:29.047 { 00:16:29.047 "name": "pt1", 00:16:29.047 "uuid": "7d159bea-2032-50eb-91a0-97313d002de6", 00:16:29.047 "is_configured": true, 00:16:29.047 "data_offset": 2048, 00:16:29.047 "data_size": 63488 00:16:29.047 }, 00:16:29.047 { 00:16:29.047 "name": "pt2", 00:16:29.047 "uuid": "23758874-c6c7-5630-b3df-77a9b93b12f9", 00:16:29.047 "is_configured": true, 00:16:29.047 "data_offset": 2048, 00:16:29.047 "data_size": 63488 00:16:29.047 }, 00:16:29.047 { 00:16:29.047 "name": "pt3", 00:16:29.047 "uuid": "2d62e309-cacb-5480-93ff-2a04db232a9b", 00:16:29.047 "is_configured": true, 00:16:29.047 "data_offset": 2048, 00:16:29.047 "data_size": 63488 00:16:29.047 }, 00:16:29.047 { 00:16:29.047 "name": "pt4", 00:16:29.047 "uuid": "ffa7266c-fc1d-5221-a540-9060b67d48fb", 00:16:29.047 "is_configured": true, 00:16:29.047 "data_offset": 2048, 00:16:29.047 "data_size": 63488 00:16:29.047 } 00:16:29.047 ] 00:16:29.047 }' 00:16:29.047 11:26:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.047 11:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:29.305 11:26:47 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.305 11:26:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:29.564 [2024-11-26 11:26:47.764244] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.564 11:26:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6c6a630e-f92a-4797-8f0f-9b32f905c15a 00:16:29.564 11:26:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 6c6a630e-f92a-4797-8f0f-9b32f905c15a ']' 00:16:29.564 11:26:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:29.823 [2024-11-26 11:26:47.955905] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.823 [2024-11-26 11:26:47.955959] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.823 [2024-11-26 11:26:47.956045] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.823 [2024-11-26 11:26:47.956126] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.823 [2024-11-26 11:26:47.956139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:16:29.823 11:26:47 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.823 11:26:47 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:30.082 11:26:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:30.082 11:26:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:30.082 11:26:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.082 11:26:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:30.341 11:26:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.341 11:26:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:16:30.600 11:26:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.600 11:26:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:30.859 11:26:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.859 11:26:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:30.859 11:26:49 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:30.859 11:26:49 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.118 11:26:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:31.118 11:26:49 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:31.118 11:26:49 -- common/autotest_common.sh@650 -- # local es=0 00:16:31.118 11:26:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:31.118 11:26:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.118 11:26:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.118 11:26:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.118 11:26:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.118 11:26:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.118 11:26:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.118 11:26:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.118 11:26:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:31.118 11:26:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:31.376 [2024-11-26 11:26:49.508494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.376 [2024-11-26 11:26:49.510530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.376 [2024-11-26 11:26:49.510605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:31.376 [2024-11-26 11:26:49.510664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:31.376 [2024-11-26 11:26:49.510721] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:31.376 [2024-11-26 11:26:49.510791] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:31.376 [2024-11-26 11:26:49.510826] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:31.376 [2024-11-26 11:26:49.510865] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:16:31.376 [2024-11-26 11:26:49.510902] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.376 [2024-11-26 11:26:49.510917] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:16:31.376 request: 00:16:31.376 { 00:16:31.376 "name": "raid_bdev1", 00:16:31.376 "raid_level": "raid0", 00:16:31.376 "base_bdevs": [ 00:16:31.376 "malloc1", 00:16:31.376 "malloc2", 00:16:31.376 "malloc3", 00:16:31.376 "malloc4" 00:16:31.376 ], 00:16:31.376 "superblock": false, 00:16:31.376 "strip_size_kb": 64, 00:16:31.376 "method": "bdev_raid_create", 00:16:31.376 "req_id": 1 00:16:31.376 } 00:16:31.376 Got JSON-RPC error response 00:16:31.376 response: 00:16:31.376 { 00:16:31.376 "code": -17, 00:16:31.377 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.377 } 00:16:31.377 11:26:49 -- common/autotest_common.sh@653 -- # es=1 00:16:31.377 11:26:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.377 11:26:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.377 11:26:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.377 11:26:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:31.377 11:26:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.635 11:26:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:31.635 11:26:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:31.635 11:26:49 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.894 [2024-11-26 11:26:49.952531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.894 [2024-11-26 11:26:49.952617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.894 [2024-11-26 11:26:49.952660] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:16:31.894 [2024-11-26 11:26:49.952673] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.894 [2024-11-26 11:26:49.954963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.894 [2024-11-26 11:26:49.955017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.894 [2024-11-26 11:26:49.955094] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:31.894 [2024-11-26 11:26:49.955145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.894 pt1 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.894 11:26:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.153 11:26:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.153 "name": "raid_bdev1", 00:16:32.153 "uuid": "6c6a630e-f92a-4797-8f0f-9b32f905c15a", 00:16:32.153 "strip_size_kb": 64, 00:16:32.153 "state": "configuring", 00:16:32.153 "raid_level": "raid0", 00:16:32.153 "superblock": true, 00:16:32.153 "num_base_bdevs": 4, 00:16:32.153 "num_base_bdevs_discovered": 1, 00:16:32.153 "num_base_bdevs_operational": 4, 00:16:32.153 "base_bdevs_list": [ 00:16:32.153 { 00:16:32.153 "name": "pt1", 00:16:32.153 "uuid": "7d159bea-2032-50eb-91a0-97313d002de6", 00:16:32.153 "is_configured": true, 00:16:32.153 "data_offset": 2048, 00:16:32.153 "data_size": 63488 00:16:32.153 }, 00:16:32.153 { 00:16:32.153 "name": null, 00:16:32.153 "uuid": "23758874-c6c7-5630-b3df-77a9b93b12f9", 00:16:32.153 "is_configured": false, 00:16:32.153 "data_offset": 2048, 00:16:32.153 "data_size": 63488 00:16:32.153 }, 00:16:32.153 { 00:16:32.153 "name": null, 00:16:32.153 "uuid": "2d62e309-cacb-5480-93ff-2a04db232a9b", 00:16:32.153 "is_configured": false, 00:16:32.153 "data_offset": 2048, 00:16:32.153 "data_size": 63488 00:16:32.153 }, 00:16:32.153 { 00:16:32.153 "name": null, 00:16:32.153 "uuid": "ffa7266c-fc1d-5221-a540-9060b67d48fb", 00:16:32.153 "is_configured": false, 00:16:32.153 "data_offset": 2048, 00:16:32.153 "data_size": 63488 00:16:32.153 } 00:16:32.153 ] 00:16:32.153 }' 00:16:32.153 11:26:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.153 11:26:50 -- common/autotest_common.sh@10 -- # set +x 00:16:32.411 11:26:50 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:16:32.411 11:26:50 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.670 [2024-11-26 11:26:50.748819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.670 [2024-11-26 11:26:50.748929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.670 [2024-11-26 11:26:50.748983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:16:32.670 [2024-11-26 11:26:50.748997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.670 [2024-11-26 11:26:50.749408] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.670 [2024-11-26 11:26:50.749430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.670 [2024-11-26 11:26:50.749514] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:32.670 [2024-11-26 11:26:50.749541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.670 pt2 00:16:32.670 11:26:50 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:32.929 [2024-11-26 11:26:50.952885] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:32.929 11:26:50 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.929 11:26:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.188 11:26:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.188 "name": "raid_bdev1", 00:16:33.188 "uuid": "6c6a630e-f92a-4797-8f0f-9b32f905c15a", 00:16:33.188 "strip_size_kb": 64, 00:16:33.188 "state": "configuring", 00:16:33.188 "raid_level": "raid0", 00:16:33.188 "superblock": true, 00:16:33.188 "num_base_bdevs": 4, 00:16:33.188 "num_base_bdevs_discovered": 1, 00:16:33.188 "num_base_bdevs_operational": 4, 00:16:33.188 "base_bdevs_list": [ 00:16:33.188 { 00:16:33.188 "name": "pt1", 00:16:33.188 "uuid": "7d159bea-2032-50eb-91a0-97313d002de6", 00:16:33.188 "is_configured": true, 00:16:33.188 "data_offset": 2048, 00:16:33.188 "data_size": 63488 00:16:33.188 }, 00:16:33.188 { 00:16:33.188 "name": null, 00:16:33.188 "uuid": "23758874-c6c7-5630-b3df-77a9b93b12f9", 00:16:33.188 "is_configured": false, 00:16:33.188 "data_offset": 2048, 00:16:33.188 "data_size": 63488 00:16:33.188 }, 00:16:33.188 { 00:16:33.188 "name": null, 00:16:33.188 "uuid": "2d62e309-cacb-5480-93ff-2a04db232a9b", 00:16:33.188 "is_configured": false, 00:16:33.188 "data_offset": 2048, 00:16:33.188 "data_size": 63488 00:16:33.188 }, 00:16:33.188 { 00:16:33.188 "name": null, 00:16:33.188 "uuid": "ffa7266c-fc1d-5221-a540-9060b67d48fb", 00:16:33.188 "is_configured": false, 00:16:33.188 "data_offset": 2048, 00:16:33.188 "data_size": 63488 00:16:33.188 } 00:16:33.188 ] 00:16:33.188 }' 00:16:33.189 11:26:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.189 11:26:51 -- common/autotest_common.sh@10 -- # set +x 00:16:33.448 11:26:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:33.448 11:26:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:33.448 11:26:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.707 [2024-11-26 11:26:51.713099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.707 [2024-11-26 11:26:51.713196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.707 [2024-11-26 11:26:51.713222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:16:33.707 [2024-11-26 11:26:51.713238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.707 [2024-11-26 11:26:51.713659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.707 [2024-11-26 11:26:51.713685] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.707 [2024-11-26 11:26:51.713760] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:33.707 [2024-11-26 11:26:51.713794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.707 pt2 00:16:33.707 11:26:51 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:33.707 11:26:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:33.707 11:26:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:33.966 [2024-11-26 11:26:51.953172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:33.966 [2024-11-26 11:26:51.953273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.966 [2024-11-26 11:26:51.953302] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:16:33.966 [2024-11-26 11:26:51.953317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.966 [2024-11-26 11:26:51.953802] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.966 [2024-11-26 11:26:51.953828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:33.966 [2024-11-26 11:26:51.953917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:33.966 [2024-11-26 11:26:51.953956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.966 pt3 00:16:33.966 11:26:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:33.966 11:26:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:33.966 11:26:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:33.966 [2024-11-26 11:26:52.149205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:33.966 [2024-11-26 11:26:52.149306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.966 [2024-11-26 11:26:52.149332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:16:33.966 [2024-11-26 11:26:52.149349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.966 [2024-11-26 11:26:52.149724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.966 [2024-11-26 11:26:52.149748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:33.966 [2024-11-26 11:26:52.149811] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:33.966 [2024-11-26 11:26:52.149840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.966 [2024-11-26 11:26:52.150005] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:16:33.966 [2024-11-26 11:26:52.150022] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:33.966 [2024-11-26 11:26:52.150108] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:16:33.966 [2024-11-26 11:26:52.150429] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:16:33.966 [2024-11-26 11:26:52.150444] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:16:33.966 [2024-11-26 11:26:52.150544] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.966 pt4 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.966 11:26:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.225 11:26:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.225 "name": "raid_bdev1", 00:16:34.225 "uuid": "6c6a630e-f92a-4797-8f0f-9b32f905c15a", 00:16:34.225 "strip_size_kb": 64, 00:16:34.225 "state": "online", 00:16:34.225 "raid_level": "raid0", 00:16:34.225 "superblock": true, 00:16:34.225 "num_base_bdevs": 4, 00:16:34.225 "num_base_bdevs_discovered": 4, 00:16:34.225 "num_base_bdevs_operational": 4, 00:16:34.225 "base_bdevs_list": [ 00:16:34.225 { 00:16:34.225 "name": "pt1", 00:16:34.225 "uuid": "7d159bea-2032-50eb-91a0-97313d002de6", 00:16:34.225 "is_configured": true, 00:16:34.225 "data_offset": 2048, 00:16:34.225 "data_size": 63488 00:16:34.225 }, 00:16:34.225 { 00:16:34.225 "name": "pt2", 00:16:34.225 "uuid": "23758874-c6c7-5630-b3df-77a9b93b12f9", 00:16:34.225 "is_configured": true, 00:16:34.225 "data_offset": 2048, 00:16:34.225 "data_size": 63488 00:16:34.225 }, 00:16:34.225 { 00:16:34.225 "name": "pt3", 00:16:34.225 "uuid": "2d62e309-cacb-5480-93ff-2a04db232a9b", 00:16:34.225 "is_configured": true, 00:16:34.225 "data_offset": 2048, 00:16:34.225 "data_size": 63488 00:16:34.225 }, 00:16:34.225 { 00:16:34.225 "name": "pt4", 00:16:34.225 "uuid": "ffa7266c-fc1d-5221-a540-9060b67d48fb", 00:16:34.225 "is_configured": true, 00:16:34.225 "data_offset": 2048, 00:16:34.225 "data_size": 63488 00:16:34.225 } 00:16:34.225 ] 00:16:34.225 }' 00:16:34.225 11:26:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.225 11:26:52 -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 11:26:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:34.484 11:26:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:34.743 [2024-11-26 11:26:52.853697] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.743 11:26:52 -- bdev/bdev_raid.sh@430 -- # '[' 6c6a630e-f92a-4797-8f0f-9b32f905c15a '!=' 6c6a630e-f92a-4797-8f0f-9b32f905c15a ']' 00:16:34.743 11:26:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:34.743 11:26:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:34.743 11:26:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:34.743 11:26:52 -- bdev/bdev_raid.sh@511 -- # killprocess 85360 00:16:34.743 11:26:52 -- common/autotest_common.sh@936 -- # '[' -z 85360 ']' 00:16:34.743 11:26:52 -- common/autotest_common.sh@940 -- # kill -0 85360 00:16:34.743 11:26:52 -- common/autotest_common.sh@941 -- # uname 00:16:34.743 11:26:52 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.743 11:26:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85360 00:16:34.743 11:26:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.743 killing process with pid 85360 00:16:34.743 11:26:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.743 11:26:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85360' 00:16:34.743 11:26:52 -- common/autotest_common.sh@955 -- # kill 85360 00:16:34.743 [2024-11-26 11:26:52.903089] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.743 11:26:52 -- common/autotest_common.sh@960 -- # wait 85360 00:16:34.743 [2024-11-26 11:26:52.903181] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.743 [2024-11-26 11:26:52.903262] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.743 [2024-11-26 11:26:52.903276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:16:34.743 [2024-11-26 11:26:52.933339] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:35.003 00:16:35.003 real 0m9.042s 00:16:35.003 user 0m15.851s 00:16:35.003 sys 0m1.311s 00:16:35.003 11:26:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:35.003 ************************************ 00:16:35.003 END TEST raid_superblock_test 00:16:35.003 ************************************ 00:16:35.003 11:26:53 -- common/autotest_common.sh@10 -- # set +x 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:35.003 11:26:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:35.003 11:26:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.003 11:26:53 -- common/autotest_common.sh@10 -- # set +x 00:16:35.003 ************************************ 00:16:35.003 START TEST raid_state_function_test 00:16:35.003 ************************************ 00:16:35.003 11:26:53 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:35.003 
11:26:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=85640 00:16:35.003 Process raid pid: 85640 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 85640' 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 85640 /var/tmp/spdk-raid.sock 00:16:35.003 11:26:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:35.003 11:26:53 -- common/autotest_common.sh@829 -- # '[' -z 85640 ']' 00:16:35.003 11:26:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:35.003 11:26:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:35.003 11:26:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:35.003 11:26:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.003 11:26:53 -- common/autotest_common.sh@10 -- # set +x 00:16:35.003 [2024-11-26 11:26:53.227389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:35.003 [2024-11-26 11:26:53.227543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.262 [2024-11-26 11:26:53.381955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.262 [2024-11-26 11:26:53.416554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.262 [2024-11-26 11:26:53.448239] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.200 11:26:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.200 11:26:54 -- common/autotest_common.sh@862 -- # return 0 00:16:36.200 11:26:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:36.200 [2024-11-26 11:26:54.420059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.201 [2024-11-26 11:26:54.420121] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.201 [2024-11-26 11:26:54.420137] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.201 [2024-11-26 11:26:54.420148] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.201 [2024-11-26 11:26:54.420157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.201 [2024-11-26 11:26:54.420169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.201 [2024-11-26 11:26:54.420181] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.201 [2024-11-26 11:26:54.420191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.201 11:26:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.460 11:26:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.460 11:26:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.460 11:26:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.460 "name": "Existed_Raid", 00:16:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.460 "strip_size_kb": 64, 00:16:36.460 "state": "configuring", 00:16:36.460 "raid_level": "concat", 00:16:36.460 "superblock": false, 00:16:36.460 "num_base_bdevs": 4, 00:16:36.460 "num_base_bdevs_discovered": 0, 00:16:36.460 "num_base_bdevs_operational": 4, 00:16:36.460 "base_bdevs_list": [ 00:16:36.460 { 00:16:36.460 
"name": "BaseBdev1", 00:16:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.460 "is_configured": false, 00:16:36.460 "data_offset": 0, 00:16:36.460 "data_size": 0 00:16:36.460 }, 00:16:36.460 { 00:16:36.460 "name": "BaseBdev2", 00:16:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.460 "is_configured": false, 00:16:36.460 "data_offset": 0, 00:16:36.460 "data_size": 0 00:16:36.460 }, 00:16:36.460 { 00:16:36.460 "name": "BaseBdev3", 00:16:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.460 "is_configured": false, 00:16:36.460 "data_offset": 0, 00:16:36.460 "data_size": 0 00:16:36.460 }, 00:16:36.460 { 00:16:36.460 "name": "BaseBdev4", 00:16:36.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.460 "is_configured": false, 00:16:36.460 "data_offset": 0, 00:16:36.460 "data_size": 0 00:16:36.460 } 00:16:36.460 ] 00:16:36.460 }' 00:16:36.460 11:26:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.460 11:26:54 -- common/autotest_common.sh@10 -- # set +x 00:16:37.026 11:26:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:37.027 [2024-11-26 11:26:55.168220] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.027 [2024-11-26 11:26:55.168277] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:37.027 11:26:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:37.285 [2024-11-26 11:26:55.424343] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.285 [2024-11-26 11:26:55.424404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.285 [2024-11-26 11:26:55.424419] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.285 [2024-11-26 11:26:55.424429] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.285 [2024-11-26 11:26:55.424438] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.285 [2024-11-26 11:26:55.424450] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.285 [2024-11-26 11:26:55.424459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:37.285 [2024-11-26 11:26:55.424468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.285 11:26:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:37.544 [2024-11-26 11:26:55.630578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.544 BaseBdev1 00:16:37.544 11:26:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:37.544 11:26:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:37.544 11:26:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:37.544 11:26:55 -- common/autotest_common.sh@899 -- # local i 00:16:37.544 11:26:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:37.544 11:26:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:37.544 11:26:55 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.803 11:26:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.062 [ 00:16:38.062 { 00:16:38.062 "name": "BaseBdev1", 00:16:38.062 "aliases": [ 00:16:38.062 "7ca27ff5-0b2b-4b5e-b368-7562d1729da0" 00:16:38.062 ], 00:16:38.062 "product_name": "Malloc disk", 00:16:38.062 "block_size": 512, 00:16:38.062 "num_blocks": 65536, 00:16:38.062 "uuid": "7ca27ff5-0b2b-4b5e-b368-7562d1729da0", 00:16:38.062 "assigned_rate_limits": { 00:16:38.062 "rw_ios_per_sec": 0, 00:16:38.062 "rw_mbytes_per_sec": 0, 00:16:38.062 "r_mbytes_per_sec": 0, 00:16:38.062 "w_mbytes_per_sec": 0 00:16:38.062 }, 00:16:38.062 "claimed": true, 00:16:38.062 "claim_type": "exclusive_write", 00:16:38.062 "zoned": false, 00:16:38.062 "supported_io_types": { 00:16:38.062 "read": true, 00:16:38.062 "write": true, 00:16:38.062 "unmap": true, 00:16:38.062 "write_zeroes": true, 00:16:38.062 "flush": true, 00:16:38.062 "reset": true, 00:16:38.062 "compare": false, 00:16:38.062 "compare_and_write": false, 00:16:38.062 "abort": true, 00:16:38.062 "nvme_admin": false, 00:16:38.062 "nvme_io": false 00:16:38.062 }, 00:16:38.062 "memory_domains": [ 00:16:38.062 { 00:16:38.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.062 "dma_device_type": 2 00:16:38.062 } 00:16:38.062 ], 00:16:38.062 "driver_specific": {} 00:16:38.062 } 00:16:38.062 ] 00:16:38.062 11:26:56 -- common/autotest_common.sh@905 -- # return 0 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.062 11:26:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.321 11:26:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.321 "name": "Existed_Raid", 00:16:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.321 "strip_size_kb": 64, 00:16:38.321 "state": "configuring", 00:16:38.321 "raid_level": "concat", 00:16:38.321 "superblock": false, 00:16:38.321 "num_base_bdevs": 4, 00:16:38.321 "num_base_bdevs_discovered": 1, 00:16:38.321 "num_base_bdevs_operational": 4, 00:16:38.321 "base_bdevs_list": [ 00:16:38.321 { 00:16:38.321 "name": "BaseBdev1", 00:16:38.321 "uuid": "7ca27ff5-0b2b-4b5e-b368-7562d1729da0", 00:16:38.321 "is_configured": true, 00:16:38.321 "data_offset": 0, 00:16:38.321 "data_size": 65536 00:16:38.321 }, 00:16:38.321 { 00:16:38.321 "name": "BaseBdev2", 00:16:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.321 "is_configured": false, 00:16:38.321 "data_offset": 0, 00:16:38.321 "data_size": 0 00:16:38.321 }, 
00:16:38.321 { 00:16:38.321 "name": "BaseBdev3", 00:16:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.321 "is_configured": false, 00:16:38.321 "data_offset": 0, 00:16:38.321 "data_size": 0 00:16:38.321 }, 00:16:38.321 { 00:16:38.321 "name": "BaseBdev4", 00:16:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.321 "is_configured": false, 00:16:38.321 "data_offset": 0, 00:16:38.321 "data_size": 0 00:16:38.321 } 00:16:38.321 ] 00:16:38.321 }' 00:16:38.321 11:26:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.321 11:26:56 -- common/autotest_common.sh@10 -- # set +x 00:16:38.580 11:26:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:38.839 [2024-11-26 11:26:56.879109] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.839 [2024-11-26 11:26:56.879177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:38.839 11:26:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:38.839 11:26:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:39.098 [2024-11-26 11:26:57.083220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.098 [2024-11-26 11:26:57.085281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.098 [2024-11-26 11:26:57.085335] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.098 [2024-11-26 11:26:57.085351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.098 [2024-11-26 11:26:57.085362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.098 [2024-11-26 11:26:57.085372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:39.098 [2024-11-26 11:26:57.085381] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.098 11:26:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.357 11:26:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.357 "name": "Existed_Raid", 00:16:39.357 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.357 "strip_size_kb": 64, 00:16:39.357 "state": "configuring", 00:16:39.357 "raid_level": "concat", 00:16:39.357 "superblock": false, 00:16:39.357 "num_base_bdevs": 4, 00:16:39.357 "num_base_bdevs_discovered": 1, 00:16:39.357 "num_base_bdevs_operational": 4, 00:16:39.357 "base_bdevs_list": [ 00:16:39.357 { 00:16:39.357 "name": "BaseBdev1", 00:16:39.357 "uuid": "7ca27ff5-0b2b-4b5e-b368-7562d1729da0", 00:16:39.357 "is_configured": true, 00:16:39.357 "data_offset": 0, 00:16:39.357 "data_size": 65536 00:16:39.357 }, 00:16:39.357 { 00:16:39.357 "name": "BaseBdev2", 00:16:39.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.357 "is_configured": false, 00:16:39.357 "data_offset": 0, 00:16:39.357 "data_size": 0 00:16:39.357 }, 00:16:39.357 { 00:16:39.357 "name": "BaseBdev3", 00:16:39.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.357 "is_configured": false, 00:16:39.357 "data_offset": 0, 00:16:39.357 "data_size": 0 00:16:39.357 }, 00:16:39.357 { 00:16:39.357 "name": "BaseBdev4", 00:16:39.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.357 "is_configured": false, 00:16:39.357 "data_offset": 0, 00:16:39.357 "data_size": 0 00:16:39.357 } 00:16:39.357 ] 00:16:39.357 }' 00:16:39.357 11:26:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.357 11:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:39.618 11:26:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.618 [2024-11-26 11:26:57.835730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.618 BaseBdev2 00:16:39.618 11:26:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:39.618 11:26:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:39.891 11:26:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.891 11:26:57 -- common/autotest_common.sh@899 -- # local i 00:16:39.891 11:26:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.891 11:26:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.891 11:26:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.891 11:26:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.150 [ 00:16:40.150 { 00:16:40.150 "name": "BaseBdev2", 00:16:40.150 "aliases": [ 00:16:40.150 "f9285829-afd8-4787-adb8-65cb448e3762" 00:16:40.150 ], 00:16:40.150 "product_name": "Malloc disk", 00:16:40.150 "block_size": 512, 00:16:40.150 "num_blocks": 65536, 00:16:40.150 "uuid": "f9285829-afd8-4787-adb8-65cb448e3762", 00:16:40.150 "assigned_rate_limits": { 00:16:40.150 "rw_ios_per_sec": 0, 00:16:40.150 "rw_mbytes_per_sec": 0, 00:16:40.150 "r_mbytes_per_sec": 0, 00:16:40.150 "w_mbytes_per_sec": 0 00:16:40.150 }, 00:16:40.150 "claimed": true, 00:16:40.150 "claim_type": "exclusive_write", 00:16:40.150 "zoned": false, 00:16:40.150 "supported_io_types": { 00:16:40.150 "read": true, 00:16:40.150 "write": true, 00:16:40.150 "unmap": true, 00:16:40.150 "write_zeroes": true, 00:16:40.150 "flush": true, 00:16:40.150 "reset": true, 00:16:40.150 "compare": false, 00:16:40.150 "compare_and_write": false, 00:16:40.150 "abort": true, 00:16:40.150 "nvme_admin": false, 00:16:40.150 "nvme_io": false 00:16:40.150 }, 00:16:40.150 "memory_domains": [ 
00:16:40.150 { 00:16:40.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.150 "dma_device_type": 2 00:16:40.150 } 00:16:40.150 ], 00:16:40.150 "driver_specific": {} 00:16:40.150 } 00:16:40.150 ] 00:16:40.150 11:26:58 -- common/autotest_common.sh@905 -- # return 0 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.150 11:26:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.409 11:26:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.409 "name": "Existed_Raid", 00:16:40.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.409 "strip_size_kb": 64, 00:16:40.409 "state": "configuring", 00:16:40.409 "raid_level": "concat", 00:16:40.409 "superblock": false, 00:16:40.409 "num_base_bdevs": 4, 00:16:40.409 "num_base_bdevs_discovered": 2, 00:16:40.409 "num_base_bdevs_operational": 4, 00:16:40.409 "base_bdevs_list": [ 00:16:40.409 { 00:16:40.409 "name": "BaseBdev1", 00:16:40.409 "uuid": "7ca27ff5-0b2b-4b5e-b368-7562d1729da0", 00:16:40.409 "is_configured": true, 00:16:40.409 "data_offset": 0, 00:16:40.409 "data_size": 65536 00:16:40.409 }, 00:16:40.409 { 00:16:40.409 "name": "BaseBdev2", 00:16:40.409 "uuid": "f9285829-afd8-4787-adb8-65cb448e3762", 00:16:40.409 "is_configured": true, 00:16:40.409 "data_offset": 0, 00:16:40.409 "data_size": 65536 00:16:40.409 }, 00:16:40.409 { 00:16:40.409 "name": "BaseBdev3", 00:16:40.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.409 "is_configured": false, 00:16:40.409 "data_offset": 0, 00:16:40.409 "data_size": 0 00:16:40.409 }, 00:16:40.409 { 00:16:40.409 "name": "BaseBdev4", 00:16:40.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.409 "is_configured": false, 00:16:40.409 "data_offset": 0, 00:16:40.409 "data_size": 0 00:16:40.409 } 00:16:40.409 ] 00:16:40.409 }' 00:16:40.409 11:26:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.409 11:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:40.668 11:26:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.927 [2024-11-26 11:26:59.104475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.927 BaseBdev3 00:16:40.927 11:26:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:40.927 11:26:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:40.927 11:26:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:40.927 
11:26:59 -- common/autotest_common.sh@899 -- # local i 00:16:40.927 11:26:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:40.927 11:26:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:40.927 11:26:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.185 11:26:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.443 [ 00:16:41.443 { 00:16:41.443 "name": "BaseBdev3", 00:16:41.443 "aliases": [ 00:16:41.443 "ea9ad616-c930-402e-882c-a5a7ef8b303a" 00:16:41.443 ], 00:16:41.443 "product_name": "Malloc disk", 00:16:41.443 "block_size": 512, 00:16:41.443 "num_blocks": 65536, 00:16:41.443 "uuid": "ea9ad616-c930-402e-882c-a5a7ef8b303a", 00:16:41.443 "assigned_rate_limits": { 00:16:41.443 "rw_ios_per_sec": 0, 00:16:41.443 "rw_mbytes_per_sec": 0, 00:16:41.443 "r_mbytes_per_sec": 0, 00:16:41.443 "w_mbytes_per_sec": 0 00:16:41.443 }, 00:16:41.443 "claimed": true, 00:16:41.443 "claim_type": "exclusive_write", 00:16:41.443 "zoned": false, 00:16:41.443 "supported_io_types": { 00:16:41.443 "read": true, 00:16:41.443 "write": true, 00:16:41.443 "unmap": true, 00:16:41.443 "write_zeroes": true, 00:16:41.443 "flush": true, 00:16:41.443 "reset": true, 00:16:41.443 "compare": false, 00:16:41.443 "compare_and_write": false, 00:16:41.443 "abort": true, 00:16:41.443 "nvme_admin": false, 00:16:41.443 "nvme_io": false 00:16:41.443 }, 00:16:41.443 "memory_domains": [ 00:16:41.443 { 00:16:41.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.443 "dma_device_type": 2 00:16:41.443 } 00:16:41.443 ], 00:16:41.443 "driver_specific": {} 00:16:41.443 } 00:16:41.443 ] 00:16:41.443 11:26:59 -- common/autotest_common.sh@905 -- # return 0 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.443 11:26:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.701 11:26:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.701 "name": "Existed_Raid", 00:16:41.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.701 "strip_size_kb": 64, 00:16:41.701 "state": "configuring", 00:16:41.701 "raid_level": "concat", 00:16:41.701 "superblock": false, 00:16:41.701 "num_base_bdevs": 4, 00:16:41.701 "num_base_bdevs_discovered": 3, 00:16:41.701 "num_base_bdevs_operational": 4, 00:16:41.701 "base_bdevs_list": [ 00:16:41.701 { 00:16:41.701 "name": 
"BaseBdev1", 00:16:41.701 "uuid": "7ca27ff5-0b2b-4b5e-b368-7562d1729da0", 00:16:41.701 "is_configured": true, 00:16:41.701 "data_offset": 0, 00:16:41.701 "data_size": 65536 00:16:41.701 }, 00:16:41.701 { 00:16:41.701 "name": "BaseBdev2", 00:16:41.701 "uuid": "f9285829-afd8-4787-adb8-65cb448e3762", 00:16:41.701 "is_configured": true, 00:16:41.701 "data_offset": 0, 00:16:41.701 "data_size": 65536 00:16:41.701 }, 00:16:41.701 { 00:16:41.701 "name": "BaseBdev3", 00:16:41.701 "uuid": "ea9ad616-c930-402e-882c-a5a7ef8b303a", 00:16:41.701 "is_configured": true, 00:16:41.701 "data_offset": 0, 00:16:41.701 "data_size": 65536 00:16:41.701 }, 00:16:41.701 { 00:16:41.701 "name": "BaseBdev4", 00:16:41.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.701 "is_configured": false, 00:16:41.701 "data_offset": 0, 00:16:41.701 "data_size": 0 00:16:41.701 } 00:16:41.701 ] 00:16:41.701 }' 00:16:41.701 11:26:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.701 11:26:59 -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 11:27:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:42.219 [2024-11-26 11:27:00.285345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:42.219 BaseBdev4 00:16:42.219 [2024-11-26 11:27:00.285604] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:42.219 [2024-11-26 11:27:00.285636] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:42.219 [2024-11-26 11:27:00.285774] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:42.219 [2024-11-26 11:27:00.286155] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:42.219 [2024-11-26 11:27:00.286175] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:42.219 [2024-11-26 11:27:00.286419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.219 11:27:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:42.219 11:27:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:42.219 11:27:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:42.219 11:27:00 -- common/autotest_common.sh@899 -- # local i 00:16:42.219 11:27:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:42.219 11:27:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:42.219 11:27:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.477 11:27:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:42.736 [ 00:16:42.736 { 00:16:42.736 "name": "BaseBdev4", 00:16:42.736 "aliases": [ 00:16:42.736 "49408e80-19a7-47b4-bb7a-d7237bb537f0" 00:16:42.736 ], 00:16:42.736 "product_name": "Malloc disk", 00:16:42.737 "block_size": 512, 00:16:42.737 "num_blocks": 65536, 00:16:42.737 "uuid": "49408e80-19a7-47b4-bb7a-d7237bb537f0", 00:16:42.737 "assigned_rate_limits": { 00:16:42.737 "rw_ios_per_sec": 0, 00:16:42.737 "rw_mbytes_per_sec": 0, 00:16:42.737 "r_mbytes_per_sec": 0, 00:16:42.737 "w_mbytes_per_sec": 0 00:16:42.737 }, 00:16:42.737 "claimed": true, 00:16:42.737 "claim_type": "exclusive_write", 00:16:42.737 "zoned": false, 00:16:42.737 
"supported_io_types": { 00:16:42.737 "read": true, 00:16:42.737 "write": true, 00:16:42.737 "unmap": true, 00:16:42.737 "write_zeroes": true, 00:16:42.737 "flush": true, 00:16:42.737 "reset": true, 00:16:42.737 "compare": false, 00:16:42.737 "compare_and_write": false, 00:16:42.737 "abort": true, 00:16:42.737 "nvme_admin": false, 00:16:42.737 "nvme_io": false 00:16:42.737 }, 00:16:42.737 "memory_domains": [ 00:16:42.737 { 00:16:42.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.737 "dma_device_type": 2 00:16:42.737 } 00:16:42.737 ], 00:16:42.737 "driver_specific": {} 00:16:42.737 } 00:16:42.737 ] 00:16:42.737 11:27:00 -- common/autotest_common.sh@905 -- # return 0 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.737 11:27:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.996 11:27:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.996 "name": "Existed_Raid", 00:16:42.996 "uuid": "4eb58375-d600-4227-acd8-57a1e7e15ae6", 00:16:42.996 "strip_size_kb": 64, 00:16:42.996 "state": "online", 00:16:42.996 "raid_level": "concat", 00:16:42.996 "superblock": false, 00:16:42.996 "num_base_bdevs": 4, 00:16:42.996 "num_base_bdevs_discovered": 4, 00:16:42.996 "num_base_bdevs_operational": 4, 00:16:42.996 "base_bdevs_list": [ 00:16:42.996 { 00:16:42.996 "name": "BaseBdev1", 00:16:42.996 "uuid": "7ca27ff5-0b2b-4b5e-b368-7562d1729da0", 00:16:42.996 "is_configured": true, 00:16:42.996 "data_offset": 0, 00:16:42.996 "data_size": 65536 00:16:42.996 }, 00:16:42.996 { 00:16:42.996 "name": "BaseBdev2", 00:16:42.996 "uuid": "f9285829-afd8-4787-adb8-65cb448e3762", 00:16:42.996 "is_configured": true, 00:16:42.996 "data_offset": 0, 00:16:42.996 "data_size": 65536 00:16:42.996 }, 00:16:42.996 { 00:16:42.996 "name": "BaseBdev3", 00:16:42.996 "uuid": "ea9ad616-c930-402e-882c-a5a7ef8b303a", 00:16:42.996 "is_configured": true, 00:16:42.996 "data_offset": 0, 00:16:42.996 "data_size": 65536 00:16:42.996 }, 00:16:42.996 { 00:16:42.996 "name": "BaseBdev4", 00:16:42.996 "uuid": "49408e80-19a7-47b4-bb7a-d7237bb537f0", 00:16:42.996 "is_configured": true, 00:16:42.996 "data_offset": 0, 00:16:42.996 "data_size": 65536 00:16:42.996 } 00:16:42.996 ] 00:16:42.996 }' 00:16:42.996 11:27:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.996 11:27:01 -- common/autotest_common.sh@10 -- # set +x 00:16:43.255 11:27:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:16:43.514 [2024-11-26 11:27:01.565913] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.514 [2024-11-26 11:27:01.566105] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.514 [2024-11-26 11:27:01.566284] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.514 11:27:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.774 11:27:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.774 "name": "Existed_Raid", 00:16:43.774 "uuid": "4eb58375-d600-4227-acd8-57a1e7e15ae6", 00:16:43.774 "strip_size_kb": 64, 00:16:43.774 "state": "offline", 00:16:43.774 "raid_level": "concat", 00:16:43.774 "superblock": false, 00:16:43.774 "num_base_bdevs": 4, 00:16:43.774 "num_base_bdevs_discovered": 3, 00:16:43.774 "num_base_bdevs_operational": 3, 00:16:43.774 "base_bdevs_list": [ 00:16:43.774 { 00:16:43.774 "name": null, 00:16:43.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.774 "is_configured": false, 00:16:43.774 "data_offset": 0, 00:16:43.774 "data_size": 65536 00:16:43.774 }, 00:16:43.774 { 00:16:43.774 "name": "BaseBdev2", 00:16:43.774 "uuid": "f9285829-afd8-4787-adb8-65cb448e3762", 00:16:43.774 "is_configured": true, 00:16:43.774 "data_offset": 0, 00:16:43.774 "data_size": 65536 00:16:43.774 }, 00:16:43.774 { 00:16:43.774 "name": "BaseBdev3", 00:16:43.774 "uuid": "ea9ad616-c930-402e-882c-a5a7ef8b303a", 00:16:43.774 "is_configured": true, 00:16:43.774 "data_offset": 0, 00:16:43.774 "data_size": 65536 00:16:43.774 }, 00:16:43.774 { 00:16:43.774 "name": "BaseBdev4", 00:16:43.774 "uuid": "49408e80-19a7-47b4-bb7a-d7237bb537f0", 00:16:43.774 "is_configured": true, 00:16:43.774 "data_offset": 0, 00:16:43.774 "data_size": 65536 00:16:43.774 } 00:16:43.774 ] 00:16:43.774 }' 00:16:43.774 11:27:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.774 11:27:01 -- common/autotest_common.sh@10 -- # set +x 00:16:44.033 11:27:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:44.033 11:27:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.033 11:27:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:44.033 11:27:02 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.292 11:27:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:44.292 11:27:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.292 11:27:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:44.550 [2024-11-26 11:27:02.718033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.550 11:27:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:44.550 11:27:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.550 11:27:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.550 11:27:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:44.808 11:27:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:44.808 11:27:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.808 11:27:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:45.066 [2024-11-26 11:27:03.177515] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:45.066 11:27:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:45.066 11:27:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:45.066 11:27:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.066 11:27:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:45.325 11:27:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:45.325 11:27:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.325 11:27:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:45.584 [2024-11-26 11:27:03.584671] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:45.584 [2024-11-26 11:27:03.584728] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:45.584 11:27:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:45.584 11:27:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:45.584 11:27:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.584 11:27:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:45.844 11:27:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:45.844 11:27:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:45.844 11:27:03 -- bdev/bdev_raid.sh@287 -- # killprocess 85640 00:16:45.844 11:27:03 -- common/autotest_common.sh@936 -- # '[' -z 85640 ']' 00:16:45.844 11:27:03 -- common/autotest_common.sh@940 -- # kill -0 85640 00:16:45.844 11:27:03 -- common/autotest_common.sh@941 -- # uname 00:16:45.844 11:27:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.844 11:27:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85640 00:16:45.844 killing process with pid 85640 00:16:45.844 11:27:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.844 11:27:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.844 11:27:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85640' 00:16:45.844 11:27:03 -- 
common/autotest_common.sh@955 -- # kill 85640 00:16:45.844 [2024-11-26 11:27:03.904448] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.844 11:27:03 -- common/autotest_common.sh@960 -- # wait 85640 00:16:45.844 [2024-11-26 11:27:03.904522] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.104 ************************************ 00:16:46.104 END TEST raid_state_function_test 00:16:46.104 ************************************ 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:46.104 00:16:46.104 real 0m10.916s 00:16:46.104 user 0m19.279s 00:16:46.104 sys 0m1.712s 00:16:46.104 11:27:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:46.104 11:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:46.104 11:27:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:46.104 11:27:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.104 11:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 ************************************ 00:16:46.104 START TEST raid_state_function_test_sb 00:16:46.104 ************************************ 00:16:46.104 11:27:04 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@226 
-- # raid_pid=86029 00:16:46.104 Process raid pid: 86029 00:16:46.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 86029' 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 86029 /var/tmp/spdk-raid.sock 00:16:46.104 11:27:04 -- common/autotest_common.sh@829 -- # '[' -z 86029 ']' 00:16:46.104 11:27:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.104 11:27:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:46.104 11:27:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.104 11:27:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.104 11:27:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.104 11:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 [2024-11-26 11:27:04.197891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.104 [2024-11-26 11:27:04.198232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.364 [2024-11-26 11:27:04.354594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.364 [2024-11-26 11:27:04.389079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.364 [2024-11-26 11:27:04.420671] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.303 11:27:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.303 11:27:05 -- common/autotest_common.sh@862 -- # return 0 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:47.303 [2024-11-26 11:27:05.364747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.303 [2024-11-26 11:27:05.364821] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.303 [2024-11-26 11:27:05.364862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.303 [2024-11-26 11:27:05.364892] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.303 [2024-11-26 11:27:05.364922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.303 [2024-11-26 11:27:05.364956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.303 [2024-11-26 11:27:05.364972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.303 [2024-11-26 11:27:05.364983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:47.303 
11:27:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.303 11:27:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.304 11:27:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.304 11:27:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.304 11:27:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.304 11:27:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.562 11:27:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.562 "name": "Existed_Raid", 00:16:47.562 "uuid": "a61f6213-7ee9-4184-a195-a09896953c6f", 00:16:47.562 "strip_size_kb": 64, 00:16:47.562 "state": "configuring", 00:16:47.562 "raid_level": "concat", 00:16:47.562 "superblock": true, 00:16:47.562 "num_base_bdevs": 4, 00:16:47.562 "num_base_bdevs_discovered": 0, 00:16:47.562 "num_base_bdevs_operational": 4, 00:16:47.562 "base_bdevs_list": [ 00:16:47.562 { 00:16:47.562 "name": "BaseBdev1", 00:16:47.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.562 "is_configured": false, 00:16:47.562 "data_offset": 0, 00:16:47.562 "data_size": 0 00:16:47.562 }, 00:16:47.562 { 00:16:47.562 "name": "BaseBdev2", 00:16:47.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.562 "is_configured": false, 00:16:47.562 "data_offset": 0, 00:16:47.562 "data_size": 0 00:16:47.562 }, 00:16:47.562 { 00:16:47.562 "name": "BaseBdev3", 00:16:47.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.562 "is_configured": false, 00:16:47.562 "data_offset": 0, 00:16:47.562 "data_size": 0 00:16:47.562 }, 00:16:47.562 { 00:16:47.562 "name": "BaseBdev4", 00:16:47.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.562 "is_configured": false, 00:16:47.562 "data_offset": 0, 00:16:47.562 "data_size": 0 00:16:47.562 } 00:16:47.562 ] 00:16:47.562 }' 00:16:47.562 11:27:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.562 11:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:47.821 11:27:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:48.080 [2024-11-26 11:27:06.164862] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:48.080 [2024-11-26 11:27:06.164959] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:48.080 11:27:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:48.339 [2024-11-26 11:27:06.364996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.339 [2024-11-26 11:27:06.365180] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.339 [2024-11-26 11:27:06.365226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.339 [2024-11-26 11:27:06.365241] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.339 [2024-11-26 11:27:06.365253] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.339 [2024-11-26 11:27:06.365267] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.339 [2024-11-26 11:27:06.365279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.339 [2024-11-26 11:27:06.365289] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.339 11:27:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.598 [2024-11-26 11:27:06.623193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.598 BaseBdev1 00:16:48.598 11:27:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:48.598 11:27:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:48.598 11:27:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.598 11:27:06 -- common/autotest_common.sh@899 -- # local i 00:16:48.598 11:27:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.598 11:27:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.598 11:27:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.857 11:27:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.857 [ 00:16:48.857 { 00:16:48.857 "name": "BaseBdev1", 00:16:48.857 "aliases": [ 00:16:48.857 "08bee146-ba46-49f9-a078-6a23252cc467" 00:16:48.857 ], 00:16:48.857 "product_name": "Malloc disk", 00:16:48.857 "block_size": 512, 00:16:48.857 "num_blocks": 65536, 00:16:48.857 "uuid": "08bee146-ba46-49f9-a078-6a23252cc467", 00:16:48.857 "assigned_rate_limits": { 00:16:48.857 "rw_ios_per_sec": 0, 00:16:48.857 "rw_mbytes_per_sec": 0, 00:16:48.857 "r_mbytes_per_sec": 0, 00:16:48.857 "w_mbytes_per_sec": 0 00:16:48.857 }, 00:16:48.857 "claimed": true, 00:16:48.857 "claim_type": "exclusive_write", 00:16:48.857 "zoned": false, 00:16:48.857 "supported_io_types": { 00:16:48.857 "read": true, 00:16:48.857 "write": true, 00:16:48.857 "unmap": true, 00:16:48.857 "write_zeroes": true, 00:16:48.857 "flush": true, 00:16:48.857 "reset": true, 00:16:48.857 "compare": false, 00:16:48.857 "compare_and_write": false, 00:16:48.857 "abort": true, 00:16:48.857 "nvme_admin": false, 00:16:48.857 "nvme_io": false 00:16:48.857 }, 00:16:48.857 "memory_domains": [ 00:16:48.857 { 00:16:48.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.857 "dma_device_type": 2 00:16:48.857 } 00:16:48.857 ], 00:16:48.857 "driver_specific": {} 00:16:48.857 } 00:16:48.857 ] 00:16:48.857 11:27:07 -- common/autotest_common.sh@905 -- # return 0 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.857 11:27:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.858 11:27:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.858 11:27:07 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.116 11:27:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.116 11:27:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.116 11:27:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.116 "name": "Existed_Raid", 00:16:49.116 "uuid": "0109b331-db6e-4f0f-880c-4582fe818cea", 00:16:49.116 "strip_size_kb": 64, 00:16:49.116 "state": "configuring", 00:16:49.116 "raid_level": "concat", 00:16:49.116 "superblock": true, 00:16:49.116 "num_base_bdevs": 4, 00:16:49.116 "num_base_bdevs_discovered": 1, 00:16:49.116 "num_base_bdevs_operational": 4, 00:16:49.116 "base_bdevs_list": [ 00:16:49.116 { 00:16:49.116 "name": "BaseBdev1", 00:16:49.116 "uuid": "08bee146-ba46-49f9-a078-6a23252cc467", 00:16:49.116 "is_configured": true, 00:16:49.116 "data_offset": 2048, 00:16:49.116 "data_size": 63488 00:16:49.116 }, 00:16:49.116 { 00:16:49.116 "name": "BaseBdev2", 00:16:49.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.116 "is_configured": false, 00:16:49.116 "data_offset": 0, 00:16:49.116 "data_size": 0 00:16:49.116 }, 00:16:49.116 { 00:16:49.117 "name": "BaseBdev3", 00:16:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.117 "is_configured": false, 00:16:49.117 "data_offset": 0, 00:16:49.117 "data_size": 0 00:16:49.117 }, 00:16:49.117 { 00:16:49.117 "name": "BaseBdev4", 00:16:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.117 "is_configured": false, 00:16:49.117 "data_offset": 0, 00:16:49.117 "data_size": 0 00:16:49.117 } 00:16:49.117 ] 00:16:49.117 }' 00:16:49.117 11:27:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.117 11:27:07 -- common/autotest_common.sh@10 -- # set +x 00:16:49.684 11:27:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.684 [2024-11-26 11:27:07.823682] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.684 [2024-11-26 11:27:07.823753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:49.684 11:27:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:49.684 11:27:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:49.943 11:27:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.202 BaseBdev1 00:16:50.202 11:27:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:50.202 11:27:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:50.202 11:27:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:50.202 11:27:08 -- common/autotest_common.sh@899 -- # local i 00:16:50.202 11:27:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:50.202 11:27:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:50.202 11:27:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.461 11:27:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.720 [ 00:16:50.720 { 00:16:50.720 "name": "BaseBdev1", 00:16:50.720 "aliases": [ 00:16:50.720 
"ec97c9e6-d635-43ba-a34a-601ca37095b3" 00:16:50.720 ], 00:16:50.720 "product_name": "Malloc disk", 00:16:50.720 "block_size": 512, 00:16:50.720 "num_blocks": 65536, 00:16:50.720 "uuid": "ec97c9e6-d635-43ba-a34a-601ca37095b3", 00:16:50.720 "assigned_rate_limits": { 00:16:50.720 "rw_ios_per_sec": 0, 00:16:50.720 "rw_mbytes_per_sec": 0, 00:16:50.720 "r_mbytes_per_sec": 0, 00:16:50.720 "w_mbytes_per_sec": 0 00:16:50.720 }, 00:16:50.720 "claimed": false, 00:16:50.720 "zoned": false, 00:16:50.720 "supported_io_types": { 00:16:50.720 "read": true, 00:16:50.720 "write": true, 00:16:50.720 "unmap": true, 00:16:50.720 "write_zeroes": true, 00:16:50.720 "flush": true, 00:16:50.720 "reset": true, 00:16:50.720 "compare": false, 00:16:50.720 "compare_and_write": false, 00:16:50.720 "abort": true, 00:16:50.720 "nvme_admin": false, 00:16:50.720 "nvme_io": false 00:16:50.720 }, 00:16:50.720 "memory_domains": [ 00:16:50.720 { 00:16:50.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.720 "dma_device_type": 2 00:16:50.720 } 00:16:50.720 ], 00:16:50.720 "driver_specific": {} 00:16:50.720 } 00:16:50.720 ] 00:16:50.720 11:27:08 -- common/autotest_common.sh@905 -- # return 0 00:16:50.720 11:27:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:50.979 [2024-11-26 11:27:09.022834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.979 [2024-11-26 11:27:09.024812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.979 [2024-11-26 11:27:09.024857] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.979 [2024-11-26 11:27:09.024920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.979 [2024-11-26 11:27:09.024951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.979 [2024-11-26 11:27:09.024975] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:50.979 [2024-11-26 11:27:09.025006] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.979 11:27:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.238 11:27:09 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:51.238 "name": "Existed_Raid", 00:16:51.238 "uuid": "8de0686c-5a17-451e-853c-8515d952877d", 00:16:51.238 "strip_size_kb": 64, 00:16:51.238 "state": "configuring", 00:16:51.238 "raid_level": "concat", 00:16:51.238 "superblock": true, 00:16:51.238 "num_base_bdevs": 4, 00:16:51.238 "num_base_bdevs_discovered": 1, 00:16:51.238 "num_base_bdevs_operational": 4, 00:16:51.238 "base_bdevs_list": [ 00:16:51.238 { 00:16:51.238 "name": "BaseBdev1", 00:16:51.238 "uuid": "ec97c9e6-d635-43ba-a34a-601ca37095b3", 00:16:51.238 "is_configured": true, 00:16:51.238 "data_offset": 2048, 00:16:51.238 "data_size": 63488 00:16:51.238 }, 00:16:51.238 { 00:16:51.238 "name": "BaseBdev2", 00:16:51.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.238 "is_configured": false, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 0 00:16:51.238 }, 00:16:51.238 { 00:16:51.238 "name": "BaseBdev3", 00:16:51.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.238 "is_configured": false, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 0 00:16:51.238 }, 00:16:51.238 { 00:16:51.238 "name": "BaseBdev4", 00:16:51.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.238 "is_configured": false, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 0 00:16:51.238 } 00:16:51.238 ] 00:16:51.238 }' 00:16:51.238 11:27:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.238 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:16:51.497 11:27:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:51.756 [2024-11-26 11:27:09.794767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.756 BaseBdev2 00:16:51.756 11:27:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:51.756 11:27:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:51.756 11:27:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.756 11:27:09 -- common/autotest_common.sh@899 -- # local i 00:16:51.756 11:27:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.756 11:27:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.756 11:27:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.014 11:27:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.273 [ 00:16:52.273 { 00:16:52.273 "name": "BaseBdev2", 00:16:52.273 "aliases": [ 00:16:52.273 "321efc5a-0dc3-4793-aa06-30996e6e6de6" 00:16:52.273 ], 00:16:52.273 "product_name": "Malloc disk", 00:16:52.273 "block_size": 512, 00:16:52.273 "num_blocks": 65536, 00:16:52.273 "uuid": "321efc5a-0dc3-4793-aa06-30996e6e6de6", 00:16:52.273 "assigned_rate_limits": { 00:16:52.273 "rw_ios_per_sec": 0, 00:16:52.273 "rw_mbytes_per_sec": 0, 00:16:52.273 "r_mbytes_per_sec": 0, 00:16:52.273 "w_mbytes_per_sec": 0 00:16:52.273 }, 00:16:52.273 "claimed": true, 00:16:52.273 "claim_type": "exclusive_write", 00:16:52.273 "zoned": false, 00:16:52.273 "supported_io_types": { 00:16:52.273 "read": true, 00:16:52.273 "write": true, 00:16:52.273 "unmap": true, 00:16:52.273 "write_zeroes": true, 00:16:52.273 "flush": true, 00:16:52.273 "reset": true, 00:16:52.273 "compare": false, 00:16:52.273 "compare_and_write": false, 00:16:52.273 "abort": true, 00:16:52.273 "nvme_admin": false, 00:16:52.273 
"nvme_io": false 00:16:52.273 }, 00:16:52.273 "memory_domains": [ 00:16:52.273 { 00:16:52.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.274 "dma_device_type": 2 00:16:52.274 } 00:16:52.274 ], 00:16:52.274 "driver_specific": {} 00:16:52.274 } 00:16:52.274 ] 00:16:52.274 11:27:10 -- common/autotest_common.sh@905 -- # return 0 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.274 "name": "Existed_Raid", 00:16:52.274 "uuid": "8de0686c-5a17-451e-853c-8515d952877d", 00:16:52.274 "strip_size_kb": 64, 00:16:52.274 "state": "configuring", 00:16:52.274 "raid_level": "concat", 00:16:52.274 "superblock": true, 00:16:52.274 "num_base_bdevs": 4, 00:16:52.274 "num_base_bdevs_discovered": 2, 00:16:52.274 "num_base_bdevs_operational": 4, 00:16:52.274 "base_bdevs_list": [ 00:16:52.274 { 00:16:52.274 "name": "BaseBdev1", 00:16:52.274 "uuid": "ec97c9e6-d635-43ba-a34a-601ca37095b3", 00:16:52.274 "is_configured": true, 00:16:52.274 "data_offset": 2048, 00:16:52.274 "data_size": 63488 00:16:52.274 }, 00:16:52.274 { 00:16:52.274 "name": "BaseBdev2", 00:16:52.274 "uuid": "321efc5a-0dc3-4793-aa06-30996e6e6de6", 00:16:52.274 "is_configured": true, 00:16:52.274 "data_offset": 2048, 00:16:52.274 "data_size": 63488 00:16:52.274 }, 00:16:52.274 { 00:16:52.274 "name": "BaseBdev3", 00:16:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.274 "is_configured": false, 00:16:52.274 "data_offset": 0, 00:16:52.274 "data_size": 0 00:16:52.274 }, 00:16:52.274 { 00:16:52.274 "name": "BaseBdev4", 00:16:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.274 "is_configured": false, 00:16:52.274 "data_offset": 0, 00:16:52.274 "data_size": 0 00:16:52.274 } 00:16:52.274 ] 00:16:52.274 }' 00:16:52.274 11:27:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.274 11:27:10 -- common/autotest_common.sh@10 -- # set +x 00:16:52.533 11:27:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.802 [2024-11-26 11:27:10.975622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.802 BaseBdev3 00:16:52.802 11:27:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:52.802 11:27:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:52.802 11:27:10 -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:52.802 11:27:10 -- common/autotest_common.sh@899 -- # local i 00:16:52.802 11:27:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:52.802 11:27:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:52.802 11:27:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.072 11:27:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.332 [ 00:16:53.332 { 00:16:53.332 "name": "BaseBdev3", 00:16:53.332 "aliases": [ 00:16:53.332 "f54bb455-e137-47d8-8b60-c2ffd0eccfd3" 00:16:53.332 ], 00:16:53.332 "product_name": "Malloc disk", 00:16:53.332 "block_size": 512, 00:16:53.332 "num_blocks": 65536, 00:16:53.332 "uuid": "f54bb455-e137-47d8-8b60-c2ffd0eccfd3", 00:16:53.332 "assigned_rate_limits": { 00:16:53.332 "rw_ios_per_sec": 0, 00:16:53.332 "rw_mbytes_per_sec": 0, 00:16:53.332 "r_mbytes_per_sec": 0, 00:16:53.332 "w_mbytes_per_sec": 0 00:16:53.332 }, 00:16:53.332 "claimed": true, 00:16:53.332 "claim_type": "exclusive_write", 00:16:53.332 "zoned": false, 00:16:53.332 "supported_io_types": { 00:16:53.332 "read": true, 00:16:53.332 "write": true, 00:16:53.332 "unmap": true, 00:16:53.332 "write_zeroes": true, 00:16:53.332 "flush": true, 00:16:53.332 "reset": true, 00:16:53.332 "compare": false, 00:16:53.332 "compare_and_write": false, 00:16:53.332 "abort": true, 00:16:53.332 "nvme_admin": false, 00:16:53.332 "nvme_io": false 00:16:53.332 }, 00:16:53.332 "memory_domains": [ 00:16:53.332 { 00:16:53.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.332 "dma_device_type": 2 00:16:53.332 } 00:16:53.332 ], 00:16:53.332 "driver_specific": {} 00:16:53.332 } 00:16:53.332 ] 00:16:53.332 11:27:11 -- common/autotest_common.sh@905 -- # return 0 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.332 11:27:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.591 11:27:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.591 "name": "Existed_Raid", 00:16:53.591 "uuid": "8de0686c-5a17-451e-853c-8515d952877d", 00:16:53.591 "strip_size_kb": 64, 00:16:53.591 "state": "configuring", 00:16:53.591 "raid_level": "concat", 00:16:53.591 "superblock": true, 00:16:53.591 "num_base_bdevs": 4, 00:16:53.591 "num_base_bdevs_discovered": 3, 00:16:53.591 "num_base_bdevs_operational": 4, 
00:16:53.591 "base_bdevs_list": [ 00:16:53.591 { 00:16:53.591 "name": "BaseBdev1", 00:16:53.591 "uuid": "ec97c9e6-d635-43ba-a34a-601ca37095b3", 00:16:53.591 "is_configured": true, 00:16:53.591 "data_offset": 2048, 00:16:53.591 "data_size": 63488 00:16:53.591 }, 00:16:53.591 { 00:16:53.591 "name": "BaseBdev2", 00:16:53.591 "uuid": "321efc5a-0dc3-4793-aa06-30996e6e6de6", 00:16:53.591 "is_configured": true, 00:16:53.591 "data_offset": 2048, 00:16:53.591 "data_size": 63488 00:16:53.591 }, 00:16:53.591 { 00:16:53.591 "name": "BaseBdev3", 00:16:53.591 "uuid": "f54bb455-e137-47d8-8b60-c2ffd0eccfd3", 00:16:53.591 "is_configured": true, 00:16:53.591 "data_offset": 2048, 00:16:53.591 "data_size": 63488 00:16:53.591 }, 00:16:53.591 { 00:16:53.591 "name": "BaseBdev4", 00:16:53.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.591 "is_configured": false, 00:16:53.591 "data_offset": 0, 00:16:53.591 "data_size": 0 00:16:53.591 } 00:16:53.591 ] 00:16:53.591 }' 00:16:53.591 11:27:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.591 11:27:11 -- common/autotest_common.sh@10 -- # set +x 00:16:53.850 11:27:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:54.109 [2024-11-26 11:27:12.217032] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.109 [2024-11-26 11:27:12.217553] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:54.109 [2024-11-26 11:27:12.217741] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:54.109 [2024-11-26 11:27:12.217918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:16:54.109 [2024-11-26 11:27:12.218326] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:54.109 BaseBdev4 00:16:54.109 [2024-11-26 11:27:12.218525] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:54.109 [2024-11-26 11:27:12.218683] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.109 11:27:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:54.109 11:27:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:54.109 11:27:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.109 11:27:12 -- common/autotest_common.sh@899 -- # local i 00:16:54.109 11:27:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.109 11:27:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.110 11:27:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.368 11:27:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:54.626 [ 00:16:54.627 { 00:16:54.627 "name": "BaseBdev4", 00:16:54.627 "aliases": [ 00:16:54.627 "c28c51cb-c84f-45cd-b2ef-41a008495bb0" 00:16:54.627 ], 00:16:54.627 "product_name": "Malloc disk", 00:16:54.627 "block_size": 512, 00:16:54.627 "num_blocks": 65536, 00:16:54.627 "uuid": "c28c51cb-c84f-45cd-b2ef-41a008495bb0", 00:16:54.627 "assigned_rate_limits": { 00:16:54.627 "rw_ios_per_sec": 0, 00:16:54.627 "rw_mbytes_per_sec": 0, 00:16:54.627 "r_mbytes_per_sec": 0, 00:16:54.627 "w_mbytes_per_sec": 0 00:16:54.627 }, 00:16:54.627 "claimed": true, 00:16:54.627 "claim_type": 
"exclusive_write", 00:16:54.627 "zoned": false, 00:16:54.627 "supported_io_types": { 00:16:54.627 "read": true, 00:16:54.627 "write": true, 00:16:54.627 "unmap": true, 00:16:54.627 "write_zeroes": true, 00:16:54.627 "flush": true, 00:16:54.627 "reset": true, 00:16:54.627 "compare": false, 00:16:54.627 "compare_and_write": false, 00:16:54.627 "abort": true, 00:16:54.627 "nvme_admin": false, 00:16:54.627 "nvme_io": false 00:16:54.627 }, 00:16:54.627 "memory_domains": [ 00:16:54.627 { 00:16:54.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.627 "dma_device_type": 2 00:16:54.627 } 00:16:54.627 ], 00:16:54.627 "driver_specific": {} 00:16:54.627 } 00:16:54.627 ] 00:16:54.627 11:27:12 -- common/autotest_common.sh@905 -- # return 0 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.627 "name": "Existed_Raid", 00:16:54.627 "uuid": "8de0686c-5a17-451e-853c-8515d952877d", 00:16:54.627 "strip_size_kb": 64, 00:16:54.627 "state": "online", 00:16:54.627 "raid_level": "concat", 00:16:54.627 "superblock": true, 00:16:54.627 "num_base_bdevs": 4, 00:16:54.627 "num_base_bdevs_discovered": 4, 00:16:54.627 "num_base_bdevs_operational": 4, 00:16:54.627 "base_bdevs_list": [ 00:16:54.627 { 00:16:54.627 "name": "BaseBdev1", 00:16:54.627 "uuid": "ec97c9e6-d635-43ba-a34a-601ca37095b3", 00:16:54.627 "is_configured": true, 00:16:54.627 "data_offset": 2048, 00:16:54.627 "data_size": 63488 00:16:54.627 }, 00:16:54.627 { 00:16:54.627 "name": "BaseBdev2", 00:16:54.627 "uuid": "321efc5a-0dc3-4793-aa06-30996e6e6de6", 00:16:54.627 "is_configured": true, 00:16:54.627 "data_offset": 2048, 00:16:54.627 "data_size": 63488 00:16:54.627 }, 00:16:54.627 { 00:16:54.627 "name": "BaseBdev3", 00:16:54.627 "uuid": "f54bb455-e137-47d8-8b60-c2ffd0eccfd3", 00:16:54.627 "is_configured": true, 00:16:54.627 "data_offset": 2048, 00:16:54.627 "data_size": 63488 00:16:54.627 }, 00:16:54.627 { 00:16:54.627 "name": "BaseBdev4", 00:16:54.627 "uuid": "c28c51cb-c84f-45cd-b2ef-41a008495bb0", 00:16:54.627 "is_configured": true, 00:16:54.627 "data_offset": 2048, 00:16:54.627 "data_size": 63488 00:16:54.627 } 00:16:54.627 ] 00:16:54.627 }' 00:16:54.627 11:27:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.627 11:27:12 -- common/autotest_common.sh@10 -- # set +x 00:16:55.195 11:27:13 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:55.195 [2024-11-26 11:27:13.425616] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.195 [2024-11-26 11:27:13.425658] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.195 [2024-11-26 11:27:13.425743] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.454 11:27:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.712 11:27:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.712 "name": "Existed_Raid", 00:16:55.712 "uuid": "8de0686c-5a17-451e-853c-8515d952877d", 00:16:55.712 "strip_size_kb": 64, 00:16:55.712 "state": "offline", 00:16:55.712 "raid_level": "concat", 00:16:55.712 "superblock": true, 00:16:55.713 "num_base_bdevs": 4, 00:16:55.713 "num_base_bdevs_discovered": 3, 00:16:55.713 "num_base_bdevs_operational": 3, 00:16:55.713 "base_bdevs_list": [ 00:16:55.713 { 00:16:55.713 "name": null, 00:16:55.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.713 "is_configured": false, 00:16:55.713 "data_offset": 2048, 00:16:55.713 "data_size": 63488 00:16:55.713 }, 00:16:55.713 { 00:16:55.713 "name": "BaseBdev2", 00:16:55.713 "uuid": "321efc5a-0dc3-4793-aa06-30996e6e6de6", 00:16:55.713 "is_configured": true, 00:16:55.713 "data_offset": 2048, 00:16:55.713 "data_size": 63488 00:16:55.713 }, 00:16:55.713 { 00:16:55.713 "name": "BaseBdev3", 00:16:55.713 "uuid": "f54bb455-e137-47d8-8b60-c2ffd0eccfd3", 00:16:55.713 "is_configured": true, 00:16:55.713 "data_offset": 2048, 00:16:55.713 "data_size": 63488 00:16:55.713 }, 00:16:55.713 { 00:16:55.713 "name": "BaseBdev4", 00:16:55.713 "uuid": "c28c51cb-c84f-45cd-b2ef-41a008495bb0", 00:16:55.713 "is_configured": true, 00:16:55.713 "data_offset": 2048, 00:16:55.713 "data_size": 63488 00:16:55.713 } 00:16:55.713 ] 00:16:55.713 }' 00:16:55.713 11:27:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.713 11:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:55.971 11:27:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:55.971 11:27:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:55.971 11:27:14 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.971 11:27:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:56.229 11:27:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:56.229 11:27:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.229 11:27:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:56.229 [2024-11-26 11:27:14.449414] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.488 11:27:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:56.747 [2024-11-26 11:27:14.888282] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:56.747 11:27:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:56.747 11:27:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:56.747 11:27:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.747 11:27:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:57.006 11:27:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:57.006 11:27:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.006 11:27:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:57.264 [2024-11-26 11:27:15.379441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:57.264 [2024-11-26 11:27:15.379496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:57.264 11:27:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:57.264 11:27:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:57.264 11:27:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:57.264 11:27:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.523 11:27:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:57.523 11:27:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:57.523 11:27:15 -- bdev/bdev_raid.sh@287 -- # killprocess 86029 00:16:57.523 11:27:15 -- common/autotest_common.sh@936 -- # '[' -z 86029 ']' 00:16:57.523 11:27:15 -- common/autotest_common.sh@940 -- # kill -0 86029 00:16:57.523 11:27:15 -- common/autotest_common.sh@941 -- # uname 00:16:57.523 11:27:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.523 11:27:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86029 00:16:57.523 killing process with pid 86029 00:16:57.523 11:27:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:57.523 11:27:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:57.523 11:27:15 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 86029' 00:16:57.523 11:27:15 -- common/autotest_common.sh@955 -- # kill 86029 00:16:57.523 [2024-11-26 11:27:15.631873] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.523 [2024-11-26 11:27:15.631984] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.523 11:27:15 -- common/autotest_common.sh@960 -- # wait 86029 00:16:57.783 ************************************ 00:16:57.783 END TEST raid_state_function_test_sb 00:16:57.783 ************************************ 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:57.783 00:16:57.783 real 0m11.674s 00:16:57.783 user 0m20.647s 00:16:57.783 sys 0m1.811s 00:16:57.783 11:27:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:57.783 11:27:15 -- common/autotest_common.sh@10 -- # set +x 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:57.783 11:27:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:57.783 11:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:57.783 11:27:15 -- common/autotest_common.sh@10 -- # set +x 00:16:57.783 ************************************ 00:16:57.783 START TEST raid_superblock_test 00:16:57.783 ************************************ 00:16:57.783 11:27:15 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=86420 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:57.783 11:27:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 86420 /var/tmp/spdk-raid.sock 00:16:57.783 11:27:15 -- common/autotest_common.sh@829 -- # '[' -z 86420 ']' 00:16:57.783 11:27:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:57.783 11:27:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:57.783 11:27:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
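The trace above shows the standard bring-up these tests use: bdev_svc is launched with a private RPC socket and waitforlisten blocks until that socket answers. A condensed sketch of the same pattern, assuming the repo paths of this run (the readiness probe via rpc_get_methods is an assumption about what waitforlisten polls, not taken from this log):

  # sketch: start the bdev service app with a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # poll until the socket accepts RPCs (one plausible readiness check)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done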
00:16:57.783 11:27:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.783 11:27:15 -- common/autotest_common.sh@10 -- # set +x 00:16:57.783 [2024-11-26 11:27:15.924614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.783 [2024-11-26 11:27:15.924787] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86420 ] 00:16:58.043 [2024-11-26 11:27:16.080350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.043 [2024-11-26 11:27:16.115869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.043 [2024-11-26 11:27:16.147299] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.612 11:27:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.612 11:27:16 -- common/autotest_common.sh@862 -- # return 0 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.612 11:27:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:58.869 malloc1 00:16:58.869 11:27:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.127 [2024-11-26 11:27:17.277839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.127 [2024-11-26 11:27:17.277982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.127 [2024-11-26 11:27:17.278039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:59.127 [2024-11-26 11:27:17.278060] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.127 [2024-11-26 11:27:17.280901] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.127 [2024-11-26 11:27:17.281031] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.127 pt1 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:59.127 11:27:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:59.394 malloc2 00:16:59.394 11:27:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.659 [2024-11-26 11:27:17.689457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.659 [2024-11-26 11:27:17.689711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.659 [2024-11-26 11:27:17.689770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:59.659 [2024-11-26 11:27:17.689801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.659 [2024-11-26 11:27:17.692227] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.659 [2024-11-26 11:27:17.692267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.659 pt2 00:16:59.659 11:27:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:59.659 11:27:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:59.659 11:27:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:59.659 11:27:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:59.660 11:27:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:59.660 11:27:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:59.660 11:27:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:59.660 11:27:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:59.660 11:27:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:59.918 malloc3 00:16:59.918 11:27:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.918 [2024-11-26 11:27:18.112615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.918 [2024-11-26 11:27:18.112715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.918 [2024-11-26 11:27:18.112747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:59.918 [2024-11-26 11:27:18.112761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.918 [2024-11-26 11:27:18.115167] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.918 [2024-11-26 11:27:18.115369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.918 pt3 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:59.918 11:27:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:00.176 malloc4 00:17:00.176 11:27:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:00.434 [2024-11-26 11:27:18.560000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:00.434 [2024-11-26 11:27:18.560246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.434 [2024-11-26 11:27:18.560296] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:00.434 [2024-11-26 11:27:18.560311] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.434 [2024-11-26 11:27:18.562796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.434 [2024-11-26 11:27:18.562836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:00.434 pt4 00:17:00.434 11:27:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:00.434 11:27:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:00.434 11:27:18 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:00.692 [2024-11-26 11:27:18.772165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.692 [2024-11-26 11:27:18.774505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.692 [2024-11-26 11:27:18.774795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:00.692 [2024-11-26 11:27:18.775069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:00.692 [2024-11-26 11:27:18.775463] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:17:00.692 [2024-11-26 11:27:18.775635] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:00.692 [2024-11-26 11:27:18.775805] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:00.692 [2024-11-26 11:27:18.776242] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:17:00.692 [2024-11-26 11:27:18.776365] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:17:00.692 [2024-11-26 11:27:18.776752] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
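For reference, the stack the script builds here is plain JSON-RPC against the test target, so it can be reproduced by hand. A minimal sketch, assuming a target already listening on /var/tmp/spdk-raid.sock and the rpc.py path from this run (the loop and the $rpc shorthand are illustrative, not part of the captured script):

    # shorthand for the RPC client used throughout this test
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks, as in the trace
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # passthru bdev layered on top, with a fixed per-member UUID
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # 64 KiB strip, concat level, superblock enabled (-s)
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s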
00:17:00.692 11:27:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.950 11:27:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.950 "name": "raid_bdev1", 00:17:00.951 "uuid": "e705aee3-03fd-492b-b052-476460672156", 00:17:00.951 "strip_size_kb": 64, 00:17:00.951 "state": "online", 00:17:00.951 "raid_level": "concat", 00:17:00.951 "superblock": true, 00:17:00.951 "num_base_bdevs": 4, 00:17:00.951 "num_base_bdevs_discovered": 4, 00:17:00.951 "num_base_bdevs_operational": 4, 00:17:00.951 "base_bdevs_list": [ 00:17:00.951 { 00:17:00.951 "name": "pt1", 00:17:00.951 "uuid": "00a9a60e-a987-5b62-9dcd-a1d1858fbdf7", 00:17:00.951 "is_configured": true, 00:17:00.951 "data_offset": 2048, 00:17:00.951 "data_size": 63488 00:17:00.951 }, 00:17:00.951 { 00:17:00.951 "name": "pt2", 00:17:00.951 "uuid": "c1b63fea-e8b1-5e64-a3f6-74ee816fb0c0", 00:17:00.951 "is_configured": true, 00:17:00.951 "data_offset": 2048, 00:17:00.951 "data_size": 63488 00:17:00.951 }, 00:17:00.951 { 00:17:00.951 "name": "pt3", 00:17:00.951 "uuid": "49801800-7eb4-554f-9d66-a1450f9199f9", 00:17:00.951 "is_configured": true, 00:17:00.951 "data_offset": 2048, 00:17:00.951 "data_size": 63488 00:17:00.951 }, 00:17:00.951 { 00:17:00.951 "name": "pt4", 00:17:00.951 "uuid": "a0d39768-02c7-5f2d-8441-9347585a0e10", 00:17:00.951 "is_configured": true, 00:17:00.951 "data_offset": 2048, 00:17:00.951 "data_size": 63488 00:17:00.951 } 00:17:00.951 ] 00:17:00.951 }' 00:17:00.951 11:27:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.951 11:27:18 -- common/autotest_common.sh@10 -- # set +x 00:17:01.210 11:27:19 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:01.210 11:27:19 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:01.469 [2024-11-26 11:27:19.553253] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.469 11:27:19 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e705aee3-03fd-492b-b052-476460672156 00:17:01.469 11:27:19 -- bdev/bdev_raid.sh@380 -- # '[' -z e705aee3-03fd-492b-b052-476460672156 ']' 00:17:01.469 11:27:19 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:01.728 [2024-11-26 11:27:19.757094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.728 [2024-11-26 11:27:19.757139] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.728 [2024-11-26 11:27:19.757224] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.728 [2024-11-26 11:27:19.757313] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.728 [2024-11-26 11:27:19.757334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:17:01.728 11:27:19 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.728 11:27:19 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:01.986 11:27:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:01.986 11:27:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:01.986 11:27:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.986 11:27:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
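The state check that follows is just the bdev dump filtered with jq, and the teardown walks the stack top-first. A sketch of the same sequence, reusing the $rpc shorthand from the snippet above (field values such as the uuid differ per run):

    # pull the raid entry, then the uuid the test compares later
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    # delete the raid first, then each passthru member
    $rpc bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do $rpc bdev_passthru_delete "pt$i"; done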
00:17:02.244 11:27:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.244 11:27:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:02.503 11:27:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.503 11:27:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:02.503 11:27:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.503 11:27:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:02.762 11:27:20 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:02.762 11:27:20 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:03.021 11:27:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:03.021 11:27:21 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:03.021 11:27:21 -- common/autotest_common.sh@650 -- # local es=0 00:17:03.021 11:27:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:03.021 11:27:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.021 11:27:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.021 11:27:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.021 11:27:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.021 11:27:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.021 11:27:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.021 11:27:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.021 11:27:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:03.021 11:27:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:03.281 [2024-11-26 11:27:21.369503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:03.281 [2024-11-26 11:27:21.371758] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:03.281 [2024-11-26 11:27:21.371815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:03.281 [2024-11-26 11:27:21.371859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:03.281 [2024-11-26 11:27:21.371914] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:03.281 [2024-11-26 11:27:21.372003] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:03.281 [2024-11-26 11:27:21.372051] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:03.281 
[2024-11-26 11:27:21.372089] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:03.281 [2024-11-26 11:27:21.372124] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.281 [2024-11-26 11:27:21.372139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:17:03.281 request: 00:17:03.281 { 00:17:03.281 "name": "raid_bdev1", 00:17:03.281 "raid_level": "concat", 00:17:03.281 "base_bdevs": [ 00:17:03.281 "malloc1", 00:17:03.281 "malloc2", 00:17:03.281 "malloc3", 00:17:03.281 "malloc4" 00:17:03.281 ], 00:17:03.281 "superblock": false, 00:17:03.281 "strip_size_kb": 64, 00:17:03.281 "method": "bdev_raid_create", 00:17:03.281 "req_id": 1 00:17:03.281 } 00:17:03.281 Got JSON-RPC error response 00:17:03.281 response: 00:17:03.281 { 00:17:03.281 "code": -17, 00:17:03.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:03.281 } 00:17:03.281 11:27:21 -- common/autotest_common.sh@653 -- # es=1 00:17:03.281 11:27:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.281 11:27:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.281 11:27:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.281 11:27:21 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.281 11:27:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:03.557 11:27:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:03.557 11:27:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:03.557 11:27:21 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.832 [2024-11-26 11:27:21.881619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.832 [2024-11-26 11:27:21.881956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.832 [2024-11-26 11:27:21.881999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:17:03.832 [2024-11-26 11:27:21.882028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.832 [2024-11-26 11:27:21.884378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.832 [2024-11-26 11:27:21.884416] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.832 [2024-11-26 11:27:21.884510] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:03.832 [2024-11-26 11:27:21.884583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.832 pt1 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.832 11:27:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.090 11:27:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.090 "name": "raid_bdev1", 00:17:04.090 "uuid": "e705aee3-03fd-492b-b052-476460672156", 00:17:04.090 "strip_size_kb": 64, 00:17:04.090 "state": "configuring", 00:17:04.090 "raid_level": "concat", 00:17:04.090 "superblock": true, 00:17:04.090 "num_base_bdevs": 4, 00:17:04.090 "num_base_bdevs_discovered": 1, 00:17:04.090 "num_base_bdevs_operational": 4, 00:17:04.090 "base_bdevs_list": [ 00:17:04.090 { 00:17:04.090 "name": "pt1", 00:17:04.090 "uuid": "00a9a60e-a987-5b62-9dcd-a1d1858fbdf7", 00:17:04.090 "is_configured": true, 00:17:04.090 "data_offset": 2048, 00:17:04.090 "data_size": 63488 00:17:04.090 }, 00:17:04.090 { 00:17:04.090 "name": null, 00:17:04.090 "uuid": "c1b63fea-e8b1-5e64-a3f6-74ee816fb0c0", 00:17:04.090 "is_configured": false, 00:17:04.090 "data_offset": 2048, 00:17:04.090 "data_size": 63488 00:17:04.090 }, 00:17:04.090 { 00:17:04.091 "name": null, 00:17:04.091 "uuid": "49801800-7eb4-554f-9d66-a1450f9199f9", 00:17:04.091 "is_configured": false, 00:17:04.091 "data_offset": 2048, 00:17:04.091 "data_size": 63488 00:17:04.091 }, 00:17:04.091 { 00:17:04.091 "name": null, 00:17:04.091 "uuid": "a0d39768-02c7-5f2d-8441-9347585a0e10", 00:17:04.091 "is_configured": false, 00:17:04.091 "data_offset": 2048, 00:17:04.091 "data_size": 63488 00:17:04.091 } 00:17:04.091 ] 00:17:04.091 }' 00:17:04.091 11:27:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.091 11:27:22 -- common/autotest_common.sh@10 -- # set +x 00:17:04.348 11:27:22 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:04.348 11:27:22 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.608 [2024-11-26 11:27:22.653884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.608 [2024-11-26 11:27:22.654142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.608 [2024-11-26 11:27:22.654190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:17:04.608 [2024-11-26 11:27:22.654205] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.608 [2024-11-26 11:27:22.654642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.608 [2024-11-26 11:27:22.654664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.608 [2024-11-26 11:27:22.654737] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:04.608 [2024-11-26 11:27:22.654762] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.608 pt2 00:17:04.608 11:27:22 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:04.868 [2024-11-26 11:27:22.854020] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
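The negative case above relies on the superblock the deleted array left behind on malloc1..malloc4: re-creating the raid directly on them must be rejected with -17 (File exists). A sketch of that assertion, under the same assumptions as the earlier snippets:

    # must fail: the malloc bdevs still carry raid_bdev1's superblock
    if $rpc bdev_raid_create -z 64 -r concat \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected success" >&2
        exit 1
    fi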
00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.868 11:27:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.868 11:27:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.868 "name": "raid_bdev1", 00:17:04.868 "uuid": "e705aee3-03fd-492b-b052-476460672156", 00:17:04.868 "strip_size_kb": 64, 00:17:04.868 "state": "configuring", 00:17:04.868 "raid_level": "concat", 00:17:04.868 "superblock": true, 00:17:04.868 "num_base_bdevs": 4, 00:17:04.868 "num_base_bdevs_discovered": 1, 00:17:04.868 "num_base_bdevs_operational": 4, 00:17:04.868 "base_bdevs_list": [ 00:17:04.868 { 00:17:04.868 "name": "pt1", 00:17:04.868 "uuid": "00a9a60e-a987-5b62-9dcd-a1d1858fbdf7", 00:17:04.868 "is_configured": true, 00:17:04.868 "data_offset": 2048, 00:17:04.868 "data_size": 63488 00:17:04.868 }, 00:17:04.868 { 00:17:04.868 "name": null, 00:17:04.868 "uuid": "c1b63fea-e8b1-5e64-a3f6-74ee816fb0c0", 00:17:04.868 "is_configured": false, 00:17:04.868 "data_offset": 2048, 00:17:04.868 "data_size": 63488 00:17:04.868 }, 00:17:04.868 { 00:17:04.868 "name": null, 00:17:04.868 "uuid": "49801800-7eb4-554f-9d66-a1450f9199f9", 00:17:04.868 "is_configured": false, 00:17:04.868 "data_offset": 2048, 00:17:04.868 "data_size": 63488 00:17:04.868 }, 00:17:04.868 { 00:17:04.868 "name": null, 00:17:04.868 "uuid": "a0d39768-02c7-5f2d-8441-9347585a0e10", 00:17:04.868 "is_configured": false, 00:17:04.868 "data_offset": 2048, 00:17:04.868 "data_size": 63488 00:17:04.868 } 00:17:04.868 ] 00:17:04.868 }' 00:17:04.868 11:27:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.868 11:27:23 -- common/autotest_common.sh@10 -- # set +x 00:17:05.436 11:27:23 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:05.436 11:27:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:05.436 11:27:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.436 [2024-11-26 11:27:23.594379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.436 [2024-11-26 11:27:23.594495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.436 [2024-11-26 11:27:23.594524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:17:05.436 [2024-11-26 11:27:23.594542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.436 [2024-11-26 11:27:23.595006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.436 [2024-11-26 11:27:23.595051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.436 [2024-11-26 11:27:23.595126] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:17:05.436 [2024-11-26 11:27:23.595157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.436 pt2 00:17:05.436 11:27:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:05.436 11:27:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:05.436 11:27:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.695 [2024-11-26 11:27:23.858499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:05.695 [2024-11-26 11:27:23.858852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.695 [2024-11-26 11:27:23.858950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:17:05.695 [2024-11-26 11:27:23.859243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.695 [2024-11-26 11:27:23.859720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.695 [2024-11-26 11:27:23.859885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.695 [2024-11-26 11:27:23.860076] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:05.695 [2024-11-26 11:27:23.860217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.695 pt3 00:17:05.695 11:27:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:05.695 11:27:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:05.695 11:27:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:05.954 [2024-11-26 11:27:24.066508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:05.954 [2024-11-26 11:27:24.066747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.954 [2024-11-26 11:27:24.066787] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:05.954 [2024-11-26 11:27:24.066808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.954 [2024-11-26 11:27:24.067262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.954 [2024-11-26 11:27:24.067288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:05.954 [2024-11-26 11:27:24.067354] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:05.954 [2024-11-26 11:27:24.067383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:05.954 [2024-11-26 11:27:24.067515] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:17:05.954 [2024-11-26 11:27:24.067531] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:05.954 [2024-11-26 11:27:24.067601] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:05.954 [2024-11-26 11:27:24.068001] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:17:05.954 [2024-11-26 11:27:24.068018] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:17:05.954 [2024-11-26 11:27:24.068152] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:05.954 pt4 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.954 11:27:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.214 11:27:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.214 "name": "raid_bdev1", 00:17:06.214 "uuid": "e705aee3-03fd-492b-b052-476460672156", 00:17:06.214 "strip_size_kb": 64, 00:17:06.214 "state": "online", 00:17:06.214 "raid_level": "concat", 00:17:06.214 "superblock": true, 00:17:06.214 "num_base_bdevs": 4, 00:17:06.214 "num_base_bdevs_discovered": 4, 00:17:06.214 "num_base_bdevs_operational": 4, 00:17:06.214 "base_bdevs_list": [ 00:17:06.214 { 00:17:06.214 "name": "pt1", 00:17:06.214 "uuid": "00a9a60e-a987-5b62-9dcd-a1d1858fbdf7", 00:17:06.214 "is_configured": true, 00:17:06.214 "data_offset": 2048, 00:17:06.214 "data_size": 63488 00:17:06.214 }, 00:17:06.214 { 00:17:06.214 "name": "pt2", 00:17:06.214 "uuid": "c1b63fea-e8b1-5e64-a3f6-74ee816fb0c0", 00:17:06.214 "is_configured": true, 00:17:06.214 "data_offset": 2048, 00:17:06.214 "data_size": 63488 00:17:06.214 }, 00:17:06.214 { 00:17:06.214 "name": "pt3", 00:17:06.214 "uuid": "49801800-7eb4-554f-9d66-a1450f9199f9", 00:17:06.214 "is_configured": true, 00:17:06.214 "data_offset": 2048, 00:17:06.214 "data_size": 63488 00:17:06.214 }, 00:17:06.214 { 00:17:06.214 "name": "pt4", 00:17:06.214 "uuid": "a0d39768-02c7-5f2d-8441-9347585a0e10", 00:17:06.214 "is_configured": true, 00:17:06.214 "data_offset": 2048, 00:17:06.214 "data_size": 63488 00:17:06.214 } 00:17:06.214 ] 00:17:06.214 }' 00:17:06.214 11:27:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.214 11:27:24 -- common/autotest_common.sh@10 -- # set +x 00:17:06.473 11:27:24 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:06.473 11:27:24 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:06.733 [2024-11-26 11:27:24.798948] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.733 11:27:24 -- bdev/bdev_raid.sh@430 -- # '[' e705aee3-03fd-492b-b052-476460672156 '!=' e705aee3-03fd-492b-b052-476460672156 ']' 00:17:06.733 11:27:24 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:06.733 11:27:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:06.733 11:27:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:06.733 11:27:24 -- bdev/bdev_raid.sh@511 -- # killprocess 86420 00:17:06.733 11:27:24 -- common/autotest_common.sh@936 -- # '[' 
-z 86420 ']' 00:17:06.733 11:27:24 -- common/autotest_common.sh@940 -- # kill -0 86420 00:17:06.733 11:27:24 -- common/autotest_common.sh@941 -- # uname 00:17:06.733 11:27:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.733 11:27:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86420 00:17:06.733 killing process with pid 86420 00:17:06.733 11:27:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:06.733 11:27:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:06.733 11:27:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86420' 00:17:06.733 11:27:24 -- common/autotest_common.sh@955 -- # kill 86420 00:17:06.733 [2024-11-26 11:27:24.851392] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:06.733 11:27:24 -- common/autotest_common.sh@960 -- # wait 86420 00:17:06.733 [2024-11-26 11:27:24.851476] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.733 [2024-11-26 11:27:24.851550] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.733 [2024-11-26 11:27:24.851563] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:17:06.733 [2024-11-26 11:27:24.881572] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:06.993 00:17:06.993 real 0m9.188s 00:17:06.993 user 0m16.092s 00:17:06.993 sys 0m1.341s 00:17:06.993 11:27:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.993 ************************************ 00:17:06.993 END TEST raid_superblock_test 00:17:06.993 ************************************ 00:17:06.993 11:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:06.993 11:27:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:06.993 11:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.993 11:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.993 ************************************ 00:17:06.993 START TEST raid_state_function_test 00:17:06.993 ************************************ 00:17:06.993 11:27:25 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # 
(( i <= num_base_bdevs )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=86700 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 86700' 00:17:06.993 Process raid pid: 86700 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 86700 /var/tmp/spdk-raid.sock 00:17:06.993 11:27:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:06.993 11:27:25 -- common/autotest_common.sh@829 -- # '[' -z 86700 ']' 00:17:06.993 11:27:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.993 11:27:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:06.993 11:27:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.993 11:27:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.993 11:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.993 [2024-11-26 11:27:25.167934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
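raid_state_function_test exercises the same RPC surface, but the base bdevs do not exist when the raid is created, so the array parks in the configuring state until they appear. A sketch of the setup, assuming the bdev_svc path from this run and the $rpc shorthand from earlier:

    # bare bdev service listening on the raid socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # accepted even though BaseBdev1..4 don't exist yet; the raid
    # stays "configuring" with num_base_bdevs_discovered: 0
    $rpc bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid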
00:17:06.993 [2024-11-26 11:27:25.168094] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.252 [2024-11-26 11:27:25.319888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.252 [2024-11-26 11:27:25.353670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.252 [2024-11-26 11:27:25.385126] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.188 11:27:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.188 11:27:26 -- common/autotest_common.sh@862 -- # return 0 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:08.188 [2024-11-26 11:27:26.320685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.188 [2024-11-26 11:27:26.320756] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.188 [2024-11-26 11:27:26.320772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.188 [2024-11-26 11:27:26.320783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.188 [2024-11-26 11:27:26.320793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.188 [2024-11-26 11:27:26.320803] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.188 [2024-11-26 11:27:26.320816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:08.188 [2024-11-26 11:27:26.320825] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.188 11:27:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.447 11:27:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.447 "name": "Existed_Raid", 00:17:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.447 "strip_size_kb": 0, 00:17:08.447 "state": "configuring", 00:17:08.447 "raid_level": "raid1", 00:17:08.447 "superblock": false, 00:17:08.447 "num_base_bdevs": 4, 00:17:08.447 "num_base_bdevs_discovered": 0, 00:17:08.447 "num_base_bdevs_operational": 4, 00:17:08.447 "base_bdevs_list": [ 00:17:08.447 { 00:17:08.447 "name": 
"BaseBdev1", 00:17:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.447 "is_configured": false, 00:17:08.447 "data_offset": 0, 00:17:08.447 "data_size": 0 00:17:08.447 }, 00:17:08.447 { 00:17:08.447 "name": "BaseBdev2", 00:17:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.447 "is_configured": false, 00:17:08.447 "data_offset": 0, 00:17:08.447 "data_size": 0 00:17:08.447 }, 00:17:08.447 { 00:17:08.447 "name": "BaseBdev3", 00:17:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.447 "is_configured": false, 00:17:08.447 "data_offset": 0, 00:17:08.448 "data_size": 0 00:17:08.448 }, 00:17:08.448 { 00:17:08.448 "name": "BaseBdev4", 00:17:08.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.448 "is_configured": false, 00:17:08.448 "data_offset": 0, 00:17:08.448 "data_size": 0 00:17:08.448 } 00:17:08.448 ] 00:17:08.448 }' 00:17:08.448 11:27:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.448 11:27:26 -- common/autotest_common.sh@10 -- # set +x 00:17:08.706 11:27:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.965 [2024-11-26 11:27:27.020817] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.965 [2024-11-26 11:27:27.020903] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:08.965 11:27:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:09.224 [2024-11-26 11:27:27.276915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.224 [2024-11-26 11:27:27.277002] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.224 [2024-11-26 11:27:27.277019] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.224 [2024-11-26 11:27:27.277030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.224 [2024-11-26 11:27:27.277040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.224 [2024-11-26 11:27:27.277050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.224 [2024-11-26 11:27:27.277064] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:09.224 [2024-11-26 11:27:27.277074] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:09.224 11:27:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:09.483 [2024-11-26 11:27:27.536537] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.483 BaseBdev1 00:17:09.483 11:27:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:09.483 11:27:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:09.483 11:27:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:09.483 11:27:27 -- common/autotest_common.sh@899 -- # local i 00:17:09.483 11:27:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:09.483 11:27:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:09.483 11:27:27 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.742 11:27:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.742 [ 00:17:09.742 { 00:17:09.742 "name": "BaseBdev1", 00:17:09.742 "aliases": [ 00:17:09.742 "70cdcd37-b12d-4429-b54e-ed2455ef265c" 00:17:09.742 ], 00:17:09.742 "product_name": "Malloc disk", 00:17:09.742 "block_size": 512, 00:17:09.742 "num_blocks": 65536, 00:17:09.742 "uuid": "70cdcd37-b12d-4429-b54e-ed2455ef265c", 00:17:09.742 "assigned_rate_limits": { 00:17:09.742 "rw_ios_per_sec": 0, 00:17:09.742 "rw_mbytes_per_sec": 0, 00:17:09.742 "r_mbytes_per_sec": 0, 00:17:09.742 "w_mbytes_per_sec": 0 00:17:09.742 }, 00:17:09.742 "claimed": true, 00:17:09.742 "claim_type": "exclusive_write", 00:17:09.742 "zoned": false, 00:17:09.742 "supported_io_types": { 00:17:09.742 "read": true, 00:17:09.742 "write": true, 00:17:09.742 "unmap": true, 00:17:09.742 "write_zeroes": true, 00:17:09.742 "flush": true, 00:17:09.742 "reset": true, 00:17:09.742 "compare": false, 00:17:09.742 "compare_and_write": false, 00:17:09.742 "abort": true, 00:17:09.742 "nvme_admin": false, 00:17:09.742 "nvme_io": false 00:17:09.742 }, 00:17:09.742 "memory_domains": [ 00:17:09.742 { 00:17:09.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.742 "dma_device_type": 2 00:17:09.742 } 00:17:09.742 ], 00:17:09.742 "driver_specific": {} 00:17:09.742 } 00:17:09.742 ] 00:17:10.001 11:27:27 -- common/autotest_common.sh@905 -- # return 0 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.001 11:27:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.001 11:27:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.001 "name": "Existed_Raid", 00:17:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.001 "strip_size_kb": 0, 00:17:10.001 "state": "configuring", 00:17:10.001 "raid_level": "raid1", 00:17:10.001 "superblock": false, 00:17:10.001 "num_base_bdevs": 4, 00:17:10.001 "num_base_bdevs_discovered": 1, 00:17:10.001 "num_base_bdevs_operational": 4, 00:17:10.001 "base_bdevs_list": [ 00:17:10.001 { 00:17:10.001 "name": "BaseBdev1", 00:17:10.001 "uuid": "70cdcd37-b12d-4429-b54e-ed2455ef265c", 00:17:10.001 "is_configured": true, 00:17:10.001 "data_offset": 0, 00:17:10.001 "data_size": 65536 00:17:10.001 }, 00:17:10.001 { 00:17:10.001 "name": "BaseBdev2", 00:17:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.001 "is_configured": false, 00:17:10.001 "data_offset": 0, 00:17:10.001 "data_size": 0 00:17:10.001 }, 
00:17:10.001 { 00:17:10.001 "name": "BaseBdev3", 00:17:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.001 "is_configured": false, 00:17:10.001 "data_offset": 0, 00:17:10.001 "data_size": 0 00:17:10.001 }, 00:17:10.001 { 00:17:10.001 "name": "BaseBdev4", 00:17:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.002 "is_configured": false, 00:17:10.002 "data_offset": 0, 00:17:10.002 "data_size": 0 00:17:10.002 } 00:17:10.002 ] 00:17:10.002 }' 00:17:10.002 11:27:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.002 11:27:28 -- common/autotest_common.sh@10 -- # set +x 00:17:10.569 11:27:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:10.569 [2024-11-26 11:27:28.769014] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.569 [2024-11-26 11:27:28.769081] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:10.569 11:27:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:10.569 11:27:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:10.828 [2024-11-26 11:27:29.033192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.828 [2024-11-26 11:27:29.035496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:10.828 [2024-11-26 11:27:29.035541] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.828 [2024-11-26 11:27:29.035575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:10.828 [2024-11-26 11:27:29.035587] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:10.828 [2024-11-26 11:27:29.035596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:10.828 [2024-11-26 11:27:29.035606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.828 11:27:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.089 11:27:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.089 "name": "Existed_Raid", 00:17:11.089 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:11.089 "strip_size_kb": 0, 00:17:11.089 "state": "configuring", 00:17:11.089 "raid_level": "raid1", 00:17:11.089 "superblock": false, 00:17:11.089 "num_base_bdevs": 4, 00:17:11.089 "num_base_bdevs_discovered": 1, 00:17:11.089 "num_base_bdevs_operational": 4, 00:17:11.089 "base_bdevs_list": [ 00:17:11.089 { 00:17:11.089 "name": "BaseBdev1", 00:17:11.089 "uuid": "70cdcd37-b12d-4429-b54e-ed2455ef265c", 00:17:11.089 "is_configured": true, 00:17:11.089 "data_offset": 0, 00:17:11.089 "data_size": 65536 00:17:11.089 }, 00:17:11.089 { 00:17:11.089 "name": "BaseBdev2", 00:17:11.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.089 "is_configured": false, 00:17:11.089 "data_offset": 0, 00:17:11.089 "data_size": 0 00:17:11.089 }, 00:17:11.089 { 00:17:11.089 "name": "BaseBdev3", 00:17:11.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.089 "is_configured": false, 00:17:11.089 "data_offset": 0, 00:17:11.089 "data_size": 0 00:17:11.089 }, 00:17:11.089 { 00:17:11.089 "name": "BaseBdev4", 00:17:11.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.089 "is_configured": false, 00:17:11.089 "data_offset": 0, 00:17:11.089 "data_size": 0 00:17:11.089 } 00:17:11.089 ] 00:17:11.089 }' 00:17:11.089 11:27:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.089 11:27:29 -- common/autotest_common.sh@10 -- # set +x 00:17:11.347 11:27:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:11.606 [2024-11-26 11:27:29.824364] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.606 BaseBdev2 00:17:11.864 11:27:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:11.865 11:27:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:11.865 11:27:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.865 11:27:29 -- common/autotest_common.sh@899 -- # local i 00:17:11.865 11:27:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.865 11:27:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.865 11:27:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.123 11:27:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:12.123 [ 00:17:12.123 { 00:17:12.123 "name": "BaseBdev2", 00:17:12.123 "aliases": [ 00:17:12.123 "716460ea-e1d8-4674-8488-514ca57b572e" 00:17:12.123 ], 00:17:12.123 "product_name": "Malloc disk", 00:17:12.123 "block_size": 512, 00:17:12.123 "num_blocks": 65536, 00:17:12.123 "uuid": "716460ea-e1d8-4674-8488-514ca57b572e", 00:17:12.123 "assigned_rate_limits": { 00:17:12.123 "rw_ios_per_sec": 0, 00:17:12.123 "rw_mbytes_per_sec": 0, 00:17:12.123 "r_mbytes_per_sec": 0, 00:17:12.123 "w_mbytes_per_sec": 0 00:17:12.123 }, 00:17:12.123 "claimed": true, 00:17:12.123 "claim_type": "exclusive_write", 00:17:12.123 "zoned": false, 00:17:12.123 "supported_io_types": { 00:17:12.123 "read": true, 00:17:12.123 "write": true, 00:17:12.123 "unmap": true, 00:17:12.123 "write_zeroes": true, 00:17:12.123 "flush": true, 00:17:12.123 "reset": true, 00:17:12.123 "compare": false, 00:17:12.123 "compare_and_write": false, 00:17:12.123 "abort": true, 00:17:12.123 "nvme_admin": false, 00:17:12.123 "nvme_io": false 00:17:12.123 }, 00:17:12.123 "memory_domains": [ 00:17:12.123 { 
00:17:12.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.123 "dma_device_type": 2 00:17:12.123 } 00:17:12.123 ], 00:17:12.123 "driver_specific": {} 00:17:12.123 } 00:17:12.123 ] 00:17:12.123 11:27:30 -- common/autotest_common.sh@905 -- # return 0 00:17:12.123 11:27:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:12.123 11:27:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:12.123 11:27:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.124 11:27:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.383 11:27:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.383 "name": "Existed_Raid", 00:17:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.383 "strip_size_kb": 0, 00:17:12.383 "state": "configuring", 00:17:12.383 "raid_level": "raid1", 00:17:12.383 "superblock": false, 00:17:12.383 "num_base_bdevs": 4, 00:17:12.383 "num_base_bdevs_discovered": 2, 00:17:12.383 "num_base_bdevs_operational": 4, 00:17:12.383 "base_bdevs_list": [ 00:17:12.383 { 00:17:12.383 "name": "BaseBdev1", 00:17:12.383 "uuid": "70cdcd37-b12d-4429-b54e-ed2455ef265c", 00:17:12.383 "is_configured": true, 00:17:12.383 "data_offset": 0, 00:17:12.383 "data_size": 65536 00:17:12.383 }, 00:17:12.383 { 00:17:12.383 "name": "BaseBdev2", 00:17:12.383 "uuid": "716460ea-e1d8-4674-8488-514ca57b572e", 00:17:12.383 "is_configured": true, 00:17:12.383 "data_offset": 0, 00:17:12.383 "data_size": 65536 00:17:12.383 }, 00:17:12.383 { 00:17:12.383 "name": "BaseBdev3", 00:17:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.383 "is_configured": false, 00:17:12.383 "data_offset": 0, 00:17:12.383 "data_size": 0 00:17:12.383 }, 00:17:12.383 { 00:17:12.383 "name": "BaseBdev4", 00:17:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.383 "is_configured": false, 00:17:12.383 "data_offset": 0, 00:17:12.383 "data_size": 0 00:17:12.383 } 00:17:12.383 ] 00:17:12.383 }' 00:17:12.383 11:27:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.383 11:27:30 -- common/autotest_common.sh@10 -- # set +x 00:17:12.951 11:27:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:12.951 [2024-11-26 11:27:31.081326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.951 BaseBdev3 00:17:12.951 11:27:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:12.951 11:27:31 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:12.951 11:27:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:12.951 11:27:31 -- 
common/autotest_common.sh@899 -- # local i 00:17:12.951 11:27:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:12.951 11:27:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:12.951 11:27:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.210 11:27:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:13.469 [ 00:17:13.469 { 00:17:13.469 "name": "BaseBdev3", 00:17:13.469 "aliases": [ 00:17:13.469 "b62c2db6-0dfe-4b53-bb3a-d20aaf3c4df1" 00:17:13.469 ], 00:17:13.469 "product_name": "Malloc disk", 00:17:13.469 "block_size": 512, 00:17:13.469 "num_blocks": 65536, 00:17:13.469 "uuid": "b62c2db6-0dfe-4b53-bb3a-d20aaf3c4df1", 00:17:13.469 "assigned_rate_limits": { 00:17:13.469 "rw_ios_per_sec": 0, 00:17:13.469 "rw_mbytes_per_sec": 0, 00:17:13.469 "r_mbytes_per_sec": 0, 00:17:13.469 "w_mbytes_per_sec": 0 00:17:13.469 }, 00:17:13.469 "claimed": true, 00:17:13.469 "claim_type": "exclusive_write", 00:17:13.469 "zoned": false, 00:17:13.469 "supported_io_types": { 00:17:13.469 "read": true, 00:17:13.469 "write": true, 00:17:13.469 "unmap": true, 00:17:13.469 "write_zeroes": true, 00:17:13.469 "flush": true, 00:17:13.469 "reset": true, 00:17:13.469 "compare": false, 00:17:13.469 "compare_and_write": false, 00:17:13.469 "abort": true, 00:17:13.469 "nvme_admin": false, 00:17:13.469 "nvme_io": false 00:17:13.469 }, 00:17:13.469 "memory_domains": [ 00:17:13.469 { 00:17:13.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.469 "dma_device_type": 2 00:17:13.469 } 00:17:13.469 ], 00:17:13.469 "driver_specific": {} 00:17:13.469 } 00:17:13.469 ] 00:17:13.469 11:27:31 -- common/autotest_common.sh@905 -- # return 0 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.469 11:27:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.728 11:27:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.728 "name": "Existed_Raid", 00:17:13.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.728 "strip_size_kb": 0, 00:17:13.728 "state": "configuring", 00:17:13.728 "raid_level": "raid1", 00:17:13.728 "superblock": false, 00:17:13.728 "num_base_bdevs": 4, 00:17:13.728 "num_base_bdevs_discovered": 3, 00:17:13.728 "num_base_bdevs_operational": 4, 00:17:13.728 "base_bdevs_list": [ 00:17:13.728 { 00:17:13.728 "name": "BaseBdev1", 
00:17:13.728 "uuid": "70cdcd37-b12d-4429-b54e-ed2455ef265c", 00:17:13.728 "is_configured": true, 00:17:13.728 "data_offset": 0, 00:17:13.728 "data_size": 65536 00:17:13.728 }, 00:17:13.728 { 00:17:13.728 "name": "BaseBdev2", 00:17:13.728 "uuid": "716460ea-e1d8-4674-8488-514ca57b572e", 00:17:13.728 "is_configured": true, 00:17:13.728 "data_offset": 0, 00:17:13.728 "data_size": 65536 00:17:13.728 }, 00:17:13.728 { 00:17:13.728 "name": "BaseBdev3", 00:17:13.728 "uuid": "b62c2db6-0dfe-4b53-bb3a-d20aaf3c4df1", 00:17:13.728 "is_configured": true, 00:17:13.728 "data_offset": 0, 00:17:13.728 "data_size": 65536 00:17:13.728 }, 00:17:13.728 { 00:17:13.728 "name": "BaseBdev4", 00:17:13.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.728 "is_configured": false, 00:17:13.728 "data_offset": 0, 00:17:13.728 "data_size": 0 00:17:13.728 } 00:17:13.728 ] 00:17:13.728 }' 00:17:13.728 11:27:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.728 11:27:31 -- common/autotest_common.sh@10 -- # set +x 00:17:13.986 11:27:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:14.245 [2024-11-26 11:27:32.366441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:14.245 [2024-11-26 11:27:32.366743] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:17:14.245 [2024-11-26 11:27:32.366803] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:14.245 [2024-11-26 11:27:32.367097] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:14.245 [2024-11-26 11:27:32.367646] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:17:14.245 [2024-11-26 11:27:32.367815] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:17:14.245 [2024-11-26 11:27:32.368182] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.245 BaseBdev4 00:17:14.245 11:27:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:14.245 11:27:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:14.245 11:27:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:14.245 11:27:32 -- common/autotest_common.sh@899 -- # local i 00:17:14.245 11:27:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:14.245 11:27:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:14.245 11:27:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.504 11:27:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:14.764 [ 00:17:14.764 { 00:17:14.764 "name": "BaseBdev4", 00:17:14.764 "aliases": [ 00:17:14.764 "fb48dd98-4159-4c05-94c1-0611fc081250" 00:17:14.764 ], 00:17:14.764 "product_name": "Malloc disk", 00:17:14.764 "block_size": 512, 00:17:14.764 "num_blocks": 65536, 00:17:14.764 "uuid": "fb48dd98-4159-4c05-94c1-0611fc081250", 00:17:14.764 "assigned_rate_limits": { 00:17:14.764 "rw_ios_per_sec": 0, 00:17:14.764 "rw_mbytes_per_sec": 0, 00:17:14.764 "r_mbytes_per_sec": 0, 00:17:14.764 "w_mbytes_per_sec": 0 00:17:14.764 }, 00:17:14.764 "claimed": true, 00:17:14.764 "claim_type": "exclusive_write", 00:17:14.764 "zoned": false, 00:17:14.764 "supported_io_types": { 
00:17:14.764 "read": true, 00:17:14.764 "write": true, 00:17:14.764 "unmap": true, 00:17:14.764 "write_zeroes": true, 00:17:14.764 "flush": true, 00:17:14.764 "reset": true, 00:17:14.764 "compare": false, 00:17:14.764 "compare_and_write": false, 00:17:14.764 "abort": true, 00:17:14.764 "nvme_admin": false, 00:17:14.764 "nvme_io": false 00:17:14.764 }, 00:17:14.764 "memory_domains": [ 00:17:14.764 { 00:17:14.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.764 "dma_device_type": 2 00:17:14.764 } 00:17:14.764 ], 00:17:14.764 "driver_specific": {} 00:17:14.764 } 00:17:14.764 ] 00:17:14.764 11:27:32 -- common/autotest_common.sh@905 -- # return 0 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.764 11:27:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.764 11:27:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.764 "name": "Existed_Raid", 00:17:14.764 "uuid": "6fa06f99-15ce-46a3-af8f-97166340580a", 00:17:14.764 "strip_size_kb": 0, 00:17:14.764 "state": "online", 00:17:14.764 "raid_level": "raid1", 00:17:14.764 "superblock": false, 00:17:14.764 "num_base_bdevs": 4, 00:17:14.764 "num_base_bdevs_discovered": 4, 00:17:14.764 "num_base_bdevs_operational": 4, 00:17:14.764 "base_bdevs_list": [ 00:17:14.764 { 00:17:14.764 "name": "BaseBdev1", 00:17:14.764 "uuid": "70cdcd37-b12d-4429-b54e-ed2455ef265c", 00:17:14.764 "is_configured": true, 00:17:14.764 "data_offset": 0, 00:17:14.764 "data_size": 65536 00:17:14.764 }, 00:17:14.764 { 00:17:14.764 "name": "BaseBdev2", 00:17:14.764 "uuid": "716460ea-e1d8-4674-8488-514ca57b572e", 00:17:14.764 "is_configured": true, 00:17:14.764 "data_offset": 0, 00:17:14.764 "data_size": 65536 00:17:14.764 }, 00:17:14.764 { 00:17:14.764 "name": "BaseBdev3", 00:17:14.764 "uuid": "b62c2db6-0dfe-4b53-bb3a-d20aaf3c4df1", 00:17:14.764 "is_configured": true, 00:17:14.764 "data_offset": 0, 00:17:14.764 "data_size": 65536 00:17:14.764 }, 00:17:14.764 { 00:17:14.764 "name": "BaseBdev4", 00:17:14.764 "uuid": "fb48dd98-4159-4c05-94c1-0611fc081250", 00:17:14.764 "is_configured": true, 00:17:14.764 "data_offset": 0, 00:17:14.764 "data_size": 65536 00:17:14.764 } 00:17:14.764 ] 00:17:14.764 }' 00:17:14.764 11:27:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.764 11:27:33 -- common/autotest_common.sh@10 -- # set +x 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.336 [2024-11-26 11:27:33.510934] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.336 11:27:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.618 11:27:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.618 "name": "Existed_Raid", 00:17:15.618 "uuid": "6fa06f99-15ce-46a3-af8f-97166340580a", 00:17:15.618 "strip_size_kb": 0, 00:17:15.618 "state": "online", 00:17:15.618 "raid_level": "raid1", 00:17:15.618 "superblock": false, 00:17:15.618 "num_base_bdevs": 4, 00:17:15.618 "num_base_bdevs_discovered": 3, 00:17:15.618 "num_base_bdevs_operational": 3, 00:17:15.618 "base_bdevs_list": [ 00:17:15.618 { 00:17:15.618 "name": null, 00:17:15.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.618 "is_configured": false, 00:17:15.618 "data_offset": 0, 00:17:15.618 "data_size": 65536 00:17:15.618 }, 00:17:15.618 { 00:17:15.618 "name": "BaseBdev2", 00:17:15.618 "uuid": "716460ea-e1d8-4674-8488-514ca57b572e", 00:17:15.618 "is_configured": true, 00:17:15.618 "data_offset": 0, 00:17:15.618 "data_size": 65536 00:17:15.618 }, 00:17:15.618 { 00:17:15.618 "name": "BaseBdev3", 00:17:15.618 "uuid": "b62c2db6-0dfe-4b53-bb3a-d20aaf3c4df1", 00:17:15.618 "is_configured": true, 00:17:15.618 "data_offset": 0, 00:17:15.618 "data_size": 65536 00:17:15.618 }, 00:17:15.618 { 00:17:15.618 "name": "BaseBdev4", 00:17:15.618 "uuid": "fb48dd98-4159-4c05-94c1-0611fc081250", 00:17:15.618 "is_configured": true, 00:17:15.618 "data_offset": 0, 00:17:15.618 "data_size": 65536 00:17:15.618 } 00:17:15.618 ] 00:17:15.618 }' 00:17:15.618 11:27:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.618 11:27:33 -- common/autotest_common.sh@10 -- # set +x 00:17:15.890 11:27:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:15.890 11:27:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.890 11:27:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:15.890 11:27:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.147 11:27:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:16.147 11:27:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.147 11:27:34 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:16.406 [2024-11-26 11:27:34.478007] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.406 11:27:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.406 11:27:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.406 11:27:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.406 11:27:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:16.665 11:27:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:16.665 11:27:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.665 11:27:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:16.925 [2024-11-26 11:27:34.977571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:16.925 11:27:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.925 11:27:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.925 11:27:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.925 11:27:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:17.184 11:27:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:17.184 11:27:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.184 11:27:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:17.443 [2024-11-26 11:27:35.448838] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:17.443 [2024-11-26 11:27:35.448890] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.443 [2024-11-26 11:27:35.448967] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.443 [2024-11-26 11:27:35.455616] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.443 [2024-11-26 11:27:35.455650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:17:17.443 11:27:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:17.443 11:27:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:17.443 11:27:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.443 11:27:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.443 11:27:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:17.703 11:27:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:17.703 11:27:35 -- bdev/bdev_raid.sh@287 -- # killprocess 86700 00:17:17.703 11:27:35 -- common/autotest_common.sh@936 -- # '[' -z 86700 ']' 00:17:17.703 11:27:35 -- common/autotest_common.sh@940 -- # kill -0 86700 00:17:17.703 11:27:35 -- common/autotest_common.sh@941 -- # uname 00:17:17.703 11:27:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.703 11:27:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86700 00:17:17.703 killing process with pid 86700 00:17:17.703 11:27:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:17.703 11:27:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:17.703 11:27:35 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 86700' 00:17:17.703 11:27:35 -- common/autotest_common.sh@955 -- # kill 86700 00:17:17.703 [2024-11-26 11:27:35.715077] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.703 11:27:35 -- common/autotest_common.sh@960 -- # wait 86700 00:17:17.703 [2024-11-26 11:27:35.715151] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.703 11:27:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:17.703 00:17:17.703 real 0m10.780s 00:17:17.703 user 0m19.045s 00:17:17.703 sys 0m1.654s 00:17:17.703 11:27:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:17.703 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.703 ************************************ 00:17:17.703 END TEST raid_state_function_test 00:17:17.703 ************************************ 00:17:17.703 11:27:35 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:17.703 11:27:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:17.703 11:27:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.703 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.963 ************************************ 00:17:17.963 START TEST raid_state_function_test_sb 00:17:17.963 ************************************ 00:17:17.963 11:27:35 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:17.963 11:27:35 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=87087 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 87087' 00:17:17.963 Process raid pid: 87087 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:17.963 11:27:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 87087 /var/tmp/spdk-raid.sock 00:17:17.963 11:27:35 -- common/autotest_common.sh@829 -- # '[' -z 87087 ']' 00:17:17.963 11:27:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:17.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:17.963 11:27:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.963 11:27:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:17.963 11:27:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.963 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.963 [2024-11-26 11:27:36.001677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:17.963 [2024-11-26 11:27:36.001825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.963 [2024-11-26 11:27:36.156736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.963 [2024-11-26 11:27:36.190107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.222 [2024-11-26 11:27:36.222557] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.790 11:27:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.790 11:27:36 -- common/autotest_common.sh@862 -- # return 0 00:17:18.790 11:27:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:19.049 [2024-11-26 11:27:37.234242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.049 [2024-11-26 11:27:37.234362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.049 [2024-11-26 11:27:37.234386] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.049 [2024-11-26 11:27:37.234399] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.049 [2024-11-26 11:27:37.234409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.049 [2024-11-26 11:27:37.234419] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.049 [2024-11-26 11:27:37.234435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.049 [2024-11-26 11:27:37.234445] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
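[annotation] The verify_raid_bdev_state helper whose locals are being set in the trace here drives its check through the ordinary RPC interface. A minimal sketch of that check in bash, assuming the same socket and raid bdev name as this run, and using only the rpc.py call and jq filter that appear verbatim in the surrounding trace (the bare [[ ]] assertions are a simplified stand-in for the helper's real comparisons):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Pull every raid bdev and isolate Existed_Raid, exactly as the traced jq does:
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # Assert the fields the test compares; the expected values mirror the
  # 'Existed_Raid configuring raid1 0 4' arguments seen in the trace:
  [[ $(jq -r .state      <<<"$info") == configuring ]]
  [[ $(jq -r .raid_level <<<"$info") == raid1 ]]
  [[ $(jq -r .num_base_bdevs <<<"$info") -eq 4 ]]

With superblock creation (-s) requested and no base bdevs registered yet, the raid bdev stays in the "configuring" state until all four members are claimed, which is what the repeated state dumps below show as num_base_bdevs_discovered climbs from 0 to 4.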
00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.049 11:27:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.308 11:27:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.308 "name": "Existed_Raid", 00:17:19.308 "uuid": "b32a7ca7-9e2d-4f3b-8628-ea2b0f902731", 00:17:19.308 "strip_size_kb": 0, 00:17:19.308 "state": "configuring", 00:17:19.308 "raid_level": "raid1", 00:17:19.308 "superblock": true, 00:17:19.308 "num_base_bdevs": 4, 00:17:19.308 "num_base_bdevs_discovered": 0, 00:17:19.308 "num_base_bdevs_operational": 4, 00:17:19.308 "base_bdevs_list": [ 00:17:19.308 { 00:17:19.308 "name": "BaseBdev1", 00:17:19.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.308 "is_configured": false, 00:17:19.309 "data_offset": 0, 00:17:19.309 "data_size": 0 00:17:19.309 }, 00:17:19.309 { 00:17:19.309 "name": "BaseBdev2", 00:17:19.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.309 "is_configured": false, 00:17:19.309 "data_offset": 0, 00:17:19.309 "data_size": 0 00:17:19.309 }, 00:17:19.309 { 00:17:19.309 "name": "BaseBdev3", 00:17:19.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.309 "is_configured": false, 00:17:19.309 "data_offset": 0, 00:17:19.309 "data_size": 0 00:17:19.309 }, 00:17:19.309 { 00:17:19.309 "name": "BaseBdev4", 00:17:19.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.309 "is_configured": false, 00:17:19.309 "data_offset": 0, 00:17:19.309 "data_size": 0 00:17:19.309 } 00:17:19.309 ] 00:17:19.309 }' 00:17:19.309 11:27:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.309 11:27:37 -- common/autotest_common.sh@10 -- # set +x 00:17:19.567 11:27:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.826 [2024-11-26 11:27:37.982338] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.826 [2024-11-26 11:27:37.982632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:19.826 11:27:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.085 [2024-11-26 11:27:38.178467] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.085 [2024-11-26 11:27:38.178515] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.085 [2024-11-26 11:27:38.178548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.085 [2024-11-26 11:27:38.178559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.085 [2024-11-26 11:27:38.178570] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.085 [2024-11-26 11:27:38.178580] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.085 [2024-11-26 11:27:38.178594] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.085 [2024-11-26 11:27:38.178604] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.085 11:27:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.345 [2024-11-26 11:27:38.428723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.345 BaseBdev1 00:17:20.345 11:27:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:20.345 11:27:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:20.345 11:27:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:20.345 11:27:38 -- common/autotest_common.sh@899 -- # local i 00:17:20.345 11:27:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:20.345 11:27:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:20.345 11:27:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.604 11:27:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:20.604 [ 00:17:20.604 { 00:17:20.604 "name": "BaseBdev1", 00:17:20.604 "aliases": [ 00:17:20.604 "5d19a8c2-a06f-46b1-8555-968fcf2dc1d2" 00:17:20.604 ], 00:17:20.604 "product_name": "Malloc disk", 00:17:20.604 "block_size": 512, 00:17:20.604 "num_blocks": 65536, 00:17:20.604 "uuid": "5d19a8c2-a06f-46b1-8555-968fcf2dc1d2", 00:17:20.604 "assigned_rate_limits": { 00:17:20.604 "rw_ios_per_sec": 0, 00:17:20.604 "rw_mbytes_per_sec": 0, 00:17:20.604 "r_mbytes_per_sec": 0, 00:17:20.604 "w_mbytes_per_sec": 0 00:17:20.604 }, 00:17:20.604 "claimed": true, 00:17:20.604 "claim_type": "exclusive_write", 00:17:20.604 "zoned": false, 00:17:20.604 "supported_io_types": { 00:17:20.604 "read": true, 00:17:20.604 "write": true, 00:17:20.604 "unmap": true, 00:17:20.604 "write_zeroes": true, 00:17:20.604 "flush": true, 00:17:20.604 "reset": true, 00:17:20.604 "compare": false, 00:17:20.604 "compare_and_write": false, 00:17:20.604 "abort": true, 00:17:20.604 "nvme_admin": false, 00:17:20.604 "nvme_io": false 00:17:20.604 }, 00:17:20.604 "memory_domains": [ 00:17:20.604 { 00:17:20.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.604 "dma_device_type": 2 00:17:20.604 } 00:17:20.604 ], 00:17:20.604 "driver_specific": {} 00:17:20.604 } 00:17:20.604 ] 00:17:20.864 11:27:38 -- common/autotest_common.sh@905 -- # return 0 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.864 11:27:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.123 11:27:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.123 "name": "Existed_Raid", 00:17:21.123 "uuid": "708df245-4dbc-4081-9c87-d551814c0892", 00:17:21.123 "strip_size_kb": 0, 00:17:21.123 "state": "configuring", 00:17:21.123 "raid_level": "raid1", 00:17:21.123 "superblock": true, 00:17:21.123 "num_base_bdevs": 4, 00:17:21.123 "num_base_bdevs_discovered": 1, 00:17:21.123 "num_base_bdevs_operational": 4, 00:17:21.123 "base_bdevs_list": [ 00:17:21.123 { 00:17:21.123 "name": "BaseBdev1", 00:17:21.123 "uuid": "5d19a8c2-a06f-46b1-8555-968fcf2dc1d2", 00:17:21.123 "is_configured": true, 00:17:21.123 "data_offset": 2048, 00:17:21.123 "data_size": 63488 00:17:21.123 }, 00:17:21.123 { 00:17:21.123 "name": "BaseBdev2", 00:17:21.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.123 "is_configured": false, 00:17:21.123 "data_offset": 0, 00:17:21.123 "data_size": 0 00:17:21.123 }, 00:17:21.123 { 00:17:21.123 "name": "BaseBdev3", 00:17:21.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.123 "is_configured": false, 00:17:21.123 "data_offset": 0, 00:17:21.123 "data_size": 0 00:17:21.123 }, 00:17:21.123 { 00:17:21.123 "name": "BaseBdev4", 00:17:21.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.123 "is_configured": false, 00:17:21.123 "data_offset": 0, 00:17:21.123 "data_size": 0 00:17:21.123 } 00:17:21.123 ] 00:17:21.123 }' 00:17:21.123 11:27:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.123 11:27:39 -- common/autotest_common.sh@10 -- # set +x 00:17:21.382 11:27:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:21.382 [2024-11-26 11:27:39.613176] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.383 [2024-11-26 11:27:39.613237] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:21.642 11:27:39 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:21.642 11:27:39 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:21.901 11:27:39 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.901 BaseBdev1 00:17:21.901 11:27:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:21.901 11:27:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:21.901 11:27:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.901 11:27:40 -- common/autotest_common.sh@899 -- # local i 00:17:21.901 11:27:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.901 11:27:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:21.901 11:27:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.160 11:27:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.420 [ 00:17:22.420 { 00:17:22.420 "name": "BaseBdev1", 00:17:22.420 "aliases": [ 00:17:22.420 "755741e3-cf37-4c67-abc8-6dae08eb7650" 00:17:22.420 
], 00:17:22.420 "product_name": "Malloc disk", 00:17:22.420 "block_size": 512, 00:17:22.420 "num_blocks": 65536, 00:17:22.420 "uuid": "755741e3-cf37-4c67-abc8-6dae08eb7650", 00:17:22.420 "assigned_rate_limits": { 00:17:22.420 "rw_ios_per_sec": 0, 00:17:22.420 "rw_mbytes_per_sec": 0, 00:17:22.420 "r_mbytes_per_sec": 0, 00:17:22.420 "w_mbytes_per_sec": 0 00:17:22.420 }, 00:17:22.420 "claimed": false, 00:17:22.420 "zoned": false, 00:17:22.420 "supported_io_types": { 00:17:22.420 "read": true, 00:17:22.420 "write": true, 00:17:22.420 "unmap": true, 00:17:22.420 "write_zeroes": true, 00:17:22.420 "flush": true, 00:17:22.420 "reset": true, 00:17:22.420 "compare": false, 00:17:22.420 "compare_and_write": false, 00:17:22.420 "abort": true, 00:17:22.420 "nvme_admin": false, 00:17:22.420 "nvme_io": false 00:17:22.420 }, 00:17:22.420 "memory_domains": [ 00:17:22.420 { 00:17:22.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.420 "dma_device_type": 2 00:17:22.420 } 00:17:22.420 ], 00:17:22.420 "driver_specific": {} 00:17:22.420 } 00:17:22.420 ] 00:17:22.420 11:27:40 -- common/autotest_common.sh@905 -- # return 0 00:17:22.420 11:27:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:22.680 [2024-11-26 11:27:40.726545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.680 [2024-11-26 11:27:40.729064] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.680 [2024-11-26 11:27:40.729109] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.680 [2024-11-26 11:27:40.729143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.680 [2024-11-26 11:27:40.729155] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.680 [2024-11-26 11:27:40.729165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:22.680 [2024-11-26 11:27:40.729174] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.680 11:27:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.939 11:27:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.939 "name": "Existed_Raid", 
00:17:22.939 "uuid": "e6963ee2-43a4-4b3d-95f9-8f1efeb90f00", 00:17:22.939 "strip_size_kb": 0, 00:17:22.939 "state": "configuring", 00:17:22.939 "raid_level": "raid1", 00:17:22.939 "superblock": true, 00:17:22.939 "num_base_bdevs": 4, 00:17:22.939 "num_base_bdevs_discovered": 1, 00:17:22.939 "num_base_bdevs_operational": 4, 00:17:22.939 "base_bdevs_list": [ 00:17:22.939 { 00:17:22.939 "name": "BaseBdev1", 00:17:22.939 "uuid": "755741e3-cf37-4c67-abc8-6dae08eb7650", 00:17:22.939 "is_configured": true, 00:17:22.939 "data_offset": 2048, 00:17:22.939 "data_size": 63488 00:17:22.939 }, 00:17:22.939 { 00:17:22.939 "name": "BaseBdev2", 00:17:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.939 "is_configured": false, 00:17:22.939 "data_offset": 0, 00:17:22.939 "data_size": 0 00:17:22.939 }, 00:17:22.939 { 00:17:22.939 "name": "BaseBdev3", 00:17:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.939 "is_configured": false, 00:17:22.939 "data_offset": 0, 00:17:22.939 "data_size": 0 00:17:22.939 }, 00:17:22.939 { 00:17:22.939 "name": "BaseBdev4", 00:17:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.939 "is_configured": false, 00:17:22.939 "data_offset": 0, 00:17:22.939 "data_size": 0 00:17:22.939 } 00:17:22.939 ] 00:17:22.939 }' 00:17:22.939 11:27:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.939 11:27:40 -- common/autotest_common.sh@10 -- # set +x 00:17:23.198 11:27:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.198 [2024-11-26 11:27:41.438060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.457 BaseBdev2 00:17:23.457 11:27:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:23.457 11:27:41 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:23.457 11:27:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:23.457 11:27:41 -- common/autotest_common.sh@899 -- # local i 00:17:23.457 11:27:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:23.457 11:27:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:23.457 11:27:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.457 11:27:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.716 [ 00:17:23.716 { 00:17:23.716 "name": "BaseBdev2", 00:17:23.716 "aliases": [ 00:17:23.716 "0e548a84-b5b1-476b-9816-e375d69eba69" 00:17:23.716 ], 00:17:23.716 "product_name": "Malloc disk", 00:17:23.716 "block_size": 512, 00:17:23.716 "num_blocks": 65536, 00:17:23.716 "uuid": "0e548a84-b5b1-476b-9816-e375d69eba69", 00:17:23.716 "assigned_rate_limits": { 00:17:23.716 "rw_ios_per_sec": 0, 00:17:23.716 "rw_mbytes_per_sec": 0, 00:17:23.716 "r_mbytes_per_sec": 0, 00:17:23.716 "w_mbytes_per_sec": 0 00:17:23.716 }, 00:17:23.716 "claimed": true, 00:17:23.716 "claim_type": "exclusive_write", 00:17:23.716 "zoned": false, 00:17:23.716 "supported_io_types": { 00:17:23.716 "read": true, 00:17:23.716 "write": true, 00:17:23.716 "unmap": true, 00:17:23.716 "write_zeroes": true, 00:17:23.716 "flush": true, 00:17:23.716 "reset": true, 00:17:23.716 "compare": false, 00:17:23.716 "compare_and_write": false, 00:17:23.716 "abort": true, 00:17:23.716 "nvme_admin": false, 00:17:23.716 "nvme_io": false 00:17:23.716 }, 00:17:23.716 
"memory_domains": [ 00:17:23.716 { 00:17:23.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.716 "dma_device_type": 2 00:17:23.716 } 00:17:23.716 ], 00:17:23.716 "driver_specific": {} 00:17:23.716 } 00:17:23.716 ] 00:17:23.716 11:27:41 -- common/autotest_common.sh@905 -- # return 0 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.716 11:27:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.717 11:27:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.976 11:27:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.976 "name": "Existed_Raid", 00:17:23.976 "uuid": "e6963ee2-43a4-4b3d-95f9-8f1efeb90f00", 00:17:23.976 "strip_size_kb": 0, 00:17:23.976 "state": "configuring", 00:17:23.976 "raid_level": "raid1", 00:17:23.976 "superblock": true, 00:17:23.976 "num_base_bdevs": 4, 00:17:23.976 "num_base_bdevs_discovered": 2, 00:17:23.976 "num_base_bdevs_operational": 4, 00:17:23.976 "base_bdevs_list": [ 00:17:23.976 { 00:17:23.976 "name": "BaseBdev1", 00:17:23.976 "uuid": "755741e3-cf37-4c67-abc8-6dae08eb7650", 00:17:23.976 "is_configured": true, 00:17:23.976 "data_offset": 2048, 00:17:23.976 "data_size": 63488 00:17:23.976 }, 00:17:23.976 { 00:17:23.976 "name": "BaseBdev2", 00:17:23.976 "uuid": "0e548a84-b5b1-476b-9816-e375d69eba69", 00:17:23.976 "is_configured": true, 00:17:23.976 "data_offset": 2048, 00:17:23.976 "data_size": 63488 00:17:23.976 }, 00:17:23.976 { 00:17:23.976 "name": "BaseBdev3", 00:17:23.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.976 "is_configured": false, 00:17:23.976 "data_offset": 0, 00:17:23.976 "data_size": 0 00:17:23.976 }, 00:17:23.976 { 00:17:23.976 "name": "BaseBdev4", 00:17:23.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.976 "is_configured": false, 00:17:23.976 "data_offset": 0, 00:17:23.976 "data_size": 0 00:17:23.976 } 00:17:23.976 ] 00:17:23.976 }' 00:17:23.976 11:27:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.976 11:27:42 -- common/autotest_common.sh@10 -- # set +x 00:17:24.235 11:27:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.495 [2024-11-26 11:27:42.594998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.495 BaseBdev3 00:17:24.495 11:27:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:24.495 11:27:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:24.495 11:27:42 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:17:24.495 11:27:42 -- common/autotest_common.sh@899 -- # local i 00:17:24.495 11:27:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.495 11:27:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.495 11:27:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.754 11:27:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.013 [ 00:17:25.013 { 00:17:25.013 "name": "BaseBdev3", 00:17:25.013 "aliases": [ 00:17:25.013 "b560dca8-6c46-4560-9d3e-4f13e86e073d" 00:17:25.013 ], 00:17:25.013 "product_name": "Malloc disk", 00:17:25.013 "block_size": 512, 00:17:25.013 "num_blocks": 65536, 00:17:25.013 "uuid": "b560dca8-6c46-4560-9d3e-4f13e86e073d", 00:17:25.013 "assigned_rate_limits": { 00:17:25.013 "rw_ios_per_sec": 0, 00:17:25.013 "rw_mbytes_per_sec": 0, 00:17:25.013 "r_mbytes_per_sec": 0, 00:17:25.013 "w_mbytes_per_sec": 0 00:17:25.013 }, 00:17:25.013 "claimed": true, 00:17:25.013 "claim_type": "exclusive_write", 00:17:25.013 "zoned": false, 00:17:25.013 "supported_io_types": { 00:17:25.013 "read": true, 00:17:25.013 "write": true, 00:17:25.013 "unmap": true, 00:17:25.013 "write_zeroes": true, 00:17:25.013 "flush": true, 00:17:25.013 "reset": true, 00:17:25.013 "compare": false, 00:17:25.013 "compare_and_write": false, 00:17:25.013 "abort": true, 00:17:25.013 "nvme_admin": false, 00:17:25.013 "nvme_io": false 00:17:25.013 }, 00:17:25.013 "memory_domains": [ 00:17:25.013 { 00:17:25.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.013 "dma_device_type": 2 00:17:25.013 } 00:17:25.013 ], 00:17:25.013 "driver_specific": {} 00:17:25.013 } 00:17:25.013 ] 00:17:25.013 11:27:43 -- common/autotest_common.sh@905 -- # return 0 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.013 11:27:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.014 11:27:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.014 11:27:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.014 11:27:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.273 11:27:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.273 "name": "Existed_Raid", 00:17:25.273 "uuid": "e6963ee2-43a4-4b3d-95f9-8f1efeb90f00", 00:17:25.273 "strip_size_kb": 0, 00:17:25.273 "state": "configuring", 00:17:25.273 "raid_level": "raid1", 00:17:25.273 "superblock": true, 00:17:25.273 "num_base_bdevs": 4, 00:17:25.273 "num_base_bdevs_discovered": 3, 00:17:25.273 "num_base_bdevs_operational": 4, 00:17:25.273 "base_bdevs_list": [ 00:17:25.273 { 
00:17:25.273 "name": "BaseBdev1", 00:17:25.273 "uuid": "755741e3-cf37-4c67-abc8-6dae08eb7650", 00:17:25.273 "is_configured": true, 00:17:25.273 "data_offset": 2048, 00:17:25.273 "data_size": 63488 00:17:25.273 }, 00:17:25.273 { 00:17:25.273 "name": "BaseBdev2", 00:17:25.273 "uuid": "0e548a84-b5b1-476b-9816-e375d69eba69", 00:17:25.273 "is_configured": true, 00:17:25.273 "data_offset": 2048, 00:17:25.273 "data_size": 63488 00:17:25.273 }, 00:17:25.273 { 00:17:25.273 "name": "BaseBdev3", 00:17:25.273 "uuid": "b560dca8-6c46-4560-9d3e-4f13e86e073d", 00:17:25.273 "is_configured": true, 00:17:25.273 "data_offset": 2048, 00:17:25.273 "data_size": 63488 00:17:25.273 }, 00:17:25.273 { 00:17:25.273 "name": "BaseBdev4", 00:17:25.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.273 "is_configured": false, 00:17:25.273 "data_offset": 0, 00:17:25.273 "data_size": 0 00:17:25.273 } 00:17:25.273 ] 00:17:25.273 }' 00:17:25.273 11:27:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.273 11:27:43 -- common/autotest_common.sh@10 -- # set +x 00:17:25.532 11:27:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:25.792 [2024-11-26 11:27:43.784548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.792 [2024-11-26 11:27:43.784799] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:17:25.792 [2024-11-26 11:27:43.784823] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:25.792 [2024-11-26 11:27:43.784942] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:25.792 [2024-11-26 11:27:43.785336] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:17:25.792 [2024-11-26 11:27:43.785364] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:17:25.792 [2024-11-26 11:27:43.785526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.792 BaseBdev4 00:17:25.792 11:27:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:25.792 11:27:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:25.792 11:27:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.792 11:27:43 -- common/autotest_common.sh@899 -- # local i 00:17:25.792 11:27:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:25.792 11:27:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.792 11:27:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.792 11:27:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:26.051 [ 00:17:26.051 { 00:17:26.051 "name": "BaseBdev4", 00:17:26.051 "aliases": [ 00:17:26.051 "8e3eba95-6780-42fa-8193-7a179a0a3fb1" 00:17:26.051 ], 00:17:26.051 "product_name": "Malloc disk", 00:17:26.051 "block_size": 512, 00:17:26.051 "num_blocks": 65536, 00:17:26.051 "uuid": "8e3eba95-6780-42fa-8193-7a179a0a3fb1", 00:17:26.051 "assigned_rate_limits": { 00:17:26.051 "rw_ios_per_sec": 0, 00:17:26.051 "rw_mbytes_per_sec": 0, 00:17:26.051 "r_mbytes_per_sec": 0, 00:17:26.051 "w_mbytes_per_sec": 0 00:17:26.051 }, 00:17:26.051 "claimed": true, 00:17:26.051 "claim_type": "exclusive_write", 00:17:26.051 "zoned": false, 
00:17:26.051 "supported_io_types": { 00:17:26.051 "read": true, 00:17:26.051 "write": true, 00:17:26.051 "unmap": true, 00:17:26.051 "write_zeroes": true, 00:17:26.051 "flush": true, 00:17:26.051 "reset": true, 00:17:26.051 "compare": false, 00:17:26.051 "compare_and_write": false, 00:17:26.051 "abort": true, 00:17:26.051 "nvme_admin": false, 00:17:26.051 "nvme_io": false 00:17:26.051 }, 00:17:26.051 "memory_domains": [ 00:17:26.051 { 00:17:26.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.051 "dma_device_type": 2 00:17:26.051 } 00:17:26.051 ], 00:17:26.051 "driver_specific": {} 00:17:26.051 } 00:17:26.051 ] 00:17:26.051 11:27:44 -- common/autotest_common.sh@905 -- # return 0 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.051 11:27:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.310 11:27:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.310 "name": "Existed_Raid", 00:17:26.310 "uuid": "e6963ee2-43a4-4b3d-95f9-8f1efeb90f00", 00:17:26.310 "strip_size_kb": 0, 00:17:26.310 "state": "online", 00:17:26.310 "raid_level": "raid1", 00:17:26.310 "superblock": true, 00:17:26.310 "num_base_bdevs": 4, 00:17:26.310 "num_base_bdevs_discovered": 4, 00:17:26.310 "num_base_bdevs_operational": 4, 00:17:26.310 "base_bdevs_list": [ 00:17:26.310 { 00:17:26.310 "name": "BaseBdev1", 00:17:26.310 "uuid": "755741e3-cf37-4c67-abc8-6dae08eb7650", 00:17:26.310 "is_configured": true, 00:17:26.310 "data_offset": 2048, 00:17:26.310 "data_size": 63488 00:17:26.310 }, 00:17:26.310 { 00:17:26.310 "name": "BaseBdev2", 00:17:26.310 "uuid": "0e548a84-b5b1-476b-9816-e375d69eba69", 00:17:26.310 "is_configured": true, 00:17:26.310 "data_offset": 2048, 00:17:26.310 "data_size": 63488 00:17:26.310 }, 00:17:26.310 { 00:17:26.310 "name": "BaseBdev3", 00:17:26.310 "uuid": "b560dca8-6c46-4560-9d3e-4f13e86e073d", 00:17:26.310 "is_configured": true, 00:17:26.310 "data_offset": 2048, 00:17:26.310 "data_size": 63488 00:17:26.310 }, 00:17:26.310 { 00:17:26.310 "name": "BaseBdev4", 00:17:26.310 "uuid": "8e3eba95-6780-42fa-8193-7a179a0a3fb1", 00:17:26.310 "is_configured": true, 00:17:26.310 "data_offset": 2048, 00:17:26.310 "data_size": 63488 00:17:26.310 } 00:17:26.310 ] 00:17:26.310 }' 00:17:26.310 11:27:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.310 11:27:44 -- common/autotest_common.sh@10 -- # set +x 00:17:26.569 11:27:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:17:26.829 [2024-11-26 11:27:44.913028] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.829 11:27:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.088 11:27:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.088 "name": "Existed_Raid", 00:17:27.088 "uuid": "e6963ee2-43a4-4b3d-95f9-8f1efeb90f00", 00:17:27.088 "strip_size_kb": 0, 00:17:27.088 "state": "online", 00:17:27.088 "raid_level": "raid1", 00:17:27.088 "superblock": true, 00:17:27.088 "num_base_bdevs": 4, 00:17:27.088 "num_base_bdevs_discovered": 3, 00:17:27.088 "num_base_bdevs_operational": 3, 00:17:27.088 "base_bdevs_list": [ 00:17:27.088 { 00:17:27.088 "name": null, 00:17:27.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.088 "is_configured": false, 00:17:27.088 "data_offset": 2048, 00:17:27.088 "data_size": 63488 00:17:27.088 }, 00:17:27.088 { 00:17:27.088 "name": "BaseBdev2", 00:17:27.088 "uuid": "0e548a84-b5b1-476b-9816-e375d69eba69", 00:17:27.088 "is_configured": true, 00:17:27.088 "data_offset": 2048, 00:17:27.088 "data_size": 63488 00:17:27.088 }, 00:17:27.088 { 00:17:27.088 "name": "BaseBdev3", 00:17:27.088 "uuid": "b560dca8-6c46-4560-9d3e-4f13e86e073d", 00:17:27.088 "is_configured": true, 00:17:27.088 "data_offset": 2048, 00:17:27.088 "data_size": 63488 00:17:27.088 }, 00:17:27.088 { 00:17:27.088 "name": "BaseBdev4", 00:17:27.088 "uuid": "8e3eba95-6780-42fa-8193-7a179a0a3fb1", 00:17:27.088 "is_configured": true, 00:17:27.088 "data_offset": 2048, 00:17:27.088 "data_size": 63488 00:17:27.088 } 00:17:27.088 ] 00:17:27.088 }' 00:17:27.088 11:27:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.088 11:27:45 -- common/autotest_common.sh@10 -- # set +x 00:17:27.346 11:27:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:27.346 11:27:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.346 11:27:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.346 11:27:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.605 11:27:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.605 11:27:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:17:27.605 11:27:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:27.865 [2024-11-26 11:27:45.856700] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.865 11:27:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.865 11:27:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.865 11:27:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.865 11:27:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:28.124 [2024-11-26 11:27:46.319776] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.124 11:27:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:28.383 11:27:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:28.383 11:27:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.383 11:27:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:28.642 [2024-11-26 11:27:46.786988] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:28.642 [2024-11-26 11:27:46.787021] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.642 [2024-11-26 11:27:46.787100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.642 [2024-11-26 11:27:46.793801] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.642 [2024-11-26 11:27:46.793854] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:17:28.642 11:27:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.642 11:27:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.642 11:27:46 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.642 11:27:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.901 11:27:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:28.901 11:27:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:28.901 11:27:47 -- bdev/bdev_raid.sh@287 -- # killprocess 87087 00:17:28.901 11:27:47 -- common/autotest_common.sh@936 -- # '[' -z 87087 ']' 00:17:28.901 11:27:47 -- common/autotest_common.sh@940 -- # kill -0 87087 00:17:28.901 11:27:47 -- common/autotest_common.sh@941 -- # uname 00:17:28.901 11:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.901 11:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87087 00:17:28.901 killing process with pid 87087 00:17:28.901 11:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:28.901 11:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:17:28.901 11:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87087' 00:17:28.901 11:27:47 -- common/autotest_common.sh@955 -- # kill 87087 00:17:28.902 [2024-11-26 11:27:47.086198] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.902 11:27:47 -- common/autotest_common.sh@960 -- # wait 87087 00:17:28.902 [2024-11-26 11:27:47.086348] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.161 ************************************ 00:17:29.161 END TEST raid_state_function_test_sb 00:17:29.161 ************************************ 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:29.161 00:17:29.161 real 0m11.330s 00:17:29.161 user 0m20.097s 00:17:29.161 sys 0m1.671s 00:17:29.161 11:27:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:29.161 11:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:29.161 11:27:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:29.161 11:27:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:29.161 11:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:29.161 ************************************ 00:17:29.161 START TEST raid_superblock_test 00:17:29.161 ************************************ 00:17:29.161 11:27:47 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:29.161 11:27:47 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@357 -- # raid_pid=87474 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@358 -- # waitforlisten 87474 /var/tmp/spdk-raid.sock 00:17:29.162 11:27:47 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:29.162 11:27:47 -- common/autotest_common.sh@829 -- # '[' -z 87474 ']' 00:17:29.162 11:27:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.162 11:27:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:29.162 11:27:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
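The harness lines above start a fresh bdev_svc stub with RAID debug logging (-L bdev_raid) on a private RPC socket, then block in waitforlisten until the app accepts RPCs. A minimal sketch of that startup handshake, using the paths and flags shown in the log (the polling loop is an illustration of what waitforlisten amounts to, not the exact autotest_common.sh body; rpc_get_methods is just a cheap query to probe the socket):

    # Launch the stub app on its own UNIX-domain RPC socket with RAID debug logs.
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Poll until the RPC server answers on the socket.
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done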
00:17:29.162 11:27:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.162 11:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:29.162 [2024-11-26 11:27:47.398538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:29.162 [2024-11-26 11:27:47.398716] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87474 ] 00:17:29.422 [2024-11-26 11:27:47.565226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.422 [2024-11-26 11:27:47.599196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.422 [2024-11-26 11:27:47.630259] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.359 11:27:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.359 11:27:48 -- common/autotest_common.sh@862 -- # return 0 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:30.359 malloc1 00:17:30.359 11:27:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.618 [2024-11-26 11:27:48.712113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.618 [2024-11-26 11:27:48.712220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.618 [2024-11-26 11:27:48.712257] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:17:30.619 [2024-11-26 11:27:48.712284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.619 [2024-11-26 11:27:48.714815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.619 [2024-11-26 11:27:48.714870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.619 pt1 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.619 11:27:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:30.878 malloc2 00:17:30.878 11:27:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.878 [2024-11-26 11:27:49.111269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.878 [2024-11-26 11:27:49.111371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.878 [2024-11-26 11:27:49.111403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:17:30.878 [2024-11-26 11:27:49.111416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.878 [2024-11-26 11:27:49.113721] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.878 [2024-11-26 11:27:49.113775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.878 pt2 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:31.170 malloc3 00:17:31.170 11:27:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.443 [2024-11-26 11:27:49.522930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.443 [2024-11-26 11:27:49.523061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.443 [2024-11-26 11:27:49.523094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:17:31.443 [2024-11-26 11:27:49.523108] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.443 [2024-11-26 11:27:49.525597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.443 [2024-11-26 11:27:49.525652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.443 pt3 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.443 11:27:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:31.702 malloc4 00:17:31.702 11:27:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:31.702 [2024-11-26 11:27:49.933404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:31.702 [2024-11-26 11:27:49.933502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.702 [2024-11-26 11:27:49.933539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:31.702 [2024-11-26 11:27:49.933552] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.702 [2024-11-26 11:27:49.936392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.702 [2024-11-26 11:27:49.936446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:31.702 pt4 00:17:31.961 11:27:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.961 11:27:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.961 11:27:49 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:31.961 [2024-11-26 11:27:50.189580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.961 [2024-11-26 11:27:50.191837] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.961 [2024-11-26 11:27:50.191945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.961 [2024-11-26 11:27:50.192035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:31.961 [2024-11-26 11:27:50.192336] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:17:31.961 [2024-11-26 11:27:50.192376] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:31.961 [2024-11-26 11:27:50.192507] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:31.961 [2024-11-26 11:27:50.192934] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:17:31.961 [2024-11-26 11:27:50.192965] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:17:31.961 [2024-11-26 11:27:50.193131] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.220 "name": "raid_bdev1", 00:17:32.220 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:32.220 "strip_size_kb": 0, 00:17:32.220 "state": "online", 00:17:32.220 "raid_level": "raid1", 00:17:32.220 "superblock": true, 00:17:32.220 "num_base_bdevs": 4, 00:17:32.220 "num_base_bdevs_discovered": 4, 00:17:32.220 "num_base_bdevs_operational": 4, 00:17:32.220 "base_bdevs_list": [ 00:17:32.220 { 00:17:32.220 "name": "pt1", 00:17:32.220 "uuid": "2384c04c-6f6b-5f81-9113-72c7ff50cab8", 00:17:32.220 "is_configured": true, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 }, 00:17:32.220 { 00:17:32.220 "name": "pt2", 00:17:32.220 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:32.220 "is_configured": true, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 }, 00:17:32.220 { 00:17:32.220 "name": "pt3", 00:17:32.220 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:32.220 "is_configured": true, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 }, 00:17:32.220 { 00:17:32.220 "name": "pt4", 00:17:32.220 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:32.220 "is_configured": true, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 } 00:17:32.220 ] 00:17:32.220 }' 00:17:32.220 11:27:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.220 11:27:50 -- common/autotest_common.sh@10 -- # set +x 00:17:32.789 11:27:50 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:32.789 11:27:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:32.789 [2024-11-26 11:27:50.970047] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.789 11:27:50 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4c314958-317e-4537-b83a-2b36f43f4d4c 00:17:32.789 11:27:50 -- bdev/bdev_raid.sh@380 -- # '[' -z 4c314958-317e-4537-b83a-2b36f43f4d4c ']' 00:17:32.789 11:27:50 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.048 [2024-11-26 11:27:51.165759] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.048 [2024-11-26 11:27:51.165797] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.048 [2024-11-26 11:27:51.165979] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.048 [2024-11-26 11:27:51.166109] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.048 [2024-11-26 11:27:51.166141] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:17:33.048 11:27:51 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:33.048 11:27:51 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.306 11:27:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:33.306 11:27:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:33.306 11:27:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.306 11:27:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
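Each verify_raid_bdev_state call traced above reduces to one RPC plus a jq filter: dump every raid bdev, select the entry by name, and compare its fields against the expected values. A standalone sketch of the same check, reusing the socket path, RPC, and field names from the log (the exact set of fields compared here is illustrative):

    sock=/var/tmp/spdk-raid.sock
    info=$(./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # Fail if any field deviates from the expected online raid1 layout.
    [ "$(jq -r .state <<<"$info")" = online ] &&
    [ "$(jq -r .raid_level <<<"$info")" = raid1 ] &&
    [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 4 ]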
00:17:33.565 11:27:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.565 11:27:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:33.824 11:27:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.824 11:27:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.082 11:27:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.082 11:27:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:34.341 11:27:52 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:34.341 11:27:52 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.341 11:27:52 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:34.341 11:27:52 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:34.341 11:27:52 -- common/autotest_common.sh@650 -- # local es=0 00:17:34.341 11:27:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:34.341 11:27:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.341 11:27:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.341 11:27:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.600 11:27:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.600 11:27:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.600 11:27:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.600 11:27:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.600 11:27:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.600 11:27:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:34.600 [2024-11-26 11:27:52.762218] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.600 [2024-11-26 11:27:52.764390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.600 [2024-11-26 11:27:52.764447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:34.600 [2024-11-26 11:27:52.764489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:34.600 [2024-11-26 11:27:52.764544] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:34.600 [2024-11-26 11:27:52.764621] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:34.600 [2024-11-26 11:27:52.764649] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:34.600 [2024-11-26 11:27:52.764672] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:34.600 [2024-11-26 11:27:52.764691] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.600 [2024-11-26 11:27:52.764702] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:17:34.600 request: 00:17:34.600 { 00:17:34.600 "name": "raid_bdev1", 00:17:34.600 "raid_level": "raid1", 00:17:34.600 "base_bdevs": [ 00:17:34.600 "malloc1", 00:17:34.600 "malloc2", 00:17:34.600 "malloc3", 00:17:34.600 "malloc4" 00:17:34.600 ], 00:17:34.600 "superblock": false, 00:17:34.600 "method": "bdev_raid_create", 00:17:34.600 "req_id": 1 00:17:34.600 } 00:17:34.600 Got JSON-RPC error response 00:17:34.600 response: 00:17:34.600 { 00:17:34.600 "code": -17, 00:17:34.600 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.600 } 00:17:34.600 11:27:52 -- common/autotest_common.sh@653 -- # es=1 00:17:34.600 11:27:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.600 11:27:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.600 11:27:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.600 11:27:52 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.600 11:27:52 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:34.859 11:27:52 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:34.859 11:27:52 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:34.859 11:27:52 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.118 [2024-11-26 11:27:53.178359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.118 [2024-11-26 11:27:53.178425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.118 [2024-11-26 11:27:53.178456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:17:35.118 [2024-11-26 11:27:53.178468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.118 [2024-11-26 11:27:53.181089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.118 [2024-11-26 11:27:53.181121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.118 [2024-11-26 11:27:53.181205] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:35.118 [2024-11-26 11:27:53.181302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.118 pt1 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.118 11:27:53 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.118 11:27:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.377 11:27:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.377 "name": "raid_bdev1", 00:17:35.377 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:35.377 "strip_size_kb": 0, 00:17:35.378 "state": "configuring", 00:17:35.378 "raid_level": "raid1", 00:17:35.378 "superblock": true, 00:17:35.378 "num_base_bdevs": 4, 00:17:35.378 "num_base_bdevs_discovered": 1, 00:17:35.378 "num_base_bdevs_operational": 4, 00:17:35.378 "base_bdevs_list": [ 00:17:35.378 { 00:17:35.378 "name": "pt1", 00:17:35.378 "uuid": "2384c04c-6f6b-5f81-9113-72c7ff50cab8", 00:17:35.378 "is_configured": true, 00:17:35.378 "data_offset": 2048, 00:17:35.378 "data_size": 63488 00:17:35.378 }, 00:17:35.378 { 00:17:35.378 "name": null, 00:17:35.378 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:35.378 "is_configured": false, 00:17:35.378 "data_offset": 2048, 00:17:35.378 "data_size": 63488 00:17:35.378 }, 00:17:35.378 { 00:17:35.378 "name": null, 00:17:35.378 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:35.378 "is_configured": false, 00:17:35.378 "data_offset": 2048, 00:17:35.378 "data_size": 63488 00:17:35.378 }, 00:17:35.378 { 00:17:35.378 "name": null, 00:17:35.378 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:35.378 "is_configured": false, 00:17:35.378 "data_offset": 2048, 00:17:35.378 "data_size": 63488 00:17:35.378 } 00:17:35.378 ] 00:17:35.378 }' 00:17:35.378 11:27:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.378 11:27:53 -- common/autotest_common.sh@10 -- # set +x 00:17:35.637 11:27:53 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:35.637 11:27:53 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.896 [2024-11-26 11:27:54.002558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.896 [2024-11-26 11:27:54.002664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.896 [2024-11-26 11:27:54.002703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:17:35.896 [2024-11-26 11:27:54.002718] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.896 [2024-11-26 11:27:54.003203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.896 [2024-11-26 11:27:54.003297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.896 [2024-11-26 11:27:54.003399] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:35.896 [2024-11-26 11:27:54.003428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.896 pt2 00:17:35.896 11:27:54 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:36.155 [2024-11-26 11:27:54.250690] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.155 11:27:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.414 11:27:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.414 "name": "raid_bdev1", 00:17:36.414 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:36.414 "strip_size_kb": 0, 00:17:36.414 "state": "configuring", 00:17:36.414 "raid_level": "raid1", 00:17:36.414 "superblock": true, 00:17:36.414 "num_base_bdevs": 4, 00:17:36.414 "num_base_bdevs_discovered": 1, 00:17:36.414 "num_base_bdevs_operational": 4, 00:17:36.414 "base_bdevs_list": [ 00:17:36.414 { 00:17:36.414 "name": "pt1", 00:17:36.414 "uuid": "2384c04c-6f6b-5f81-9113-72c7ff50cab8", 00:17:36.414 "is_configured": true, 00:17:36.414 "data_offset": 2048, 00:17:36.414 "data_size": 63488 00:17:36.414 }, 00:17:36.414 { 00:17:36.414 "name": null, 00:17:36.414 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:36.414 "is_configured": false, 00:17:36.414 "data_offset": 2048, 00:17:36.414 "data_size": 63488 00:17:36.414 }, 00:17:36.414 { 00:17:36.414 "name": null, 00:17:36.414 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:36.414 "is_configured": false, 00:17:36.414 "data_offset": 2048, 00:17:36.414 "data_size": 63488 00:17:36.414 }, 00:17:36.414 { 00:17:36.414 "name": null, 00:17:36.414 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:36.414 "is_configured": false, 00:17:36.414 "data_offset": 2048, 00:17:36.414 "data_size": 63488 00:17:36.414 } 00:17:36.414 ] 00:17:36.414 }' 00:17:36.414 11:27:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.414 11:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:36.673 11:27:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:36.673 11:27:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:36.673 11:27:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.932 [2024-11-26 11:27:55.074894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.932 [2024-11-26 11:27:55.075039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.932 [2024-11-26 11:27:55.075068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:17:36.932 [2024-11-26 11:27:55.075085] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.932 [2024-11-26 11:27:55.075680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.932 [2024-11-26 11:27:55.075707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.932 [2024-11-26 11:27:55.075779] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:36.932 [2024-11-26 
11:27:55.075808] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.932 pt2 00:17:36.932 11:27:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:36.932 11:27:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:36.932 11:27:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.191 [2024-11-26 11:27:55.291038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.191 [2024-11-26 11:27:55.291341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.191 [2024-11-26 11:27:55.291380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:17:37.191 [2024-11-26 11:27:55.291398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.191 [2024-11-26 11:27:55.291862] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.191 [2024-11-26 11:27:55.291890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.191 [2024-11-26 11:27:55.292007] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:37.191 [2024-11-26 11:27:55.292064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.191 pt3 00:17:37.191 11:27:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.191 11:27:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.191 11:27:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:37.450 [2024-11-26 11:27:55.547073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:37.450 [2024-11-26 11:27:55.547295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.450 [2024-11-26 11:27:55.547364] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:37.450 [2024-11-26 11:27:55.547520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.450 [2024-11-26 11:27:55.548049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.450 [2024-11-26 11:27:55.548211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:37.450 [2024-11-26 11:27:55.548429] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:37.450 [2024-11-26 11:27:55.548575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:37.450 [2024-11-26 11:27:55.548832] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:17:37.450 [2024-11-26 11:27:55.549020] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:37.450 [2024-11-26 11:27:55.549245] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:37.450 [2024-11-26 11:27:55.549736] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:17:37.450 [2024-11-26 11:27:55.549856] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:17:37.450 [2024-11-26 11:27:55.550193] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.450 pt4 
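Because raid_bdev1 was created with -s, every base bdev carries an on-disk raid superblock, so this sequence never calls bdev_raid_create a second time: re-registering each passthru bdev lets the examine path find the superblock and re-assemble raid_bdev1 on its own once the last member appears. A sketch of that re-registration loop, assuming the malloc/pt naming and UUID scheme shown in the log:

    # Re-register the members one by one; bdev examine finds the raid
    # superblock on each, and the raid assembles automatically -- no
    # explicit bdev_raid_create is needed.
    for i in 2 3 4; do
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
            -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done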
00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.450 11:27:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.709 11:27:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.709 "name": "raid_bdev1", 00:17:37.709 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:37.709 "strip_size_kb": 0, 00:17:37.709 "state": "online", 00:17:37.709 "raid_level": "raid1", 00:17:37.709 "superblock": true, 00:17:37.709 "num_base_bdevs": 4, 00:17:37.709 "num_base_bdevs_discovered": 4, 00:17:37.709 "num_base_bdevs_operational": 4, 00:17:37.709 "base_bdevs_list": [ 00:17:37.709 { 00:17:37.709 "name": "pt1", 00:17:37.709 "uuid": "2384c04c-6f6b-5f81-9113-72c7ff50cab8", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 }, 00:17:37.709 { 00:17:37.709 "name": "pt2", 00:17:37.709 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 }, 00:17:37.709 { 00:17:37.709 "name": "pt3", 00:17:37.709 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 }, 00:17:37.709 { 00:17:37.709 "name": "pt4", 00:17:37.709 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:37.709 "is_configured": true, 00:17:37.709 "data_offset": 2048, 00:17:37.709 "data_size": 63488 00:17:37.709 } 00:17:37.709 ] 00:17:37.709 }' 00:17:37.709 11:27:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.709 11:27:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.968 11:27:56 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.968 11:27:56 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:38.227 [2024-11-26 11:27:56.271527] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.227 11:27:56 -- bdev/bdev_raid.sh@430 -- # '[' 4c314958-317e-4537-b83a-2b36f43f4d4c '!=' 4c314958-317e-4537-b83a-2b36f43f4d4c ']' 00:17:38.227 11:27:56 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:38.227 11:27:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:38.227 11:27:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:38.227 11:27:56 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:38.486 [2024-11-26 11:27:56.475436] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.486 11:27:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.745 11:27:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.745 "name": "raid_bdev1", 00:17:38.745 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:38.745 "strip_size_kb": 0, 00:17:38.745 "state": "online", 00:17:38.745 "raid_level": "raid1", 00:17:38.745 "superblock": true, 00:17:38.745 "num_base_bdevs": 4, 00:17:38.745 "num_base_bdevs_discovered": 3, 00:17:38.745 "num_base_bdevs_operational": 3, 00:17:38.745 "base_bdevs_list": [ 00:17:38.745 { 00:17:38.745 "name": null, 00:17:38.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.745 "is_configured": false, 00:17:38.745 "data_offset": 2048, 00:17:38.745 "data_size": 63488 00:17:38.745 }, 00:17:38.745 { 00:17:38.745 "name": "pt2", 00:17:38.745 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:38.745 "is_configured": true, 00:17:38.745 "data_offset": 2048, 00:17:38.745 "data_size": 63488 00:17:38.745 }, 00:17:38.745 { 00:17:38.745 "name": "pt3", 00:17:38.745 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:38.745 "is_configured": true, 00:17:38.745 "data_offset": 2048, 00:17:38.745 "data_size": 63488 00:17:38.745 }, 00:17:38.745 { 00:17:38.745 "name": "pt4", 00:17:38.745 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:38.745 "is_configured": true, 00:17:38.745 "data_offset": 2048, 00:17:38.745 "data_size": 63488 00:17:38.745 } 00:17:38.745 ] 00:17:38.745 }' 00:17:38.745 11:27:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.745 11:27:56 -- common/autotest_common.sh@10 -- # set +x 00:17:39.004 11:27:57 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.004 [2024-11-26 11:27:57.215596] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.004 [2024-11-26 11:27:57.215800] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.004 [2024-11-26 11:27:57.215932] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.004 [2024-11-26 11:27:57.216027] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.004 [2024-11-26 11:27:57.216042] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:17:39.004 11:27:57 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:39.004 11:27:57 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:39.263 11:27:57 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:39.263 11:27:57 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:39.263 11:27:57 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:39.263 11:27:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:39.263 11:27:57 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:39.522 11:27:57 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:39.522 11:27:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:39.522 11:27:57 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:39.781 11:27:57 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:39.781 11:27:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:39.781 11:27:57 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:40.039 11:27:58 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:40.039 11:27:58 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:40.039 11:27:58 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:40.039 11:27:58 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:40.039 11:27:58 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:40.298 [2024-11-26 11:27:58.315859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:40.298 [2024-11-26 11:27:58.315991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.298 [2024-11-26 11:27:58.316043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:17:40.298 [2024-11-26 11:27:58.316057] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.298 [2024-11-26 11:27:58.318495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.298 [2024-11-26 11:27:58.318534] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:40.298 [2024-11-26 11:27:58.318629] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:40.298 [2024-11-26 11:27:58.318664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:40.298 pt2 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.298 11:27:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.557 11:27:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.557 "name": "raid_bdev1", 00:17:40.557 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:40.557 "strip_size_kb": 0, 00:17:40.557 "state": "configuring", 00:17:40.557 "raid_level": "raid1", 00:17:40.557 "superblock": true, 00:17:40.557 "num_base_bdevs": 4, 00:17:40.557 "num_base_bdevs_discovered": 1, 00:17:40.557 "num_base_bdevs_operational": 3, 00:17:40.557 "base_bdevs_list": [ 00:17:40.557 { 00:17:40.557 "name": null, 00:17:40.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.557 "is_configured": false, 00:17:40.557 "data_offset": 2048, 00:17:40.557 "data_size": 63488 00:17:40.557 }, 00:17:40.557 { 00:17:40.557 "name": "pt2", 00:17:40.557 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:40.557 "is_configured": true, 00:17:40.557 "data_offset": 2048, 00:17:40.557 "data_size": 63488 00:17:40.557 }, 00:17:40.557 { 00:17:40.557 "name": null, 00:17:40.557 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:40.557 "is_configured": false, 00:17:40.557 "data_offset": 2048, 00:17:40.557 "data_size": 63488 00:17:40.557 }, 00:17:40.557 { 00:17:40.557 "name": null, 00:17:40.557 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:40.557 "is_configured": false, 00:17:40.557 "data_offset": 2048, 00:17:40.557 "data_size": 63488 00:17:40.557 } 00:17:40.557 ] 00:17:40.557 }' 00:17:40.557 11:27:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.557 11:27:58 -- common/autotest_common.sh@10 -- # set +x 00:17:40.817 11:27:58 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:40.817 11:27:58 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:40.817 11:27:58 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:40.817 [2024-11-26 11:27:59.056105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:40.817 [2024-11-26 11:27:59.056174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.817 [2024-11-26 11:27:59.056209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:17:40.817 [2024-11-26 11:27:59.056224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.817 [2024-11-26 11:27:59.056731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.817 [2024-11-26 11:27:59.056762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:40.817 [2024-11-26 11:27:59.056850] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:40.817 [2024-11-26 11:27:59.056894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:41.080 pt3 00:17:41.080 11:27:59 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:41.080 11:27:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.080 11:27:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.080 11:27:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.081 "name": "raid_bdev1", 00:17:41.081 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:41.081 "strip_size_kb": 0, 00:17:41.081 "state": "configuring", 00:17:41.081 "raid_level": "raid1", 00:17:41.081 "superblock": true, 00:17:41.081 "num_base_bdevs": 4, 00:17:41.081 "num_base_bdevs_discovered": 2, 00:17:41.081 "num_base_bdevs_operational": 3, 00:17:41.081 "base_bdevs_list": [ 00:17:41.081 { 00:17:41.081 "name": null, 00:17:41.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.081 "is_configured": false, 00:17:41.081 "data_offset": 2048, 00:17:41.081 "data_size": 63488 00:17:41.081 }, 00:17:41.081 { 00:17:41.081 "name": "pt2", 00:17:41.081 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:41.081 "is_configured": true, 00:17:41.081 "data_offset": 2048, 00:17:41.081 "data_size": 63488 00:17:41.081 }, 00:17:41.081 { 00:17:41.081 "name": "pt3", 00:17:41.081 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:41.081 "is_configured": true, 00:17:41.081 "data_offset": 2048, 00:17:41.081 "data_size": 63488 00:17:41.081 }, 00:17:41.081 { 00:17:41.081 "name": null, 00:17:41.081 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:41.081 "is_configured": false, 00:17:41.081 "data_offset": 2048, 00:17:41.081 "data_size": 63488 00:17:41.081 } 00:17:41.081 ] 00:17:41.081 }' 00:17:41.081 11:27:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.081 11:27:59 -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@462 -- # i=3 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:41.649 [2024-11-26 11:27:59.760304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:41.649 [2024-11-26 11:27:59.760414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.649 [2024-11-26 11:27:59.760449] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:17:41.649 [2024-11-26 11:27:59.760463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.649 [2024-11-26 11:27:59.760913] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.649 [2024-11-26 11:27:59.760959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:41.649 [2024-11-26 11:27:59.761054] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:41.649 [2024-11-26 11:27:59.761121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:41.649 [2024-11-26 11:27:59.761262] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:17:41.649 [2024-11-26 11:27:59.761278] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
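This pass re-assembles the array without pt1 at all: raid1 mirrors every block, so the superblock examine path can bring raid_bdev1 online from just three of the four recorded members, with num_base_bdevs_discovered and num_base_bdevs_operational both at 3. A one-line sketch for asserting that degraded-but-online state (jq -e makes the shell exit code reflect the boolean result):

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -e '.[] | select(.name == "raid_bdev1")
                   | .state == "online" and .num_base_bdevs_discovered == 3'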
00:17:41.649 [2024-11-26 11:27:59.761365] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:41.649 [2024-11-26 11:27:59.761720] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:17:41.649 [2024-11-26 11:27:59.761749] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:17:41.649 [2024-11-26 11:27:59.761891] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.649 pt4 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.649 11:27:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.909 11:28:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.909 "name": "raid_bdev1", 00:17:41.909 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:41.909 "strip_size_kb": 0, 00:17:41.909 "state": "online", 00:17:41.909 "raid_level": "raid1", 00:17:41.909 "superblock": true, 00:17:41.909 "num_base_bdevs": 4, 00:17:41.909 "num_base_bdevs_discovered": 3, 00:17:41.909 "num_base_bdevs_operational": 3, 00:17:41.909 "base_bdevs_list": [ 00:17:41.910 { 00:17:41.910 "name": null, 00:17:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.910 "is_configured": false, 00:17:41.910 "data_offset": 2048, 00:17:41.910 "data_size": 63488 00:17:41.910 }, 00:17:41.910 { 00:17:41.910 "name": "pt2", 00:17:41.910 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:41.910 "is_configured": true, 00:17:41.910 "data_offset": 2048, 00:17:41.910 "data_size": 63488 00:17:41.910 }, 00:17:41.910 { 00:17:41.910 "name": "pt3", 00:17:41.910 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:41.910 "is_configured": true, 00:17:41.910 "data_offset": 2048, 00:17:41.910 "data_size": 63488 00:17:41.910 }, 00:17:41.910 { 00:17:41.910 "name": "pt4", 00:17:41.910 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:41.910 "is_configured": true, 00:17:41.910 "data_offset": 2048, 00:17:41.910 "data_size": 63488 00:17:41.910 } 00:17:41.910 ] 00:17:41.910 }' 00:17:41.910 11:28:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.910 11:28:00 -- common/autotest_common.sh@10 -- # set +x 00:17:42.169 11:28:00 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:17:42.169 11:28:00 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:42.428 [2024-11-26 11:28:00.488548] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.428 [2024-11-26 11:28:00.488581] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:17:42.428 [2024-11-26 11:28:00.488673] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.428 [2024-11-26 11:28:00.488751] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.428 [2024-11-26 11:28:00.488768] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:17:42.428 11:28:00 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:42.428 11:28:00 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:42.688 [2024-11-26 11:28:00.880619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.688 [2024-11-26 11:28:00.880716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.688 [2024-11-26 11:28:00.880743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:17:42.688 [2024-11-26 11:28:00.880759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.688 [2024-11-26 11:28:00.883198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.688 [2024-11-26 11:28:00.883271] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:42.688 [2024-11-26 11:28:00.883363] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:42.688 [2024-11-26 11:28:00.883408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.688 pt1 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.688 11:28:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.947 11:28:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.947 "name": "raid_bdev1", 00:17:42.947 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:42.947 "strip_size_kb": 0, 00:17:42.947 "state": "configuring", 00:17:42.947 "raid_level": "raid1", 00:17:42.947 "superblock": true, 00:17:42.947 "num_base_bdevs": 4, 00:17:42.947 "num_base_bdevs_discovered": 1, 00:17:42.947 "num_base_bdevs_operational": 4, 00:17:42.947 "base_bdevs_list": [ 00:17:42.947 { 00:17:42.947 "name": "pt1", 00:17:42.947 "uuid": 
"2384c04c-6f6b-5f81-9113-72c7ff50cab8", 00:17:42.947 "is_configured": true, 00:17:42.947 "data_offset": 2048, 00:17:42.947 "data_size": 63488 00:17:42.947 }, 00:17:42.947 { 00:17:42.947 "name": null, 00:17:42.947 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:42.947 "is_configured": false, 00:17:42.947 "data_offset": 2048, 00:17:42.947 "data_size": 63488 00:17:42.947 }, 00:17:42.947 { 00:17:42.947 "name": null, 00:17:42.947 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:42.947 "is_configured": false, 00:17:42.947 "data_offset": 2048, 00:17:42.947 "data_size": 63488 00:17:42.947 }, 00:17:42.947 { 00:17:42.947 "name": null, 00:17:42.947 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:42.947 "is_configured": false, 00:17:42.947 "data_offset": 2048, 00:17:42.947 "data_size": 63488 00:17:42.947 } 00:17:42.947 ] 00:17:42.947 }' 00:17:42.948 11:28:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.948 11:28:01 -- common/autotest_common.sh@10 -- # set +x 00:17:43.207 11:28:01 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:43.207 11:28:01 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:43.207 11:28:01 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:43.465 11:28:01 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:43.465 11:28:01 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:43.465 11:28:01 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:43.724 11:28:01 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:43.724 11:28:01 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:43.724 11:28:01 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:43.982 11:28:02 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:43.982 11:28:02 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:43.982 11:28:02 -- bdev/bdev_raid.sh@489 -- # i=3 00:17:43.982 11:28:02 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:44.242 [2024-11-26 11:28:02.429825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:44.242 [2024-11-26 11:28:02.430709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.242 [2024-11-26 11:28:02.430798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:17:44.242 [2024-11-26 11:28:02.430833] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.242 [2024-11-26 11:28:02.432241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.242 [2024-11-26 11:28:02.432445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:44.242 [2024-11-26 11:28:02.433006] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:44.242 [2024-11-26 11:28:02.433062] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:44.242 [2024-11-26 11:28:02.433084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.242 [2024-11-26 11:28:02.433147] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 
00:17:44.242 [2024-11-26 11:28:02.433688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:44.242 pt4 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.242 11:28:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.503 11:28:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.503 "name": "raid_bdev1", 00:17:44.503 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:44.503 "strip_size_kb": 0, 00:17:44.503 "state": "configuring", 00:17:44.503 "raid_level": "raid1", 00:17:44.503 "superblock": true, 00:17:44.503 "num_base_bdevs": 4, 00:17:44.503 "num_base_bdevs_discovered": 1, 00:17:44.504 "num_base_bdevs_operational": 3, 00:17:44.504 "base_bdevs_list": [ 00:17:44.504 { 00:17:44.504 "name": null, 00:17:44.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.504 "is_configured": false, 00:17:44.504 "data_offset": 2048, 00:17:44.504 "data_size": 63488 00:17:44.504 }, 00:17:44.504 { 00:17:44.504 "name": null, 00:17:44.504 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:44.504 "is_configured": false, 00:17:44.504 "data_offset": 2048, 00:17:44.504 "data_size": 63488 00:17:44.504 }, 00:17:44.504 { 00:17:44.504 "name": null, 00:17:44.504 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:44.504 "is_configured": false, 00:17:44.504 "data_offset": 2048, 00:17:44.504 "data_size": 63488 00:17:44.504 }, 00:17:44.504 { 00:17:44.504 "name": "pt4", 00:17:44.504 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:44.504 "is_configured": true, 00:17:44.504 "data_offset": 2048, 00:17:44.504 "data_size": 63488 00:17:44.504 } 00:17:44.504 ] 00:17:44.504 }' 00:17:44.504 11:28:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.504 11:28:02 -- common/autotest_common.sh@10 -- # set +x 00:17:44.767 11:28:02 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:44.767 11:28:02 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:44.767 11:28:02 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:45.026 [2024-11-26 11:28:03.134650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:45.026 [2024-11-26 11:28:03.134985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.026 [2024-11-26 11:28:03.135371] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:17:45.026 [2024-11-26 11:28:03.135756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.026 [2024-11-26 
11:28:03.136508] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.026 [2024-11-26 11:28:03.136643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:45.026 [2024-11-26 11:28:03.137049] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:45.026 [2024-11-26 11:28:03.137094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.026 pt2 00:17:45.026 11:28:03 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:45.026 11:28:03 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:45.026 11:28:03 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:45.286 [2024-11-26 11:28:03.338781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:45.286 [2024-11-26 11:28:03.339402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.286 [2024-11-26 11:28:03.339537] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:17:45.286 [2024-11-26 11:28:03.339926] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.286 [2024-11-26 11:28:03.340624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.286 [2024-11-26 11:28:03.340657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:45.286 [2024-11-26 11:28:03.340751] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:45.286 [2024-11-26 11:28:03.340787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:45.286 [2024-11-26 11:28:03.341304] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:17:45.286 [2024-11-26 11:28:03.341407] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:45.286 [2024-11-26 11:28:03.341509] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:17:45.286 [2024-11-26 11:28:03.342045] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:17:45.286 [2024-11-26 11:28:03.342075] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:17:45.286 [2024-11-26 11:28:03.342195] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.286 pt3 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.286 11:28:03 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.286 11:28:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.546 11:28:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.546 "name": "raid_bdev1", 00:17:45.546 "uuid": "4c314958-317e-4537-b83a-2b36f43f4d4c", 00:17:45.546 "strip_size_kb": 0, 00:17:45.546 "state": "online", 00:17:45.546 "raid_level": "raid1", 00:17:45.546 "superblock": true, 00:17:45.546 "num_base_bdevs": 4, 00:17:45.546 "num_base_bdevs_discovered": 3, 00:17:45.546 "num_base_bdevs_operational": 3, 00:17:45.546 "base_bdevs_list": [ 00:17:45.546 { 00:17:45.546 "name": null, 00:17:45.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.546 "is_configured": false, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 }, 00:17:45.546 { 00:17:45.546 "name": "pt2", 00:17:45.546 "uuid": "bf27f39e-b905-5186-bbdd-966c35fcdbea", 00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 }, 00:17:45.546 { 00:17:45.546 "name": "pt3", 00:17:45.546 "uuid": "937dbbad-8271-53a4-acd6-92120ca5e247", 00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 }, 00:17:45.546 { 00:17:45.546 "name": "pt4", 00:17:45.546 "uuid": "848aed2a-584d-574d-adf0-4786e60d7db7", 00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 } 00:17:45.546 ] 00:17:45.546 }' 00:17:45.546 11:28:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.546 11:28:03 -- common/autotest_common.sh@10 -- # set +x 00:17:45.805 11:28:03 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:45.805 11:28:03 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:46.064 [2024-11-26 11:28:04.123530] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.064 11:28:04 -- bdev/bdev_raid.sh@506 -- # '[' 4c314958-317e-4537-b83a-2b36f43f4d4c '!=' 4c314958-317e-4537-b83a-2b36f43f4d4c ']' 00:17:46.064 11:28:04 -- bdev/bdev_raid.sh@511 -- # killprocess 87474 00:17:46.064 11:28:04 -- common/autotest_common.sh@936 -- # '[' -z 87474 ']' 00:17:46.064 11:28:04 -- common/autotest_common.sh@940 -- # kill -0 87474 00:17:46.064 11:28:04 -- common/autotest_common.sh@941 -- # uname 00:17:46.064 11:28:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:46.064 11:28:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87474 00:17:46.064 killing process with pid 87474 00:17:46.064 11:28:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:46.064 11:28:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:46.064 11:28:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87474' 00:17:46.064 11:28:04 -- common/autotest_common.sh@955 -- # kill 87474 00:17:46.064 11:28:04 -- common/autotest_common.sh@960 -- # wait 87474 00:17:46.064 [2024-11-26 11:28:04.176508] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.064 [2024-11-26 11:28:04.176611] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.064 [2024-11-26 11:28:04.177066] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.064 [2024-11-26 11:28:04.177104] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:17:46.064 [2024-11-26 11:28:04.209491] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.323 ************************************ 00:17:46.323 END TEST raid_superblock_test 00:17:46.323 ************************************ 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:46.323 00:17:46.323 real 0m17.058s 00:17:46.323 user 0m30.517s 00:17:46.323 sys 0m2.664s 00:17:46.323 11:28:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:46.323 11:28:04 -- common/autotest_common.sh@10 -- # set +x 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:17:46.323 11:28:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:46.323 11:28:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.323 11:28:04 -- common/autotest_common.sh@10 -- # set +x 00:17:46.323 ************************************ 00:17:46.323 START TEST raid_rebuild_test 00:17:46.323 ************************************ 00:17:46.323 11:28:04 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=88072 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 88072 /var/tmp/spdk-raid.sock 00:17:46.323 11:28:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:46.323 11:28:04 -- common/autotest_common.sh@829 -- # '[' -z 88072 ']' 00:17:46.323 11:28:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:46.323 11:28:04 -- common/autotest_common.sh@834 
-- # local max_retries=100 00:17:46.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:46.323 11:28:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:46.323 11:28:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.323 11:28:04 -- common/autotest_common.sh@10 -- # set +x 00:17:46.323 [2024-11-26 11:28:04.517825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:46.323 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:46.323 Zero copy mechanism will not be used. 00:17:46.323 [2024-11-26 11:28:04.518091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88072 ] 00:17:46.582 [2024-11-26 11:28:04.688389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.582 [2024-11-26 11:28:04.729069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.582 [2024-11-26 11:28:04.766987] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.547 11:28:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.547 11:28:05 -- common/autotest_common.sh@862 -- # return 0 00:17:47.547 11:28:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:17:47.547 11:28:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:17:47.547 11:28:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.547 BaseBdev1 00:17:47.547 11:28:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:17:47.547 11:28:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:17:47.547 11:28:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:47.845 BaseBdev2 00:17:47.845 11:28:05 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:17:48.108 spare_malloc 00:17:48.108 11:28:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:48.367 spare_delay 00:17:48.367 11:28:06 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:48.367 [2024-11-26 11:28:06.602757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.367 [2024-11-26 11:28:06.602857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.367 [2024-11-26 11:28:06.602902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:17:48.367 [2024-11-26 11:28:06.602940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.367 [2024-11-26 11:28:06.606256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.367 [2024-11-26 11:28:06.606345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.367 spare 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:48.626 [2024-11-26 11:28:06.811168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.626 [2024-11-26 11:28:06.813333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.626 [2024-11-26 11:28:06.813447] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:17:48.626 [2024-11-26 11:28:06.813471] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:48.626 [2024-11-26 11:28:06.813641] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:17:48.626 [2024-11-26 11:28:06.814086] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:17:48.626 [2024-11-26 11:28:06.814117] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:17:48.626 [2024-11-26 11:28:06.814347] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.626 11:28:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.885 11:28:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.885 "name": "raid_bdev1", 00:17:48.885 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:17:48.885 "strip_size_kb": 0, 00:17:48.885 "state": "online", 00:17:48.885 "raid_level": "raid1", 00:17:48.885 "superblock": false, 00:17:48.885 "num_base_bdevs": 2, 00:17:48.885 "num_base_bdevs_discovered": 2, 00:17:48.885 "num_base_bdevs_operational": 2, 00:17:48.885 "base_bdevs_list": [ 00:17:48.885 { 00:17:48.885 "name": "BaseBdev1", 00:17:48.885 "uuid": "aafb1375-463f-4c09-b529-4bd0316c6dc1", 00:17:48.885 "is_configured": true, 00:17:48.885 "data_offset": 0, 00:17:48.885 "data_size": 65536 00:17:48.885 }, 00:17:48.885 { 00:17:48.885 "name": "BaseBdev2", 00:17:48.885 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:17:48.885 "is_configured": true, 00:17:48.885 "data_offset": 0, 00:17:48.885 "data_size": 65536 00:17:48.885 } 00:17:48.885 ] 00:17:48.885 }' 00:17:48.885 11:28:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.885 11:28:07 -- common/autotest_common.sh@10 -- # set +x 00:17:49.145 11:28:07 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.145 11:28:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:17:49.404 [2024-11-26 11:28:07.615608] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:49.404 11:28:07 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:17:49.404 11:28:07 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.404 11:28:07 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:49.662 11:28:07 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:17:49.662 11:28:07 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:17:49.662 11:28:07 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:17:49.662 11:28:07 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@12 -- # local i 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.662 11:28:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:49.921 [2024-11-26 11:28:08.031553] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:49.921 /dev/nbd0 00:17:49.921 11:28:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.921 11:28:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.921 11:28:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:49.921 11:28:08 -- common/autotest_common.sh@867 -- # local i 00:17:49.921 11:28:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:49.921 11:28:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:49.921 11:28:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:49.921 11:28:08 -- common/autotest_common.sh@871 -- # break 00:17:49.921 11:28:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:49.921 11:28:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:49.921 11:28:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.921 1+0 records in 00:17:49.921 1+0 records out 00:17:49.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019938 s, 20.5 MB/s 00:17:49.921 11:28:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.921 11:28:08 -- common/autotest_common.sh@884 -- # size=4096 00:17:49.921 11:28:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.921 11:28:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:49.921 11:28:08 -- common/autotest_common.sh@887 -- # return 0 00:17:49.921 11:28:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.921 11:28:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.921 11:28:08 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:17:49.921 11:28:08 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:17:49.921 11:28:08 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:55.190 65536+0 records in 00:17:55.190 65536+0 records out 00:17:55.190 33554432 bytes (34 MB, 32 MiB) copied, 5.00875 s, 6.7 MB/s 00:17:55.190 
11:28:13 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@51 -- # local i 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:55.190 [2024-11-26 11:28:13.336239] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@41 -- # break 00:17:55.190 11:28:13 -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.190 11:28:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:55.449 [2024-11-26 11:28:13.564411] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.449 11:28:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.708 11:28:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.708 "name": "raid_bdev1", 00:17:55.708 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:17:55.708 "strip_size_kb": 0, 00:17:55.708 "state": "online", 00:17:55.708 "raid_level": "raid1", 00:17:55.708 "superblock": false, 00:17:55.708 "num_base_bdevs": 2, 00:17:55.708 "num_base_bdevs_discovered": 1, 00:17:55.708 "num_base_bdevs_operational": 1, 00:17:55.708 "base_bdevs_list": [ 00:17:55.708 { 00:17:55.708 "name": null, 00:17:55.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.708 "is_configured": false, 00:17:55.708 "data_offset": 0, 00:17:55.708 "data_size": 65536 00:17:55.708 }, 00:17:55.708 { 00:17:55.708 "name": "BaseBdev2", 00:17:55.708 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:17:55.708 "is_configured": true, 00:17:55.708 "data_offset": 0, 00:17:55.708 "data_size": 65536 00:17:55.708 } 00:17:55.708 ] 00:17:55.708 }' 00:17:55.708 11:28:13 -- bdev/bdev_raid.sh@129 -- 
# xtrace_disable 00:17:55.708 11:28:13 -- common/autotest_common.sh@10 -- # set +x 00:17:55.967 11:28:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.227 [2024-11-26 11:28:14.296661] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:17:56.227 [2024-11-26 11:28:14.296727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.227 [2024-11-26 11:28:14.299718] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09480 00:17:56.227 [2024-11-26 11:28:14.301814] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.227 11:28:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.162 11:28:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.421 11:28:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:17:57.421 "name": "raid_bdev1", 00:17:57.421 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:17:57.421 "strip_size_kb": 0, 00:17:57.421 "state": "online", 00:17:57.421 "raid_level": "raid1", 00:17:57.421 "superblock": false, 00:17:57.421 "num_base_bdevs": 2, 00:17:57.421 "num_base_bdevs_discovered": 2, 00:17:57.421 "num_base_bdevs_operational": 2, 00:17:57.421 "process": { 00:17:57.421 "type": "rebuild", 00:17:57.421 "target": "spare", 00:17:57.421 "progress": { 00:17:57.421 "blocks": 24576, 00:17:57.421 "percent": 37 00:17:57.421 } 00:17:57.421 }, 00:17:57.421 "base_bdevs_list": [ 00:17:57.421 { 00:17:57.421 "name": "spare", 00:17:57.421 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:17:57.421 "is_configured": true, 00:17:57.421 "data_offset": 0, 00:17:57.421 "data_size": 65536 00:17:57.421 }, 00:17:57.421 { 00:17:57.421 "name": "BaseBdev2", 00:17:57.421 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:17:57.421 "is_configured": true, 00:17:57.421 "data_offset": 0, 00:17:57.421 "data_size": 65536 00:17:57.421 } 00:17:57.421 ] 00:17:57.421 }' 00:17:57.421 11:28:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:17:57.421 11:28:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.421 11:28:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:17:57.421 11:28:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.421 11:28:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:57.680 [2024-11-26 11:28:15.823023] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.680 [2024-11-26 11:28:15.910257] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.680 [2024-11-26 11:28:15.910381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.938 11:28:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.938 11:28:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.938 "name": "raid_bdev1", 00:17:57.938 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:17:57.938 "strip_size_kb": 0, 00:17:57.938 "state": "online", 00:17:57.938 "raid_level": "raid1", 00:17:57.938 "superblock": false, 00:17:57.938 "num_base_bdevs": 2, 00:17:57.938 "num_base_bdevs_discovered": 1, 00:17:57.938 "num_base_bdevs_operational": 1, 00:17:57.938 "base_bdevs_list": [ 00:17:57.938 { 00:17:57.938 "name": null, 00:17:57.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.938 "is_configured": false, 00:17:57.938 "data_offset": 0, 00:17:57.938 "data_size": 65536 00:17:57.938 }, 00:17:57.938 { 00:17:57.938 "name": "BaseBdev2", 00:17:57.938 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:17:57.938 "is_configured": true, 00:17:57.938 "data_offset": 0, 00:17:57.938 "data_size": 65536 00:17:57.938 } 00:17:57.938 ] 00:17:57.938 }' 00:17:57.938 11:28:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.938 11:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:17:58.504 "name": "raid_bdev1", 00:17:58.504 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:17:58.504 "strip_size_kb": 0, 00:17:58.504 "state": "online", 00:17:58.504 "raid_level": "raid1", 00:17:58.504 "superblock": false, 00:17:58.504 "num_base_bdevs": 2, 00:17:58.504 "num_base_bdevs_discovered": 1, 00:17:58.504 "num_base_bdevs_operational": 1, 00:17:58.504 "base_bdevs_list": [ 00:17:58.504 { 00:17:58.504 "name": null, 00:17:58.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.504 "is_configured": false, 00:17:58.504 "data_offset": 0, 00:17:58.504 "data_size": 65536 00:17:58.504 }, 00:17:58.504 { 00:17:58.504 "name": "BaseBdev2", 00:17:58.504 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:17:58.504 "is_configured": true, 00:17:58.504 "data_offset": 0, 00:17:58.504 "data_size": 65536 
00:17:58.504 } 00:17:58.504 ] 00:17:58.504 }' 00:17:58.504 11:28:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:17:58.762 11:28:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:58.762 11:28:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:17:58.762 11:28:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:17:58.762 11:28:16 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.762 [2024-11-26 11:28:16.938246] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:17:58.762 [2024-11-26 11:28:16.938324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.762 [2024-11-26 11:28:16.941269] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09550 00:17:58.762 [2024-11-26 11:28:16.943275] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.762 11:28:16 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.136 11:28:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:00.136 "name": "raid_bdev1", 00:18:00.136 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:18:00.136 "strip_size_kb": 0, 00:18:00.136 "state": "online", 00:18:00.136 "raid_level": "raid1", 00:18:00.136 "superblock": false, 00:18:00.136 "num_base_bdevs": 2, 00:18:00.136 "num_base_bdevs_discovered": 2, 00:18:00.136 "num_base_bdevs_operational": 2, 00:18:00.136 "process": { 00:18:00.136 "type": "rebuild", 00:18:00.136 "target": "spare", 00:18:00.136 "progress": { 00:18:00.136 "blocks": 24576, 00:18:00.136 "percent": 37 00:18:00.136 } 00:18:00.136 }, 00:18:00.136 "base_bdevs_list": [ 00:18:00.136 { 00:18:00.136 "name": "spare", 00:18:00.136 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:18:00.136 "is_configured": true, 00:18:00.136 "data_offset": 0, 00:18:00.136 "data_size": 65536 00:18:00.136 }, 00:18:00.136 { 00:18:00.136 "name": "BaseBdev2", 00:18:00.136 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:18:00.136 "is_configured": true, 00:18:00.136 "data_offset": 0, 00:18:00.136 "data_size": 65536 00:18:00.136 } 00:18:00.136 ] 00:18:00.136 }' 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:18:00.136 11:28:18 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:00.137 11:28:18 -- 
bdev/bdev_raid.sh@657 -- # local timeout=319 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.137 11:28:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.396 11:28:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:00.396 "name": "raid_bdev1", 00:18:00.396 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:18:00.396 "strip_size_kb": 0, 00:18:00.396 "state": "online", 00:18:00.396 "raid_level": "raid1", 00:18:00.396 "superblock": false, 00:18:00.396 "num_base_bdevs": 2, 00:18:00.396 "num_base_bdevs_discovered": 2, 00:18:00.396 "num_base_bdevs_operational": 2, 00:18:00.396 "process": { 00:18:00.396 "type": "rebuild", 00:18:00.396 "target": "spare", 00:18:00.396 "progress": { 00:18:00.396 "blocks": 30720, 00:18:00.396 "percent": 46 00:18:00.396 } 00:18:00.396 }, 00:18:00.396 "base_bdevs_list": [ 00:18:00.396 { 00:18:00.396 "name": "spare", 00:18:00.396 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:18:00.396 "is_configured": true, 00:18:00.396 "data_offset": 0, 00:18:00.396 "data_size": 65536 00:18:00.396 }, 00:18:00.396 { 00:18:00.396 "name": "BaseBdev2", 00:18:00.396 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:18:00.396 "is_configured": true, 00:18:00.396 "data_offset": 0, 00:18:00.396 "data_size": 65536 00:18:00.396 } 00:18:00.396 ] 00:18:00.396 }' 00:18:00.396 11:28:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:00.396 11:28:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.396 11:28:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:00.396 11:28:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.396 11:28:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.330 11:28:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.588 11:28:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:01.588 "name": "raid_bdev1", 00:18:01.588 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:18:01.588 "strip_size_kb": 0, 00:18:01.588 "state": "online", 00:18:01.588 "raid_level": "raid1", 00:18:01.588 "superblock": false, 00:18:01.588 "num_base_bdevs": 2, 00:18:01.588 "num_base_bdevs_discovered": 2, 00:18:01.588 "num_base_bdevs_operational": 2, 00:18:01.588 "process": { 00:18:01.588 "type": "rebuild", 00:18:01.588 "target": "spare", 
00:18:01.588 "progress": { 00:18:01.588 "blocks": 57344, 00:18:01.588 "percent": 87 00:18:01.588 } 00:18:01.588 }, 00:18:01.588 "base_bdevs_list": [ 00:18:01.588 { 00:18:01.588 "name": "spare", 00:18:01.588 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:18:01.588 "is_configured": true, 00:18:01.588 "data_offset": 0, 00:18:01.588 "data_size": 65536 00:18:01.588 }, 00:18:01.588 { 00:18:01.588 "name": "BaseBdev2", 00:18:01.588 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:18:01.588 "is_configured": true, 00:18:01.588 "data_offset": 0, 00:18:01.588 "data_size": 65536 00:18:01.588 } 00:18:01.588 ] 00:18:01.588 }' 00:18:01.588 11:28:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:01.588 11:28:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.588 11:28:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:01.588 11:28:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.588 11:28:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:02.183 [2024-11-26 11:28:20.157810] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:02.183 [2024-11-26 11:28:20.157941] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:02.183 [2024-11-26 11:28:20.158001] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.751 11:28:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:03.011 "name": "raid_bdev1", 00:18:03.011 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:18:03.011 "strip_size_kb": 0, 00:18:03.011 "state": "online", 00:18:03.011 "raid_level": "raid1", 00:18:03.011 "superblock": false, 00:18:03.011 "num_base_bdevs": 2, 00:18:03.011 "num_base_bdevs_discovered": 2, 00:18:03.011 "num_base_bdevs_operational": 2, 00:18:03.011 "base_bdevs_list": [ 00:18:03.011 { 00:18:03.011 "name": "spare", 00:18:03.011 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:18:03.011 "is_configured": true, 00:18:03.011 "data_offset": 0, 00:18:03.011 "data_size": 65536 00:18:03.011 }, 00:18:03.011 { 00:18:03.011 "name": "BaseBdev2", 00:18:03.011 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:18:03.011 "is_configured": true, 00:18:03.011 "data_offset": 0, 00:18:03.011 "data_size": 65536 00:18:03.011 } 00:18:03.011 ] 00:18:03.011 }' 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@660 -- # break 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 
00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.011 11:28:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:03.270 "name": "raid_bdev1", 00:18:03.270 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:18:03.270 "strip_size_kb": 0, 00:18:03.270 "state": "online", 00:18:03.270 "raid_level": "raid1", 00:18:03.270 "superblock": false, 00:18:03.270 "num_base_bdevs": 2, 00:18:03.270 "num_base_bdevs_discovered": 2, 00:18:03.270 "num_base_bdevs_operational": 2, 00:18:03.270 "base_bdevs_list": [ 00:18:03.270 { 00:18:03.270 "name": "spare", 00:18:03.270 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:18:03.270 "is_configured": true, 00:18:03.270 "data_offset": 0, 00:18:03.270 "data_size": 65536 00:18:03.270 }, 00:18:03.270 { 00:18:03.270 "name": "BaseBdev2", 00:18:03.270 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:18:03.270 "is_configured": true, 00:18:03.270 "data_offset": 0, 00:18:03.270 "data_size": 65536 00:18:03.270 } 00:18:03.270 ] 00:18:03.270 }' 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.270 11:28:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.529 11:28:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.529 "name": "raid_bdev1", 00:18:03.529 "uuid": "c2c2852e-1419-4480-89fa-85b4575b2387", 00:18:03.529 "strip_size_kb": 0, 00:18:03.529 "state": "online", 00:18:03.529 "raid_level": "raid1", 00:18:03.529 "superblock": false, 00:18:03.529 "num_base_bdevs": 2, 00:18:03.529 "num_base_bdevs_discovered": 2, 00:18:03.529 "num_base_bdevs_operational": 2, 00:18:03.529 "base_bdevs_list": [ 00:18:03.529 { 00:18:03.529 "name": "spare", 00:18:03.529 "uuid": "583dbdd5-caa0-5abf-9b31-e2cb5482e1aa", 00:18:03.529 "is_configured": true, 00:18:03.529 "data_offset": 0, 00:18:03.529 "data_size": 65536 00:18:03.529 }, 00:18:03.529 { 00:18:03.529 "name": 
"BaseBdev2", 00:18:03.529 "uuid": "dc7f342f-bdb5-4b4a-9ce0-afacc2100312", 00:18:03.529 "is_configured": true, 00:18:03.529 "data_offset": 0, 00:18:03.529 "data_size": 65536 00:18:03.529 } 00:18:03.529 ] 00:18:03.529 }' 00:18:03.530 11:28:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.530 11:28:21 -- common/autotest_common.sh@10 -- # set +x 00:18:03.788 11:28:21 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:04.048 [2024-11-26 11:28:22.126436] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.048 [2024-11-26 11:28:22.126476] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.048 [2024-11-26 11:28:22.126579] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.048 [2024-11-26 11:28:22.126668] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.048 [2024-11-26 11:28:22.126684] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:18:04.048 11:28:22 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.048 11:28:22 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:04.307 11:28:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:04.307 11:28:22 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:18:04.307 11:28:22 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@12 -- # local i 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.307 11:28:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:04.566 /dev/nbd0 00:18:04.566 11:28:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.566 11:28:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.566 11:28:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:04.566 11:28:22 -- common/autotest_common.sh@867 -- # local i 00:18:04.566 11:28:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:04.566 11:28:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:04.566 11:28:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:04.566 11:28:22 -- common/autotest_common.sh@871 -- # break 00:18:04.566 11:28:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:04.566 11:28:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:04.566 11:28:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.566 1+0 records in 00:18:04.566 1+0 records out 00:18:04.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195535 s, 20.9 MB/s 00:18:04.566 11:28:22 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.566 11:28:22 -- common/autotest_common.sh@884 -- # size=4096 00:18:04.566 11:28:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.566 11:28:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:04.566 11:28:22 -- common/autotest_common.sh@887 -- # return 0 00:18:04.566 11:28:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.566 11:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.566 11:28:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:04.825 /dev/nbd1 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:04.825 11:28:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:04.825 11:28:22 -- common/autotest_common.sh@867 -- # local i 00:18:04.825 11:28:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:04.825 11:28:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:04.825 11:28:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:04.825 11:28:22 -- common/autotest_common.sh@871 -- # break 00:18:04.825 11:28:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:04.825 11:28:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:04.825 11:28:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.825 1+0 records in 00:18:04.825 1+0 records out 00:18:04.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517199 s, 7.9 MB/s 00:18:04.825 11:28:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.825 11:28:22 -- common/autotest_common.sh@884 -- # size=4096 00:18:04.825 11:28:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.825 11:28:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:04.825 11:28:22 -- common/autotest_common.sh@887 -- # return 0 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.825 11:28:22 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:04.825 11:28:22 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@51 -- # local i 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.825 11:28:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@41 -- # break 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@45 -- # 
return 0 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.085 11:28:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@41 -- # break 00:18:05.344 11:28:23 -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.344 11:28:23 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:18:05.344 11:28:23 -- bdev/bdev_raid.sh@709 -- # killprocess 88072 00:18:05.344 11:28:23 -- common/autotest_common.sh@936 -- # '[' -z 88072 ']' 00:18:05.344 11:28:23 -- common/autotest_common.sh@940 -- # kill -0 88072 00:18:05.344 11:28:23 -- common/autotest_common.sh@941 -- # uname 00:18:05.344 11:28:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.344 11:28:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88072 00:18:05.344 killing process with pid 88072 00:18:05.344 Received shutdown signal, test time was about 60.000000 seconds 00:18:05.344 00:18:05.344 Latency(us) 00:18:05.344 [2024-11-26T11:28:23.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.344 [2024-11-26T11:28:23.574Z] =================================================================================================================== 00:18:05.344 [2024-11-26T11:28:23.574Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.344 11:28:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:05.344 11:28:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:05.344 11:28:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88072' 00:18:05.344 11:28:23 -- common/autotest_common.sh@955 -- # kill 88072 00:18:05.344 [2024-11-26 11:28:23.443747] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.344 11:28:23 -- common/autotest_common.sh@960 -- # wait 88072 00:18:05.344 [2024-11-26 11:28:23.461466] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.603 ************************************ 00:18:05.603 END TEST raid_rebuild_test 00:18:05.603 ************************************ 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:05.603 00:18:05.603 real 0m19.195s 00:18:05.603 user 0m24.900s 00:18:05.603 sys 0m4.014s 00:18:05.603 11:28:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:05.603 11:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:18:05.603 11:28:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:05.603 11:28:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.603 11:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.603 ************************************ 00:18:05.603 START TEST raid_rebuild_test_sb 00:18:05.603 ************************************ 00:18:05.603 11:28:23 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@517 -- # local 
raid_level=raid1 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@544 -- # raid_pid=88560 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@545 -- # waitforlisten 88560 /var/tmp/spdk-raid.sock 00:18:05.603 11:28:23 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.603 11:28:23 -- common/autotest_common.sh@829 -- # '[' -z 88560 ']' 00:18:05.603 11:28:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:05.603 11:28:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:05.603 11:28:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:05.603 11:28:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.603 11:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.603 [2024-11-26 11:28:23.759653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:05.603 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.603 Zero copy mechanism will not be used. 
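A note for readers following the harness here: bdevperf is launched idle (-z) as an RPC server on the socket given with -r, and the test only proceeds once that socket answers. A minimal sketch of the launch-and-wait pattern, with paths taken from the trace; the polling RPC (rpc_get_methods) is an assumption about how waitforlisten probes the socket:

```bash
#!/usr/bin/env bash
# Sketch only; SPDK_DIR and RPC_SOCK are taken from the trace above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-raid.sock

# -z starts bdevperf with no workload until RPCs arrive; -L bdev_raid enables
# the *DEBUG* raid log lines seen throughout this run.
"$SPDK_DIR/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# waitforlisten boils down to polling the socket until an RPC succeeds.
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "bdevperf (pid $raid_pid) is listening on $RPC_SOCK"
```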
00:18:05.603 [2024-11-26 11:28:23.759871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88560 ] 00:18:05.862 [2024-11-26 11:28:23.909198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.862 [2024-11-26 11:28:23.942102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.862 [2024-11-26 11:28:23.973051] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.798 11:28:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.798 11:28:24 -- common/autotest_common.sh@862 -- # return 0 00:18:06.798 11:28:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:06.798 11:28:24 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:06.798 11:28:24 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:06.798 BaseBdev1_malloc 00:18:06.798 11:28:24 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.056 [2024-11-26 11:28:25.147358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.057 [2024-11-26 11:28:25.147481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.057 [2024-11-26 11:28:25.147514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:07.057 [2024-11-26 11:28:25.147534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.057 [2024-11-26 11:28:25.150235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.057 [2024-11-26 11:28:25.150315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.057 BaseBdev1 00:18:07.057 11:28:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:07.057 11:28:25 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:07.057 11:28:25 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:07.315 BaseBdev2_malloc 00:18:07.315 11:28:25 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:07.574 [2024-11-26 11:28:25.593800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:07.574 [2024-11-26 11:28:25.593887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.574 [2024-11-26 11:28:25.593941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:07.574 [2024-11-26 11:28:25.593958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.574 [2024-11-26 11:28:25.596569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.574 [2024-11-26 11:28:25.596623] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.574 BaseBdev2 00:18:07.574 11:28:25 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:07.832 spare_malloc 00:18:07.832 11:28:25 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:07.832 spare_delay 00:18:07.832 11:28:26 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:08.091 [2024-11-26 11:28:26.216895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.091 [2024-11-26 11:28:26.216990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.091 [2024-11-26 11:28:26.217018] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:18:08.091 [2024-11-26 11:28:26.217034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.091 [2024-11-26 11:28:26.220053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.091 [2024-11-26 11:28:26.220116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.091 spare 00:18:08.091 11:28:26 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:08.350 [2024-11-26 11:28:26.413074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.350 [2024-11-26 11:28:26.415129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.350 [2024-11-26 11:28:26.415356] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:18:08.350 [2024-11-26 11:28:26.415379] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:08.350 [2024-11-26 11:28:26.415541] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:18:08.350 [2024-11-26 11:28:26.416023] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:18:08.350 [2024-11-26 11:28:26.416041] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:18:08.350 [2024-11-26 11:28:26.416214] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.350 11:28:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.610 11:28:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.610 "name": "raid_bdev1", 00:18:08.610 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:08.610 
"strip_size_kb": 0, 00:18:08.610 "state": "online", 00:18:08.610 "raid_level": "raid1", 00:18:08.610 "superblock": true, 00:18:08.610 "num_base_bdevs": 2, 00:18:08.610 "num_base_bdevs_discovered": 2, 00:18:08.610 "num_base_bdevs_operational": 2, 00:18:08.610 "base_bdevs_list": [ 00:18:08.610 { 00:18:08.610 "name": "BaseBdev1", 00:18:08.610 "uuid": "f9c4a320-8271-57a6-8b72-2f2c8fb25b60", 00:18:08.610 "is_configured": true, 00:18:08.610 "data_offset": 2048, 00:18:08.610 "data_size": 63488 00:18:08.610 }, 00:18:08.610 { 00:18:08.610 "name": "BaseBdev2", 00:18:08.610 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:08.610 "is_configured": true, 00:18:08.610 "data_offset": 2048, 00:18:08.610 "data_size": 63488 00:18:08.610 } 00:18:08.610 ] 00:18:08.610 }' 00:18:08.610 11:28:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.610 11:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:08.869 11:28:26 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:08.869 11:28:26 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:09.128 [2024-11-26 11:28:27.173585] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.128 11:28:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:18:09.128 11:28:27 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.128 11:28:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:09.386 11:28:27 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:18:09.386 11:28:27 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:18:09.386 11:28:27 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:18:09.386 11:28:27 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@12 -- # local i 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:09.386 [2024-11-26 11:28:27.597555] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:09.386 /dev/nbd0 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:09.386 11:28:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:09.386 11:28:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:09.386 11:28:27 -- common/autotest_common.sh@867 -- # local i 00:18:09.386 11:28:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:09.386 11:28:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:09.386 11:28:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:09.644 11:28:27 -- common/autotest_common.sh@871 -- # break 00:18:09.644 11:28:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:09.644 11:28:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:09.644 11:28:27 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.644 1+0 records in 00:18:09.644 1+0 records out 00:18:09.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197537 s, 20.7 MB/s 00:18:09.644 11:28:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.644 11:28:27 -- common/autotest_common.sh@884 -- # size=4096 00:18:09.644 11:28:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.644 11:28:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:09.644 11:28:27 -- common/autotest_common.sh@887 -- # return 0 00:18:09.644 11:28:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.644 11:28:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:09.644 11:28:27 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:18:09.644 11:28:27 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:18:09.644 11:28:27 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:16.209 63488+0 records in 00:18:16.209 63488+0 records out 00:18:16.209 32505856 bytes (33 MB, 31 MiB) copied, 5.58903 s, 5.8 MB/s 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@51 -- # local i 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:16.209 [2024-11-26 11:28:33.482184] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@41 -- # break 00:18:16.209 11:28:33 -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:16.209 [2024-11-26 11:28:33.682394] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.209 11:28:33 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.209 "name": "raid_bdev1", 00:18:16.209 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:16.209 "strip_size_kb": 0, 00:18:16.209 "state": "online", 00:18:16.209 "raid_level": "raid1", 00:18:16.209 "superblock": true, 00:18:16.209 "num_base_bdevs": 2, 00:18:16.209 "num_base_bdevs_discovered": 1, 00:18:16.209 "num_base_bdevs_operational": 1, 00:18:16.209 "base_bdevs_list": [ 00:18:16.209 { 00:18:16.209 "name": null, 00:18:16.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.209 "is_configured": false, 00:18:16.209 "data_offset": 2048, 00:18:16.209 "data_size": 63488 00:18:16.209 }, 00:18:16.209 { 00:18:16.209 "name": "BaseBdev2", 00:18:16.209 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:16.209 "is_configured": true, 00:18:16.209 "data_offset": 2048, 00:18:16.209 "data_size": 63488 00:18:16.209 } 00:18:16.209 ] 00:18:16.209 }' 00:18:16.209 11:28:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.209 11:28:33 -- common/autotest_common.sh@10 -- # set +x 00:18:16.209 11:28:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.469 [2024-11-26 11:28:34.466661] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:16.469 [2024-11-26 11:28:34.466733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.469 [2024-11-26 11:28:34.469811] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2c10 00:18:16.469 [2024-11-26 11:28:34.471829] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.469 11:28:34 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.404 11:28:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.663 11:28:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:17.663 "name": "raid_bdev1", 00:18:17.663 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:17.663 "strip_size_kb": 0, 00:18:17.663 "state": "online", 00:18:17.663 "raid_level": "raid1", 00:18:17.663 "superblock": true, 00:18:17.663 "num_base_bdevs": 2, 00:18:17.663 "num_base_bdevs_discovered": 2, 00:18:17.663 "num_base_bdevs_operational": 2, 00:18:17.663 "process": { 00:18:17.663 "type": "rebuild", 00:18:17.663 "target": "spare", 00:18:17.663 "progress": { 00:18:17.663 "blocks": 24576, 00:18:17.663 "percent": 38 00:18:17.663 } 00:18:17.663 }, 00:18:17.663 "base_bdevs_list": [ 00:18:17.663 { 00:18:17.663 "name": "spare", 00:18:17.663 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:17.663 "is_configured": true, 00:18:17.663 
"data_offset": 2048, 00:18:17.663 "data_size": 63488 00:18:17.663 }, 00:18:17.663 { 00:18:17.663 "name": "BaseBdev2", 00:18:17.663 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:17.663 "is_configured": true, 00:18:17.663 "data_offset": 2048, 00:18:17.663 "data_size": 63488 00:18:17.663 } 00:18:17.663 ] 00:18:17.663 }' 00:18:17.663 11:28:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:17.663 11:28:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.663 11:28:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:17.663 11:28:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.663 11:28:35 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:17.922 [2024-11-26 11:28:35.965156] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.922 [2024-11-26 11:28:35.979619] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:17.922 [2024-11-26 11:28:35.979724] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.922 11:28:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.180 11:28:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.180 "name": "raid_bdev1", 00:18:18.180 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:18.180 "strip_size_kb": 0, 00:18:18.180 "state": "online", 00:18:18.180 "raid_level": "raid1", 00:18:18.180 "superblock": true, 00:18:18.180 "num_base_bdevs": 2, 00:18:18.180 "num_base_bdevs_discovered": 1, 00:18:18.180 "num_base_bdevs_operational": 1, 00:18:18.180 "base_bdevs_list": [ 00:18:18.180 { 00:18:18.180 "name": null, 00:18:18.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.180 "is_configured": false, 00:18:18.180 "data_offset": 2048, 00:18:18.180 "data_size": 63488 00:18:18.180 }, 00:18:18.180 { 00:18:18.180 "name": "BaseBdev2", 00:18:18.180 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:18.180 "is_configured": true, 00:18:18.180 "data_offset": 2048, 00:18:18.180 "data_size": 63488 00:18:18.180 } 00:18:18.180 ] 00:18:18.180 }' 00:18:18.180 11:28:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.180 11:28:36 -- common/autotest_common.sh@10 -- # set +x 00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.439 11:28:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.698 11:28:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:18.698 "name": "raid_bdev1", 00:18:18.698 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:18.698 "strip_size_kb": 0, 00:18:18.698 "state": "online", 00:18:18.698 "raid_level": "raid1", 00:18:18.698 "superblock": true, 00:18:18.698 "num_base_bdevs": 2, 00:18:18.698 "num_base_bdevs_discovered": 1, 00:18:18.698 "num_base_bdevs_operational": 1, 00:18:18.698 "base_bdevs_list": [ 00:18:18.698 { 00:18:18.698 "name": null, 00:18:18.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.698 "is_configured": false, 00:18:18.698 "data_offset": 2048, 00:18:18.698 "data_size": 63488 00:18:18.698 }, 00:18:18.698 { 00:18:18.698 "name": "BaseBdev2", 00:18:18.698 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:18.698 "is_configured": true, 00:18:18.698 "data_offset": 2048, 00:18:18.698 "data_size": 63488 00:18:18.698 } 00:18:18.698 ] 00:18:18.698 }' 00:18:18.698 11:28:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:18.698 11:28:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:18.698 11:28:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:18.698 11:28:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:18.698 11:28:36 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.956 [2024-11-26 11:28:36.947512] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:18.956 [2024-11-26 11:28:36.947580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.956 [2024-11-26 11:28:36.950395] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2ce0 00:18:18.957 [2024-11-26 11:28:36.952403] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.957 11:28:36 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:19.893 11:28:37 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.893 11:28:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:19.893 11:28:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:19.893 11:28:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:19.894 11:28:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:19.894 11:28:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.894 11:28:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:20.152 "name": "raid_bdev1", 00:18:20.152 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:20.152 "strip_size_kb": 0, 00:18:20.152 "state": "online", 00:18:20.152 "raid_level": "raid1", 00:18:20.152 "superblock": true, 00:18:20.152 "num_base_bdevs": 2, 00:18:20.152 "num_base_bdevs_discovered": 2, 00:18:20.152 "num_base_bdevs_operational": 2, 00:18:20.152 "process": { 00:18:20.152 "type": "rebuild", 00:18:20.152 "target": "spare", 
00:18:20.152 "progress": { 00:18:20.152 "blocks": 24576, 00:18:20.152 "percent": 38 00:18:20.152 } 00:18:20.152 }, 00:18:20.152 "base_bdevs_list": [ 00:18:20.152 { 00:18:20.152 "name": "spare", 00:18:20.152 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:20.152 "is_configured": true, 00:18:20.152 "data_offset": 2048, 00:18:20.152 "data_size": 63488 00:18:20.152 }, 00:18:20.152 { 00:18:20.152 "name": "BaseBdev2", 00:18:20.152 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:20.152 "is_configured": true, 00:18:20.152 "data_offset": 2048, 00:18:20.152 "data_size": 63488 00:18:20.152 } 00:18:20.152 ] 00:18:20.152 }' 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:18:20.152 11:28:38 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:18:20.152 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@657 -- # local timeout=339 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.153 11:28:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.411 11:28:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:20.411 "name": "raid_bdev1", 00:18:20.411 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:20.411 "strip_size_kb": 0, 00:18:20.411 "state": "online", 00:18:20.411 "raid_level": "raid1", 00:18:20.411 "superblock": true, 00:18:20.411 "num_base_bdevs": 2, 00:18:20.411 "num_base_bdevs_discovered": 2, 00:18:20.411 "num_base_bdevs_operational": 2, 00:18:20.411 "process": { 00:18:20.411 "type": "rebuild", 00:18:20.411 "target": "spare", 00:18:20.411 "progress": { 00:18:20.411 "blocks": 30720, 00:18:20.411 "percent": 48 00:18:20.411 } 00:18:20.411 }, 00:18:20.411 "base_bdevs_list": [ 00:18:20.411 { 00:18:20.411 "name": "spare", 00:18:20.411 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:20.411 "is_configured": true, 00:18:20.411 "data_offset": 2048, 00:18:20.411 "data_size": 63488 00:18:20.411 }, 00:18:20.411 { 00:18:20.411 "name": "BaseBdev2", 00:18:20.411 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:20.411 "is_configured": true, 00:18:20.411 "data_offset": 2048, 00:18:20.411 "data_size": 63488 00:18:20.411 } 00:18:20.411 ] 00:18:20.411 }' 00:18:20.411 11:28:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:20.411 11:28:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:20.411 11:28:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:20.411 11:28:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.411 11:28:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.347 11:28:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.632 11:28:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:21.632 "name": "raid_bdev1", 00:18:21.632 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:21.632 "strip_size_kb": 0, 00:18:21.632 "state": "online", 00:18:21.632 "raid_level": "raid1", 00:18:21.632 "superblock": true, 00:18:21.632 "num_base_bdevs": 2, 00:18:21.632 "num_base_bdevs_discovered": 2, 00:18:21.632 "num_base_bdevs_operational": 2, 00:18:21.632 "process": { 00:18:21.632 "type": "rebuild", 00:18:21.632 "target": "spare", 00:18:21.632 "progress": { 00:18:21.632 "blocks": 55296, 00:18:21.632 "percent": 87 00:18:21.632 } 00:18:21.632 }, 00:18:21.632 "base_bdevs_list": [ 00:18:21.632 { 00:18:21.632 "name": "spare", 00:18:21.632 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:21.632 "is_configured": true, 00:18:21.632 "data_offset": 2048, 00:18:21.632 "data_size": 63488 00:18:21.632 }, 00:18:21.632 { 00:18:21.632 "name": "BaseBdev2", 00:18:21.632 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:21.632 "is_configured": true, 00:18:21.633 "data_offset": 2048, 00:18:21.633 "data_size": 63488 00:18:21.633 } 00:18:21.633 ] 00:18:21.633 }' 00:18:21.633 11:28:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:21.633 11:28:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.633 11:28:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:21.633 11:28:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.633 11:28:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:21.891 [2024-11-26 11:28:40.066284] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:21.891 [2024-11-26 11:28:40.066379] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:21.891 [2024-11-26 11:28:40.066527] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.850 11:28:40 -- bdev/bdev_raid.sh@188 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.850 11:28:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:22.850 "name": "raid_bdev1", 00:18:22.850 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:22.850 "strip_size_kb": 0, 00:18:22.850 "state": "online", 00:18:22.850 "raid_level": "raid1", 00:18:22.850 "superblock": true, 00:18:22.850 "num_base_bdevs": 2, 00:18:22.850 "num_base_bdevs_discovered": 2, 00:18:22.850 "num_base_bdevs_operational": 2, 00:18:22.850 "base_bdevs_list": [ 00:18:22.850 { 00:18:22.850 "name": "spare", 00:18:22.851 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:22.851 "is_configured": true, 00:18:22.851 "data_offset": 2048, 00:18:22.851 "data_size": 63488 00:18:22.851 }, 00:18:22.851 { 00:18:22.851 "name": "BaseBdev2", 00:18:22.851 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:22.851 "is_configured": true, 00:18:22.851 "data_offset": 2048, 00:18:22.851 "data_size": 63488 00:18:22.851 } 00:18:22.851 ] 00:18:22.851 }' 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@660 -- # break 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.851 11:28:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:23.111 "name": "raid_bdev1", 00:18:23.111 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:23.111 "strip_size_kb": 0, 00:18:23.111 "state": "online", 00:18:23.111 "raid_level": "raid1", 00:18:23.111 "superblock": true, 00:18:23.111 "num_base_bdevs": 2, 00:18:23.111 "num_base_bdevs_discovered": 2, 00:18:23.111 "num_base_bdevs_operational": 2, 00:18:23.111 "base_bdevs_list": [ 00:18:23.111 { 00:18:23.111 "name": "spare", 00:18:23.111 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:23.111 "is_configured": true, 00:18:23.111 "data_offset": 2048, 00:18:23.111 "data_size": 63488 00:18:23.111 }, 00:18:23.111 { 00:18:23.111 "name": "BaseBdev2", 00:18:23.111 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:23.111 "is_configured": true, 00:18:23.111 "data_offset": 2048, 00:18:23.111 "data_size": 63488 00:18:23.111 } 00:18:23.111 ] 00:18:23.111 }' 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
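The `(( SECONDS < timeout ))` guard bounding this wait uses bash's built-in SECONDS counter (seconds since the shell started), which is why the trace shows timeout=339 rather than a bare 60: the allowance is added to whatever SECONDS already holds. A sketch of the same bounded wait (the 60-second allowance is an assumption):

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=$((SECONDS + 60))           # harness adds its allowance to the running counter
while ((SECONDS < timeout)); do
    ptype=$($RPC bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1").process.type // "none"')
    [[ $ptype == none ]] && break   # rebuild finished
    sleep 1
done
((SECONDS < timeout)) || echo "rebuild did not complete in time" >&2
```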
00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.111 11:28:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.368 11:28:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.368 "name": "raid_bdev1", 00:18:23.368 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:23.368 "strip_size_kb": 0, 00:18:23.368 "state": "online", 00:18:23.368 "raid_level": "raid1", 00:18:23.368 "superblock": true, 00:18:23.368 "num_base_bdevs": 2, 00:18:23.368 "num_base_bdevs_discovered": 2, 00:18:23.368 "num_base_bdevs_operational": 2, 00:18:23.368 "base_bdevs_list": [ 00:18:23.368 { 00:18:23.368 "name": "spare", 00:18:23.368 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:23.368 "is_configured": true, 00:18:23.368 "data_offset": 2048, 00:18:23.368 "data_size": 63488 00:18:23.368 }, 00:18:23.368 { 00:18:23.368 "name": "BaseBdev2", 00:18:23.368 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:23.368 "is_configured": true, 00:18:23.368 "data_offset": 2048, 00:18:23.368 "data_size": 63488 00:18:23.368 } 00:18:23.368 ] 00:18:23.368 }' 00:18:23.368 11:28:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.368 11:28:41 -- common/autotest_common.sh@10 -- # set +x 00:18:23.625 11:28:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:23.884 [2024-11-26 11:28:42.082484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.884 [2024-11-26 11:28:42.082541] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.884 [2024-11-26 11:28:42.082649] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.884 [2024-11-26 11:28:42.082740] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.884 [2024-11-26 11:28:42.082756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:18:23.884 11:28:42 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.884 11:28:42 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:24.142 11:28:42 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:24.142 11:28:42 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:18:24.142 11:28:42 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 
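Teardown is asserted rather than assumed: after bdev_raid_delete, the harness requires bdev_raid_get_bdevs to return an empty list before exporting the base bdevs for comparison. Condensed:

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_delete raid_bdev1
[[ $($RPC bdev_raid_get_bdevs all | jq length) == 0 ]] ||
    { echo "raid_bdev1 still listed after delete" >&2; exit 1; }
```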
00:18:24.142 11:28:42 -- bdev/nbd_common.sh@12 -- # local i 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.142 11:28:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.143 11:28:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.401 /dev/nbd0 00:18:24.401 11:28:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.401 11:28:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.401 11:28:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:24.401 11:28:42 -- common/autotest_common.sh@867 -- # local i 00:18:24.401 11:28:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:24.401 11:28:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:24.401 11:28:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:24.401 11:28:42 -- common/autotest_common.sh@871 -- # break 00:18:24.401 11:28:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:24.401 11:28:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:24.401 11:28:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.401 1+0 records in 00:18:24.401 1+0 records out 00:18:24.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017564 s, 23.3 MB/s 00:18:24.401 11:28:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.401 11:28:42 -- common/autotest_common.sh@884 -- # size=4096 00:18:24.401 11:28:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.401 11:28:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:24.401 11:28:42 -- common/autotest_common.sh@887 -- # return 0 00:18:24.401 11:28:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.401 11:28:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.401 11:28:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:24.660 /dev/nbd1 00:18:24.660 11:28:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:24.660 11:28:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:24.660 11:28:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:24.660 11:28:42 -- common/autotest_common.sh@867 -- # local i 00:18:24.660 11:28:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:24.660 11:28:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:24.660 11:28:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:24.660 11:28:42 -- common/autotest_common.sh@871 -- # break 00:18:24.660 11:28:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:24.660 11:28:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:24.660 11:28:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.660 1+0 records in 00:18:24.660 1+0 records out 00:18:24.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039533 s, 10.4 MB/s 00:18:24.660 11:28:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.660 11:28:42 -- common/autotest_common.sh@884 -- # size=4096 00:18:24.660 11:28:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.660 11:28:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
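waitfornbd, run for both exports above, deserves a gloss: it polls /proc/partitions for the device name, then issues a single O_DIRECT 4 KiB read and checks the resulting byte count, proving the NBD export services real I/O rather than merely existing. A simplified sketch (scratch-file path is an assumption; the real helper also retries the read):

```bash
waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do          # same 20-try bound as the trace
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct read; a nonzero result file proves the export answers I/O.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    local size; size=$(stat -c %s "$tmp"); rm -f "$tmp"
    [[ $size != 0 ]]
}
```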
00:18:24.660 11:28:42 -- common/autotest_common.sh@887 -- # return 0 00:18:24.660 11:28:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.660 11:28:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.660 11:28:42 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:24.918 11:28:42 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:24.918 11:28:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:24.918 11:28:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.918 11:28:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:24.918 11:28:42 -- bdev/nbd_common.sh@51 -- # local i 00:18:24.918 11:28:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.918 11:28:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@41 -- # break 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.177 11:28:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@41 -- # break 00:18:25.436 11:28:43 -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.436 11:28:43 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:18:25.436 11:28:43 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:18:25.436 11:28:43 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:18:25.436 11:28:43 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:25.696 11:28:43 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:25.696 [2024-11-26 11:28:43.905850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:25.696 [2024-11-26 11:28:43.905986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.696 [2024-11-26 11:28:43.906022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:18:25.696 [2024-11-26 11:28:43.906036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.696 [2024-11-26 11:28:43.908763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.696 [2024-11-26 11:28:43.908819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
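Note the changed comparison offset: the non-superblock test earlier used cmp -i 0, while this run skips the first 1 MiB (-i 1048576). That matches the data_offset reported above (2048 blocks at 512 B per block), presumably the region where each base bdev keeps its own superblock and so legitimately differs from the spare. Equivalently:

```bash
# data_offset from the bdev info above: 2048 blocks * 512 B/block = 1 MiB
offset=$((2048 * 512))
cmp -i "$offset" /dev/nbd0 /dev/nbd1 &&
    echo "payload on rebuilt spare matches BaseBdev1 beyond the superblock region"
```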
00:18:25.696 [2024-11-26 11:28:43.908931] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:25.696 [2024-11-26 11:28:43.909011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.696 BaseBdev1 00:18:25.696 11:28:43 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:18:25.696 11:28:43 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:18:25.696 11:28:43 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:18:25.954 11:28:44 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:26.213 [2024-11-26 11:28:44.329996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:26.213 [2024-11-26 11:28:44.330090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.213 [2024-11-26 11:28:44.330125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:18:26.213 [2024-11-26 11:28:44.330140] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.213 [2024-11-26 11:28:44.330621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.213 [2024-11-26 11:28:44.330657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:26.213 [2024-11-26 11:28:44.330756] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:18:26.213 [2024-11-26 11:28:44.330773] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:18:26.213 [2024-11-26 11:28:44.330787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.213 [2024-11-26 11:28:44.330825] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state configuring 00:18:26.213 [2024-11-26 11:28:44.330895] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.213 BaseBdev2 00:18:26.213 11:28:44 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:26.472 11:28:44 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:26.731 [2024-11-26 11:28:44.738117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.731 [2024-11-26 11:28:44.738224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.731 [2024-11-26 11:28:44.738255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:18:26.731 [2024-11-26 11:28:44.738271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.731 [2024-11-26 11:28:44.738767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.731 [2024-11-26 11:28:44.738806] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.731 [2024-11-26 11:28:44.738911] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:18:26.731 [2024-11-26 11:28:44.738956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:18:26.731 spare 00:18:26.731 11:28:44 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.731 11:28:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.732 [2024-11-26 11:28:44.839066] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:18:26.732 [2024-11-26 11:28:44.839122] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:26.732 [2024-11-26 11:28:44.839287] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1390 00:18:26.732 [2024-11-26 11:28:44.839750] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:18:26.732 [2024-11-26 11:28:44.839792] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:18:26.732 [2024-11-26 11:28:44.839984] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.732 "name": "raid_bdev1", 00:18:26.732 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:26.732 "strip_size_kb": 0, 00:18:26.732 "state": "online", 00:18:26.732 "raid_level": "raid1", 00:18:26.732 "superblock": true, 00:18:26.732 "num_base_bdevs": 2, 00:18:26.732 "num_base_bdevs_discovered": 2, 00:18:26.732 "num_base_bdevs_operational": 2, 00:18:26.732 "base_bdevs_list": [ 00:18:26.732 { 00:18:26.732 "name": "spare", 00:18:26.732 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:26.732 "is_configured": true, 00:18:26.732 "data_offset": 2048, 00:18:26.732 "data_size": 63488 00:18:26.732 }, 00:18:26.732 { 00:18:26.732 "name": "BaseBdev2", 00:18:26.732 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:26.732 "is_configured": true, 00:18:26.732 "data_offset": 2048, 00:18:26.732 "data_size": 63488 00:18:26.732 } 00:18:26.732 ] 00:18:26.732 }' 00:18:26.732 11:28:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.732 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:27.299 11:28:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:27.299 "name": "raid_bdev1", 00:18:27.299 "uuid": "68c946c9-a3dc-4f2e-8cbe-02dd3f804cbe", 00:18:27.299 "strip_size_kb": 0, 00:18:27.299 "state": "online", 00:18:27.299 "raid_level": "raid1", 00:18:27.299 "superblock": true, 00:18:27.299 "num_base_bdevs": 2, 00:18:27.299 "num_base_bdevs_discovered": 2, 00:18:27.299 "num_base_bdevs_operational": 2, 00:18:27.299 "base_bdevs_list": [ 00:18:27.299 { 00:18:27.299 "name": "spare", 00:18:27.299 "uuid": "ee41a5cf-a0f4-5686-8342-a45ff805bb46", 00:18:27.299 "is_configured": true, 00:18:27.299 "data_offset": 2048, 00:18:27.299 "data_size": 63488 00:18:27.299 }, 00:18:27.299 { 00:18:27.299 "name": "BaseBdev2", 00:18:27.299 "uuid": "5a7bf46a-5c17-5e8c-9a33-5c514c2e1629", 00:18:27.299 "is_configured": true, 00:18:27.299 "data_offset": 2048, 00:18:27.299 "data_size": 63488 00:18:27.299 } 00:18:27.299 ] 00:18:27.299 }' 00:18:27.557 11:28:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:27.557 11:28:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:27.557 11:28:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:27.557 11:28:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:27.557 11:28:45 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:27.557 11:28:45 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.815 11:28:45 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.815 11:28:45 -- bdev/bdev_raid.sh@709 -- # killprocess 88560 00:18:27.815 11:28:45 -- common/autotest_common.sh@936 -- # '[' -z 88560 ']' 00:18:27.815 11:28:45 -- common/autotest_common.sh@940 -- # kill -0 88560 00:18:27.815 11:28:45 -- common/autotest_common.sh@941 -- # uname 00:18:27.815 11:28:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.815 11:28:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88560 00:18:27.815 11:28:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:27.815 11:28:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:27.815 killing process with pid 88560 00:18:27.815 11:28:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88560' 00:18:27.815 Received shutdown signal, test time was about 60.000000 seconds 00:18:27.815 00:18:27.815 Latency(us) 00:18:27.815 [2024-11-26T11:28:46.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.815 [2024-11-26T11:28:46.045Z] =================================================================================================================== 00:18:27.815 [2024-11-26T11:28:46.045Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.815 11:28:45 -- common/autotest_common.sh@955 -- # kill 88560 00:18:27.815 [2024-11-26 11:28:45.830273] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.815 [2024-11-26 11:28:45.830420] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.815 11:28:45 -- common/autotest_common.sh@960 -- # wait 88560 00:18:27.815 [2024-11-26 11:28:45.830492] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.815 [2024-11-26 11:28:45.830523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:18:27.815 [2024-11-26 
11:28:45.848693] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.815 11:28:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:27.815 00:18:27.815 real 0m22.325s 00:18:27.815 user 0m30.399s 00:18:27.815 sys 0m4.216s 00:18:27.815 11:28:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:27.815 11:28:46 -- common/autotest_common.sh@10 -- # set +x 00:18:27.815 ************************************ 00:18:27.815 END TEST raid_rebuild_test_sb 00:18:27.815 ************************************ 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:18:28.074 11:28:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:28.074 11:28:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.074 11:28:46 -- common/autotest_common.sh@10 -- # set +x 00:18:28.074 ************************************ 00:18:28.074 START TEST raid_rebuild_test_io 00:18:28.074 ************************************ 00:18:28.074 11:28:46 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=89123 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 89123 /var/tmp/spdk-raid.sock 00:18:28.074 11:28:46 -- common/autotest_common.sh@829 -- # '[' -z 89123 ']' 00:18:28.074 11:28:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:28.074 11:28:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.074 11:28:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
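For reference, the whole raid_rebuild_test_io run is driven through a single bdevperf process that doubles as the RPC server; a condensed sketch of the launch pattern above, reusing the exact flags from the log (paths per this checkout):

    # start bdevperf with background I/O against raid_bdev1 and keep its RPC socket open
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # -o 3M exceeds the 64 KiB zero-copy threshold, hence the "Zero copy mechanism
    # will not be used" notice below; all configuration then goes through rpc.py:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev1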
00:18:28.074 11:28:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:28.074 11:28:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.074 11:28:46 -- common/autotest_common.sh@10 -- # set +x 00:18:28.074 [2024-11-26 11:28:46.145238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:28.074 [2024-11-26 11:28:46.145463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89123 ] 00:18:28.074 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:28.074 Zero copy mechanism will not be used. 00:18:28.074 [2024-11-26 11:28:46.310511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.332 [2024-11-26 11:28:46.345724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.332 [2024-11-26 11:28:46.378205] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.899 11:28:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.899 11:28:47 -- common/autotest_common.sh@862 -- # return 0 00:18:28.899 11:28:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:28.899 11:28:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:28.899 11:28:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:29.158 BaseBdev1 00:18:29.158 11:28:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:29.158 11:28:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:29.158 11:28:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:29.416 BaseBdev2 00:18:29.416 11:28:47 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:29.673 spare_malloc 00:18:29.673 11:28:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:29.931 spare_delay 00:18:29.931 11:28:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:30.190 [2024-11-26 11:28:48.183972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.190 [2024-11-26 11:28:48.184098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.190 [2024-11-26 11:28:48.184134] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:18:30.190 [2024-11-26 11:28:48.184155] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.190 [2024-11-26 11:28:48.186617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.190 [2024-11-26 11:28:48.186683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.190 spare 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:30.190 [2024-11-26 11:28:48.388052] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.190 [2024-11-26 11:28:48.390189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.190 [2024-11-26 11:28:48.390292] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:18:30.190 [2024-11-26 11:28:48.390309] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:30.190 [2024-11-26 11:28:48.390418] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:18:30.190 [2024-11-26 11:28:48.390799] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:18:30.190 [2024-11-26 11:28:48.390825] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:18:30.190 [2024-11-26 11:28:48.391011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.190 11:28:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.449 11:28:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.449 "name": "raid_bdev1", 00:18:30.449 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:30.449 "strip_size_kb": 0, 00:18:30.449 "state": "online", 00:18:30.449 "raid_level": "raid1", 00:18:30.449 "superblock": false, 00:18:30.449 "num_base_bdevs": 2, 00:18:30.449 "num_base_bdevs_discovered": 2, 00:18:30.449 "num_base_bdevs_operational": 2, 00:18:30.449 "base_bdevs_list": [ 00:18:30.449 { 00:18:30.449 "name": "BaseBdev1", 00:18:30.449 "uuid": "3f603035-aa46-4a53-a1c8-91f94191d248", 00:18:30.449 "is_configured": true, 00:18:30.449 "data_offset": 0, 00:18:30.449 "data_size": 65536 00:18:30.449 }, 00:18:30.449 { 00:18:30.449 "name": "BaseBdev2", 00:18:30.449 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:30.449 "is_configured": true, 00:18:30.449 "data_offset": 0, 00:18:30.449 "data_size": 65536 00:18:30.449 } 00:18:30.449 ] 00:18:30.449 }' 00:18:30.449 11:28:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.449 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:30.707 11:28:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:30.707 11:28:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:30.965 [2024-11-26 11:28:49.132529] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.965 11:28:49 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:18:30.965 11:28:49 -- 
bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.965 11:28:49 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.222 11:28:49 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:18:31.222 11:28:49 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:18:31.222 11:28:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:31.222 11:28:49 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:31.222 [2024-11-26 11:28:49.461848] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:31.222 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:31.222 Zero copy mechanism will not be used. 00:18:31.222 Running I/O for 60 seconds... 00:18:31.480 [2024-11-26 11:28:49.546604] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.480 [2024-11-26 11:28:49.560286] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.480 11:28:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.739 11:28:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.739 "name": "raid_bdev1", 00:18:31.739 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:31.739 "strip_size_kb": 0, 00:18:31.739 "state": "online", 00:18:31.739 "raid_level": "raid1", 00:18:31.739 "superblock": false, 00:18:31.739 "num_base_bdevs": 2, 00:18:31.739 "num_base_bdevs_discovered": 1, 00:18:31.739 "num_base_bdevs_operational": 1, 00:18:31.739 "base_bdevs_list": [ 00:18:31.739 { 00:18:31.739 "name": null, 00:18:31.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.739 "is_configured": false, 00:18:31.739 "data_offset": 0, 00:18:31.739 "data_size": 65536 00:18:31.739 }, 00:18:31.739 { 00:18:31.739 "name": "BaseBdev2", 00:18:31.739 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:31.739 "is_configured": true, 00:18:31.739 "data_offset": 0, 00:18:31.739 "data_size": 65536 00:18:31.739 } 00:18:31.739 ] 00:18:31.739 }' 00:18:31.739 11:28:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.739 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.997 11:28:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.256 [2024-11-26 11:28:50.287503] 
bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:32.256 [2024-11-26 11:28:50.287574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.256 [2024-11-26 11:28:50.314605] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:32.256 11:28:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:32.256 [2024-11-26 11:28:50.316818] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.256 [2024-11-26 11:28:50.446525] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.256 [2024-11-26 11:28:50.446885] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.514 [2024-11-26 11:28:50.663239] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:32.514 [2024-11-26 11:28:50.663451] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:32.773 [2024-11-26 11:28:51.010246] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:33.031 [2024-11-26 11:28:51.135631] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:33.031 [2024-11-26 11:28:51.135857] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.289 11:28:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.289 [2024-11-26 11:28:51.454471] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:33.289 [2024-11-26 11:28:51.454947] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:33.548 11:28:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:33.548 "name": "raid_bdev1", 00:18:33.548 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:33.548 "strip_size_kb": 0, 00:18:33.548 "state": "online", 00:18:33.548 "raid_level": "raid1", 00:18:33.548 "superblock": false, 00:18:33.548 "num_base_bdevs": 2, 00:18:33.548 "num_base_bdevs_discovered": 2, 00:18:33.548 "num_base_bdevs_operational": 2, 00:18:33.548 "process": { 00:18:33.548 "type": "rebuild", 00:18:33.548 "target": "spare", 00:18:33.548 "progress": { 00:18:33.548 "blocks": 14336, 00:18:33.548 "percent": 21 00:18:33.548 } 00:18:33.548 }, 00:18:33.548 "base_bdevs_list": [ 00:18:33.548 { 00:18:33.548 "name": "spare", 00:18:33.548 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:33.548 "is_configured": true, 00:18:33.548 "data_offset": 0, 00:18:33.548 "data_size": 65536 00:18:33.548 }, 00:18:33.548 { 00:18:33.548 "name": 
"BaseBdev2", 00:18:33.548 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:33.548 "is_configured": true, 00:18:33.548 "data_offset": 0, 00:18:33.548 "data_size": 65536 00:18:33.548 } 00:18:33.548 ] 00:18:33.548 }' 00:18:33.548 11:28:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:33.548 11:28:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.548 11:28:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:33.548 [2024-11-26 11:28:51.585673] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:33.548 11:28:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.548 11:28:51 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:33.548 [2024-11-26 11:28:51.768767] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.807 [2024-11-26 11:28:51.810910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:33.807 [2024-11-26 11:28:51.911698] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.807 [2024-11-26 11:28:51.920309] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.807 [2024-11-26 11:28:51.951212] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.807 11:28:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.065 11:28:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.065 "name": "raid_bdev1", 00:18:34.065 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:34.065 "strip_size_kb": 0, 00:18:34.065 "state": "online", 00:18:34.065 "raid_level": "raid1", 00:18:34.066 "superblock": false, 00:18:34.066 "num_base_bdevs": 2, 00:18:34.066 "num_base_bdevs_discovered": 1, 00:18:34.066 "num_base_bdevs_operational": 1, 00:18:34.066 "base_bdevs_list": [ 00:18:34.066 { 00:18:34.066 "name": null, 00:18:34.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.066 "is_configured": false, 00:18:34.066 "data_offset": 0, 00:18:34.066 "data_size": 65536 00:18:34.066 }, 00:18:34.066 { 00:18:34.066 "name": "BaseBdev2", 00:18:34.066 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:34.066 "is_configured": true, 00:18:34.066 "data_offset": 0, 00:18:34.066 "data_size": 65536 00:18:34.066 } 00:18:34.066 ] 00:18:34.066 }' 00:18:34.066 11:28:52 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.066 11:28:52 -- common/autotest_common.sh@10 -- # set +x 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.633 11:28:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.892 11:28:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:34.892 "name": "raid_bdev1", 00:18:34.892 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:34.892 "strip_size_kb": 0, 00:18:34.892 "state": "online", 00:18:34.892 "raid_level": "raid1", 00:18:34.892 "superblock": false, 00:18:34.892 "num_base_bdevs": 2, 00:18:34.892 "num_base_bdevs_discovered": 1, 00:18:34.892 "num_base_bdevs_operational": 1, 00:18:34.892 "base_bdevs_list": [ 00:18:34.892 { 00:18:34.892 "name": null, 00:18:34.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.892 "is_configured": false, 00:18:34.892 "data_offset": 0, 00:18:34.892 "data_size": 65536 00:18:34.892 }, 00:18:34.892 { 00:18:34.892 "name": "BaseBdev2", 00:18:34.893 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:34.893 "is_configured": true, 00:18:34.893 "data_offset": 0, 00:18:34.893 "data_size": 65536 00:18:34.893 } 00:18:34.893 ] 00:18:34.893 }' 00:18:34.893 11:28:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:34.893 11:28:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:34.893 11:28:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:34.893 11:28:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:34.893 11:28:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.151 [2024-11-26 11:28:53.159400] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:35.151 [2024-11-26 11:28:53.159479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.151 [2024-11-26 11:28:53.190208] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:35.151 [2024-11-26 11:28:53.192501] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.151 11:28:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:35.151 [2024-11-26 11:28:53.309421] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:35.151 [2024-11-26 11:28:53.309872] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:35.410 [2024-11-26 11:28:53.517841] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:35.410 [2024-11-26 11:28:53.518091] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:35.668 [2024-11-26 11:28:53.844990] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:35.927 
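The sequence just above is the degrade-and-reattach cycle at the heart of this test: the spare is removed while a rebuild is in flight, the raid drops to one operational base bdev, and re-adding the spare kicks off a fresh rebuild. Reduced to its two RPCs (a sketch; socket as above):

    # remove the rebuild target mid-flight, then re-attach it; a new rebuild starts
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_remove_base_bdev spare
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_add_base_bdev raid_bdev1 spare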
[2024-11-26 11:28:54.067496] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:35.927 [2024-11-26 11:28:54.067701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:36.185 11:28:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.185 11:28:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:36.185 11:28:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:36.185 11:28:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:36.185 11:28:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:36.186 11:28:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.186 11:28:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:36.444 "name": "raid_bdev1", 00:18:36.444 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:36.444 "strip_size_kb": 0, 00:18:36.444 "state": "online", 00:18:36.444 "raid_level": "raid1", 00:18:36.444 "superblock": false, 00:18:36.444 "num_base_bdevs": 2, 00:18:36.444 "num_base_bdevs_discovered": 2, 00:18:36.444 "num_base_bdevs_operational": 2, 00:18:36.444 "process": { 00:18:36.444 "type": "rebuild", 00:18:36.444 "target": "spare", 00:18:36.444 "progress": { 00:18:36.444 "blocks": 14336, 00:18:36.444 "percent": 21 00:18:36.444 } 00:18:36.444 }, 00:18:36.444 "base_bdevs_list": [ 00:18:36.444 { 00:18:36.444 "name": "spare", 00:18:36.444 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:36.444 "is_configured": true, 00:18:36.444 "data_offset": 0, 00:18:36.444 "data_size": 65536 00:18:36.444 }, 00:18:36.444 { 00:18:36.444 "name": "BaseBdev2", 00:18:36.444 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:36.444 "is_configured": true, 00:18:36.444 "data_offset": 0, 00:18:36.444 "data_size": 65536 00:18:36.444 } 00:18:36.444 ] 00:18:36.444 }' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@657 -- # local timeout=355 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.444 11:28:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.704 
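The verify_raid_bdev_process helper exercised here boils down to capturing one JSON object and asserting two fields on it; an equivalent standalone check (an assumed shorthand for the function as logged, using the same jq filters):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # during a rebuild both assertions hold; once it finishes, both report "none"
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]]
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]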
[2024-11-26 11:28:54.706553] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:36.704 11:28:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:36.704 "name": "raid_bdev1", 00:18:36.704 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:36.704 "strip_size_kb": 0, 00:18:36.704 "state": "online", 00:18:36.704 "raid_level": "raid1", 00:18:36.704 "superblock": false, 00:18:36.704 "num_base_bdevs": 2, 00:18:36.704 "num_base_bdevs_discovered": 2, 00:18:36.704 "num_base_bdevs_operational": 2, 00:18:36.704 "process": { 00:18:36.704 "type": "rebuild", 00:18:36.704 "target": "spare", 00:18:36.704 "progress": { 00:18:36.704 "blocks": 18432, 00:18:36.704 "percent": 28 00:18:36.704 } 00:18:36.704 }, 00:18:36.704 "base_bdevs_list": [ 00:18:36.704 { 00:18:36.704 "name": "spare", 00:18:36.704 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:36.704 "is_configured": true, 00:18:36.704 "data_offset": 0, 00:18:36.704 "data_size": 65536 00:18:36.704 }, 00:18:36.704 { 00:18:36.704 "name": "BaseBdev2", 00:18:36.704 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:36.704 "is_configured": true, 00:18:36.704 "data_offset": 0, 00:18:36.704 "data_size": 65536 00:18:36.704 } 00:18:36.704 ] 00:18:36.704 }' 00:18:36.704 11:28:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:36.704 11:28:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.704 11:28:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:36.704 11:28:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.704 11:28:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:36.704 [2024-11-26 11:28:54.840121] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:36.970 [2024-11-26 11:28:55.058478] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:37.543 [2024-11-26 11:28:55.489593] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:37.543 [2024-11-26 11:28:55.489838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.543 11:28:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.802 [2024-11-26 11:28:55.934583] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:37.802 11:28:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:37.802 "name": "raid_bdev1", 00:18:37.802 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:37.803 "strip_size_kb": 0, 00:18:37.803 "state": "online", 00:18:37.803 "raid_level": "raid1", 00:18:37.803 "superblock": false, 00:18:37.803 
"num_base_bdevs": 2, 00:18:37.803 "num_base_bdevs_discovered": 2, 00:18:37.803 "num_base_bdevs_operational": 2, 00:18:37.803 "process": { 00:18:37.803 "type": "rebuild", 00:18:37.803 "target": "spare", 00:18:37.803 "progress": { 00:18:37.803 "blocks": 40960, 00:18:37.803 "percent": 62 00:18:37.803 } 00:18:37.803 }, 00:18:37.803 "base_bdevs_list": [ 00:18:37.803 { 00:18:37.803 "name": "spare", 00:18:37.803 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:37.803 "is_configured": true, 00:18:37.803 "data_offset": 0, 00:18:37.803 "data_size": 65536 00:18:37.803 }, 00:18:37.803 { 00:18:37.803 "name": "BaseBdev2", 00:18:37.803 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:37.803 "is_configured": true, 00:18:37.803 "data_offset": 0, 00:18:37.803 "data_size": 65536 00:18:37.803 } 00:18:37.803 ] 00:18:37.803 }' 00:18:37.803 11:28:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:37.803 11:28:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.803 11:28:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:37.803 11:28:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.803 11:28:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:38.370 [2024-11-26 11:28:56.607172] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:38.370 [2024-11-26 11:28:56.607414] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.938 11:28:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.197 [2024-11-26 11:28:57.283489] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:39.197 11:28:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:39.197 "name": "raid_bdev1", 00:18:39.197 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:39.197 "strip_size_kb": 0, 00:18:39.197 "state": "online", 00:18:39.197 "raid_level": "raid1", 00:18:39.197 "superblock": false, 00:18:39.197 "num_base_bdevs": 2, 00:18:39.197 "num_base_bdevs_discovered": 2, 00:18:39.197 "num_base_bdevs_operational": 2, 00:18:39.197 "process": { 00:18:39.197 "type": "rebuild", 00:18:39.197 "target": "spare", 00:18:39.197 "progress": { 00:18:39.197 "blocks": 63488, 00:18:39.197 "percent": 96 00:18:39.197 } 00:18:39.197 }, 00:18:39.197 "base_bdevs_list": [ 00:18:39.197 { 00:18:39.197 "name": "spare", 00:18:39.197 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:39.197 "is_configured": true, 00:18:39.197 "data_offset": 0, 00:18:39.197 "data_size": 65536 00:18:39.197 }, 00:18:39.197 { 00:18:39.197 "name": "BaseBdev2", 00:18:39.197 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:39.197 "is_configured": true, 00:18:39.197 "data_offset": 0, 00:18:39.197 "data_size": 65536 00:18:39.197 } 00:18:39.197 ] 00:18:39.197 }' 
00:18:39.197 11:28:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:39.197 11:28:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.197 11:28:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:39.197 11:28:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.197 11:28:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:39.197 [2024-11-26 11:28:57.390128] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:39.197 [2024-11-26 11:28:57.391488] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.133 11:28:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:40.392 "name": "raid_bdev1", 00:18:40.392 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:40.392 "strip_size_kb": 0, 00:18:40.392 "state": "online", 00:18:40.392 "raid_level": "raid1", 00:18:40.392 "superblock": false, 00:18:40.392 "num_base_bdevs": 2, 00:18:40.392 "num_base_bdevs_discovered": 2, 00:18:40.392 "num_base_bdevs_operational": 2, 00:18:40.392 "base_bdevs_list": [ 00:18:40.392 { 00:18:40.392 "name": "spare", 00:18:40.392 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:40.392 "is_configured": true, 00:18:40.392 "data_offset": 0, 00:18:40.392 "data_size": 65536 00:18:40.392 }, 00:18:40.392 { 00:18:40.392 "name": "BaseBdev2", 00:18:40.392 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:40.392 "is_configured": true, 00:18:40.392 "data_offset": 0, 00:18:40.392 "data_size": 65536 00:18:40.392 } 00:18:40.392 ] 00:18:40.392 }' 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@660 -- # break 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.392 11:28:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:40.651 "name": "raid_bdev1", 00:18:40.651 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:40.651 "strip_size_kb": 0, 00:18:40.651 
"state": "online", 00:18:40.651 "raid_level": "raid1", 00:18:40.651 "superblock": false, 00:18:40.651 "num_base_bdevs": 2, 00:18:40.651 "num_base_bdevs_discovered": 2, 00:18:40.651 "num_base_bdevs_operational": 2, 00:18:40.651 "base_bdevs_list": [ 00:18:40.651 { 00:18:40.651 "name": "spare", 00:18:40.651 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:40.651 "is_configured": true, 00:18:40.651 "data_offset": 0, 00:18:40.651 "data_size": 65536 00:18:40.651 }, 00:18:40.651 { 00:18:40.651 "name": "BaseBdev2", 00:18:40.651 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:40.651 "is_configured": true, 00:18:40.651 "data_offset": 0, 00:18:40.651 "data_size": 65536 00:18:40.651 } 00:18:40.651 ] 00:18:40.651 }' 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.651 11:28:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.910 11:28:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.910 "name": "raid_bdev1", 00:18:40.910 "uuid": "7041a141-7f25-44e2-b009-28edef09ec6c", 00:18:40.910 "strip_size_kb": 0, 00:18:40.910 "state": "online", 00:18:40.910 "raid_level": "raid1", 00:18:40.910 "superblock": false, 00:18:40.910 "num_base_bdevs": 2, 00:18:40.910 "num_base_bdevs_discovered": 2, 00:18:40.910 "num_base_bdevs_operational": 2, 00:18:40.910 "base_bdevs_list": [ 00:18:40.910 { 00:18:40.910 "name": "spare", 00:18:40.910 "uuid": "06ce7f9e-84a2-5c7d-9a04-b5ff8644927b", 00:18:40.910 "is_configured": true, 00:18:40.910 "data_offset": 0, 00:18:40.910 "data_size": 65536 00:18:40.910 }, 00:18:40.910 { 00:18:40.910 "name": "BaseBdev2", 00:18:40.910 "uuid": "705014cc-50d9-4688-9f80-b395eba0fcb4", 00:18:40.910 "is_configured": true, 00:18:40.910 "data_offset": 0, 00:18:40.910 "data_size": 65536 00:18:40.910 } 00:18:40.910 ] 00:18:40.910 }' 00:18:40.910 11:28:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.910 11:28:59 -- common/autotest_common.sh@10 -- # set +x 00:18:41.168 11:28:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:41.426 [2024-11-26 11:28:59.583165] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.426 [2024-11-26 11:28:59.583226] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:18:41.685 00:18:41.685 Latency(us) 00:18:41.685 [2024-11-26T11:28:59.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.685 [2024-11-26T11:28:59.915Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:41.685 raid_bdev1 : 10.20 97.03 291.09 0.00 0.00 13770.81 249.48 115343.36 00:18:41.685 [2024-11-26T11:28:59.915Z] =================================================================================================================== 00:18:41.685 [2024-11-26T11:28:59.915Z] Total : 97.03 291.09 0.00 0.00 13770.81 249.48 115343.36 00:18:41.685 [2024-11-26 11:28:59.671304] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.685 [2024-11-26 11:28:59.671372] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.685 [2024-11-26 11:28:59.671482] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.685 [2024-11-26 11:28:59.671508] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:18:41.685 0 00:18:41.685 11:28:59 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.685 11:28:59 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:41.943 11:28:59 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:41.943 11:28:59 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:18:41.943 11:28:59 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@12 -- # local i 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.943 11:28:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:18:41.943 /dev/nbd0 00:18:41.943 11:29:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:41.943 11:29:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:41.943 11:29:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:41.943 11:29:00 -- common/autotest_common.sh@867 -- # local i 00:18:41.943 11:29:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:41.943 11:29:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:41.943 11:29:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:41.944 11:29:00 -- common/autotest_common.sh@871 -- # break 00:18:41.944 11:29:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:41.944 11:29:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:41.944 11:29:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.944 1+0 records in 00:18:41.944 1+0 records out 00:18:41.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260813 s, 15.7 MB/s 00:18:41.944 11:29:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.944 11:29:00 -- common/autotest_common.sh@884 
-- # size=4096 00:18:41.944 11:29:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.944 11:29:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:41.944 11:29:00 -- common/autotest_common.sh@887 -- # return 0 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.944 11:29:00 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:18:41.944 11:29:00 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:18:41.944 11:29:00 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@12 -- # local i 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.944 11:29:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:42.202 /dev/nbd1 00:18:42.202 11:29:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:42.202 11:29:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:42.202 11:29:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:42.202 11:29:00 -- common/autotest_common.sh@867 -- # local i 00:18:42.202 11:29:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:42.202 11:29:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:42.202 11:29:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:42.202 11:29:00 -- common/autotest_common.sh@871 -- # break 00:18:42.202 11:29:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:42.202 11:29:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:42.202 11:29:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.202 1+0 records in 00:18:42.202 1+0 records out 00:18:42.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250425 s, 16.4 MB/s 00:18:42.202 11:29:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.202 11:29:00 -- common/autotest_common.sh@884 -- # size=4096 00:18:42.202 11:29:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.202 11:29:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:42.202 11:29:00 -- common/autotest_common.sh@887 -- # return 0 00:18:42.202 11:29:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.202 11:29:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.202 11:29:00 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:42.461 11:29:00 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:18:42.461 11:29:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.461 11:29:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:42.461 11:29:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.461 11:29:00 -- bdev/nbd_common.sh@51 -- # local i 00:18:42.461 11:29:00 -- bdev/nbd_common.sh@53 -- # for 
i in "${nbd_list[@]}" 00:18:42.461 11:29:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@41 -- # break 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.720 11:29:00 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@51 -- # local i 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.720 11:29:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:42.978 11:29:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@41 -- # break 00:18:42.979 11:29:01 -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.979 11:29:01 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:18:42.979 11:29:01 -- bdev/bdev_raid.sh@709 -- # killprocess 89123 00:18:42.979 11:29:01 -- common/autotest_common.sh@936 -- # '[' -z 89123 ']' 00:18:42.979 11:29:01 -- common/autotest_common.sh@940 -- # kill -0 89123 00:18:42.979 11:29:01 -- common/autotest_common.sh@941 -- # uname 00:18:42.979 11:29:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.979 11:29:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89123 00:18:42.979 11:29:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:42.979 11:29:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:42.979 11:29:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89123' 00:18:42.979 killing process with pid 89123 00:18:42.979 Received shutdown signal, test time was about 11.601800 seconds 00:18:42.979 00:18:42.979 Latency(us) 00:18:42.979 [2024-11-26T11:29:01.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.979 [2024-11-26T11:29:01.209Z] =================================================================================================================== 00:18:42.979 [2024-11-26T11:29:01.209Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.979 11:29:01 -- common/autotest_common.sh@955 -- # kill 89123 00:18:42.979 [2024-11-26 11:29:01.065591] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:42.979 11:29:01 -- common/autotest_common.sh@960 -- # wait 89123 00:18:42.979 [2024-11-26 11:29:01.081044] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:18:43.238 11:29:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:43.238 00:18:43.238 real 0m15.176s 00:18:43.238 user 0m22.323s 00:18:43.238 sys 0m1.829s 00:18:43.238 11:29:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:43.238 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:18:43.238 ************************************ 00:18:43.238 END TEST raid_rebuild_test_io 00:18:43.238 ************************************ 00:18:43.238 11:29:01 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:18:43.238 11:29:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:43.238 11:29:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.238 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:18:43.238 ************************************ 00:18:43.238 START TEST raid_rebuild_test_sb_io 00:18:43.238 ************************************ 00:18:43.238 11:29:01 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:18:43.238 11:29:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=89544 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 89544 /var/tmp/spdk-raid.sock 00:18:43.239 11:29:01 -- common/autotest_common.sh@829 -- # '[' -z 89544 ']' 00:18:43.239 11:29:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:43.239 11:29:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:43.239 11:29:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
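raid_rebuild_test_sb_io differs from the previous run in two arguments: superblock=true appends -s to create_arg, and each base bdev is built as a malloc -> passthru stack (with a delay bdev under the spare) so it can be detached and reattached. The creation call that results, as it appears further down in this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # with -s a superblock is written to each base bdev, which is why the earlier
    # sb test reported data_offset 2048 rather than the 0 seen in the non-sb run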
00:18:43.239 11:29:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:43.239 11:29:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.239 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:18:43.239 [2024-11-26 11:29:01.370593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:43.239 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:43.239 Zero copy mechanism will not be used. 00:18:43.239 [2024-11-26 11:29:01.370768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89544 ] 00:18:43.498 [2024-11-26 11:29:01.525592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.498 [2024-11-26 11:29:01.561467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.498 [2024-11-26 11:29:01.594574] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.067 11:29:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.067 11:29:02 -- common/autotest_common.sh@862 -- # return 0 00:18:44.067 11:29:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:44.067 11:29:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:44.067 11:29:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:44.325 BaseBdev1_malloc 00:18:44.325 11:29:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:44.584 [2024-11-26 11:29:02.709599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:44.584 [2024-11-26 11:29:02.709715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.584 [2024-11-26 11:29:02.709748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:44.584 [2024-11-26 11:29:02.709771] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.584 [2024-11-26 11:29:02.712539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.584 [2024-11-26 11:29:02.712598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.584 BaseBdev1 00:18:44.584 11:29:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:44.584 11:29:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:44.584 11:29:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:44.843 BaseBdev2_malloc 00:18:44.843 11:29:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:45.102 [2024-11-26 11:29:03.132227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:45.102 [2024-11-26 11:29:03.132317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.102 [2024-11-26 11:29:03.132385] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:45.102 [2024-11-26 11:29:03.132400] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.102 [2024-11-26 11:29:03.134763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.102 [2024-11-26 11:29:03.134820] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:45.102 BaseBdev2 00:18:45.102 11:29:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:45.362 spare_malloc 00:18:45.362 11:29:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:45.362 spare_delay 00:18:45.621 11:29:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:45.621 [2024-11-26 11:29:03.796377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.621 [2024-11-26 11:29:03.796465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.621 [2024-11-26 11:29:03.796494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:18:45.621 [2024-11-26 11:29:03.796510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.621 [2024-11-26 11:29:03.799067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.621 [2024-11-26 11:29:03.799109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.621 spare 00:18:45.621 11:29:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:45.880 [2024-11-26 11:29:04.004539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.880 [2024-11-26 11:29:04.006840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.880 [2024-11-26 11:29:04.007147] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:18:45.880 [2024-11-26 11:29:04.007180] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.880 [2024-11-26 11:29:04.007326] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:18:45.880 [2024-11-26 11:29:04.007761] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:18:45.880 [2024-11-26 11:29:04.007788] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:18:45.880 [2024-11-26 11:29:04.008003] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.880 
11:29:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.880 11:29:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.881 11:29:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.881 11:29:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.140 11:29:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.140 "name": "raid_bdev1", 00:18:46.140 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:46.140 "strip_size_kb": 0, 00:18:46.140 "state": "online", 00:18:46.140 "raid_level": "raid1", 00:18:46.140 "superblock": true, 00:18:46.140 "num_base_bdevs": 2, 00:18:46.140 "num_base_bdevs_discovered": 2, 00:18:46.140 "num_base_bdevs_operational": 2, 00:18:46.140 "base_bdevs_list": [ 00:18:46.140 { 00:18:46.140 "name": "BaseBdev1", 00:18:46.140 "uuid": "8e23f83a-b57b-5c80-9f19-5115e1fce5df", 00:18:46.140 "is_configured": true, 00:18:46.140 "data_offset": 2048, 00:18:46.140 "data_size": 63488 00:18:46.140 }, 00:18:46.140 { 00:18:46.140 "name": "BaseBdev2", 00:18:46.140 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:46.140 "is_configured": true, 00:18:46.140 "data_offset": 2048, 00:18:46.140 "data_size": 63488 00:18:46.140 } 00:18:46.140 ] 00:18:46.140 }' 00:18:46.140 11:29:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.140 11:29:04 -- common/autotest_common.sh@10 -- # set +x 00:18:46.399 11:29:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:46.399 11:29:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:46.658 [2024-11-26 11:29:04.748822] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.658 11:29:04 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:18:46.658 11:29:04 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.658 11:29:04 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:46.917 11:29:04 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:18:46.917 11:29:04 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:18:46.917 11:29:04 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:46.917 11:29:04 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:46.917 [2024-11-26 11:29:05.098214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:46.917 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:46.917 Zero copy mechanism will not be used. 00:18:46.917 Running I/O for 60 seconds... 
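The two superblock checks above reduce to RPC queries filtered with jq; replayed by hand against the same socket they would read:

    # size in blocks of the assembled array (63488 here)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'
    # data offset of the first base bdev; 2048 blocks because -s reserved room for the superblock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'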
00:18:47.175 [2024-11-26 11:29:05.170475] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.175 [2024-11-26 11:29:05.177955] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.175 11:29:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.434 11:29:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.434 "name": "raid_bdev1", 00:18:47.434 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:47.434 "strip_size_kb": 0, 00:18:47.434 "state": "online", 00:18:47.434 "raid_level": "raid1", 00:18:47.434 "superblock": true, 00:18:47.434 "num_base_bdevs": 2, 00:18:47.434 "num_base_bdevs_discovered": 1, 00:18:47.434 "num_base_bdevs_operational": 1, 00:18:47.434 "base_bdevs_list": [ 00:18:47.434 { 00:18:47.434 "name": null, 00:18:47.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.434 "is_configured": false, 00:18:47.434 "data_offset": 2048, 00:18:47.434 "data_size": 63488 00:18:47.434 }, 00:18:47.434 { 00:18:47.434 "name": "BaseBdev2", 00:18:47.434 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:47.434 "is_configured": true, 00:18:47.434 "data_offset": 2048, 00:18:47.434 "data_size": 63488 00:18:47.434 } 00:18:47.434 ] 00:18:47.434 }' 00:18:47.434 11:29:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.434 11:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.692 11:29:05 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.950 [2024-11-26 11:29:06.028592] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:47.950 [2024-11-26 11:29:06.028658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.950 [2024-11-26 11:29:06.055838] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:47.950 [2024-11-26 11:29:06.057980] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.950 11:29:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:47.950 [2024-11-26 11:29:06.175929] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:47.950 [2024-11-26 11:29:06.176225] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:48.208 [2024-11-26 11:29:06.393750] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:18:48.208 [2024-11-26 11:29:06.394005] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:48.774 [2024-11-26 11:29:06.729299] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:48.774 [2024-11-26 11:29:06.955530] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.032 11:29:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.290 [2024-11-26 11:29:07.310745] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:49.290 [2024-11-26 11:29:07.311204] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:49.290 11:29:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:49.290 "name": "raid_bdev1", 00:18:49.290 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:49.290 "strip_size_kb": 0, 00:18:49.290 "state": "online", 00:18:49.290 "raid_level": "raid1", 00:18:49.290 "superblock": true, 00:18:49.290 "num_base_bdevs": 2, 00:18:49.290 "num_base_bdevs_discovered": 2, 00:18:49.290 "num_base_bdevs_operational": 2, 00:18:49.290 "process": { 00:18:49.290 "type": "rebuild", 00:18:49.290 "target": "spare", 00:18:49.290 "progress": { 00:18:49.290 "blocks": 12288, 00:18:49.290 "percent": 19 00:18:49.290 } 00:18:49.290 }, 00:18:49.290 "base_bdevs_list": [ 00:18:49.290 { 00:18:49.290 "name": "spare", 00:18:49.290 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:49.290 "is_configured": true, 00:18:49.290 "data_offset": 2048, 00:18:49.290 "data_size": 63488 00:18:49.290 }, 00:18:49.290 { 00:18:49.290 "name": "BaseBdev2", 00:18:49.290 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:49.290 "is_configured": true, 00:18:49.290 "data_offset": 2048, 00:18:49.290 "data_size": 63488 00:18:49.290 } 00:18:49.290 ] 00:18:49.290 }' 00:18:49.290 11:29:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:49.290 11:29:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.290 11:29:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:49.290 11:29:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.290 11:29:07 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:49.290 [2024-11-26 11:29:07.512580] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:49.290 [2024-11-26 11:29:07.512817] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:49.548 [2024-11-26 11:29:07.572440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.548 
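verify_raid_bdev_process, used for every progress check in this test, is two jq probes over the same bdev_raid_get_bdevs output, defaulting to "none" when no background process is running; condensed:

    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type   // "none"' <<< "$raid_bdev_info") == rebuild ]]
    [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == spare ]]

With the rebuild confirmed, the trace above then removes the rebuild target itself (spare) mid-rebuild, which is why the next records report the rebuild finishing with 'No such device'.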
[2024-11-26 11:29:07.687327] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.548 [2024-11-26 11:29:07.696865] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.548 [2024-11-26 11:29:07.721550] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:18:49.548 11:29:07 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.548 11:29:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.549 11:29:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.807 11:29:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.807 "name": "raid_bdev1", 00:18:49.807 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:49.807 "strip_size_kb": 0, 00:18:49.807 "state": "online", 00:18:49.807 "raid_level": "raid1", 00:18:49.807 "superblock": true, 00:18:49.807 "num_base_bdevs": 2, 00:18:49.807 "num_base_bdevs_discovered": 1, 00:18:49.807 "num_base_bdevs_operational": 1, 00:18:49.807 "base_bdevs_list": [ 00:18:49.807 { 00:18:49.807 "name": null, 00:18:49.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.807 "is_configured": false, 00:18:49.807 "data_offset": 2048, 00:18:49.807 "data_size": 63488 00:18:49.807 }, 00:18:49.807 { 00:18:49.807 "name": "BaseBdev2", 00:18:49.807 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:49.807 "is_configured": true, 00:18:49.807 "data_offset": 2048, 00:18:49.807 "data_size": 63488 00:18:49.807 } 00:18:49.807 ] 00:18:49.807 }' 00:18:49.807 11:29:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.807 11:29:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:50.430 "name": "raid_bdev1", 00:18:50.430 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:50.430 "strip_size_kb": 0, 00:18:50.430 "state": "online", 00:18:50.430 "raid_level": "raid1", 00:18:50.430 "superblock": true, 00:18:50.430 "num_base_bdevs": 2, 00:18:50.430 "num_base_bdevs_discovered": 1, 00:18:50.430 
"num_base_bdevs_operational": 1, 00:18:50.430 "base_bdevs_list": [ 00:18:50.430 { 00:18:50.430 "name": null, 00:18:50.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.430 "is_configured": false, 00:18:50.430 "data_offset": 2048, 00:18:50.430 "data_size": 63488 00:18:50.430 }, 00:18:50.430 { 00:18:50.430 "name": "BaseBdev2", 00:18:50.430 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:50.430 "is_configured": true, 00:18:50.430 "data_offset": 2048, 00:18:50.430 "data_size": 63488 00:18:50.430 } 00:18:50.430 ] 00:18:50.430 }' 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:50.430 11:29:08 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.689 [2024-11-26 11:29:08.833938] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:50.689 [2024-11-26 11:29:08.833981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.689 [2024-11-26 11:29:08.862137] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:50.689 [2024-11-26 11:29:08.864586] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.689 11:29:08 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:50.947 [2024-11-26 11:29:08.996238] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:50.947 [2024-11-26 11:29:08.996463] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:51.204 [2024-11-26 11:29:09.231384] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:51.204 [2024-11-26 11:29:09.231780] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.771 11:29:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.771 [2024-11-26 11:29:09.921620] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:51.771 [2024-11-26 11:29:09.922142] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:52.031 11:29:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:52.031 "name": "raid_bdev1", 00:18:52.031 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:52.031 "strip_size_kb": 0, 00:18:52.031 "state": "online", 00:18:52.031 "raid_level": "raid1", 00:18:52.031 "superblock": true, 00:18:52.031 "num_base_bdevs": 2, 
00:18:52.031 "num_base_bdevs_discovered": 2, 00:18:52.031 "num_base_bdevs_operational": 2, 00:18:52.031 "process": { 00:18:52.031 "type": "rebuild", 00:18:52.031 "target": "spare", 00:18:52.031 "progress": { 00:18:52.031 "blocks": 14336, 00:18:52.031 "percent": 22 00:18:52.031 } 00:18:52.031 }, 00:18:52.031 "base_bdevs_list": [ 00:18:52.031 { 00:18:52.031 "name": "spare", 00:18:52.031 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:52.031 "is_configured": true, 00:18:52.031 "data_offset": 2048, 00:18:52.031 "data_size": 63488 00:18:52.031 }, 00:18:52.031 { 00:18:52.031 "name": "BaseBdev2", 00:18:52.031 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:52.031 "is_configured": true, 00:18:52.031 "data_offset": 2048, 00:18:52.031 "data_size": 63488 00:18:52.031 } 00:18:52.031 ] 00:18:52.031 }' 00:18:52.031 11:29:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:52.031 11:29:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.031 11:29:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:52.031 [2024-11-26 11:29:10.147467] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:52.031 [2024-11-26 11:29:10.147974] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:52.031 11:29:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.031 11:29:10 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:18:52.032 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@657 -- # local timeout=371 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.032 11:29:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.289 11:29:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:52.289 "name": "raid_bdev1", 00:18:52.289 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:52.289 "strip_size_kb": 0, 00:18:52.289 "state": "online", 00:18:52.289 "raid_level": "raid1", 00:18:52.289 "superblock": true, 00:18:52.289 "num_base_bdevs": 2, 00:18:52.289 "num_base_bdevs_discovered": 2, 00:18:52.289 "num_base_bdevs_operational": 2, 00:18:52.289 "process": { 00:18:52.289 "type": "rebuild", 00:18:52.289 "target": "spare", 00:18:52.289 "progress": { 00:18:52.289 "blocks": 16384, 00:18:52.289 "percent": 25 00:18:52.289 } 00:18:52.289 }, 00:18:52.289 "base_bdevs_list": [ 00:18:52.289 { 00:18:52.289 "name": "spare", 00:18:52.289 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:52.289 "is_configured": true, 
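Note the genuine shell error captured above: bdev_raid.sh line 617 evaluates '[' = false ']' because the variable under test expands to empty, leaving [ a unary operator with no operand. The run continues only because the failing test happens to select the intended branch. The usual fix is to quote the expansion or switch to [[ ]]; illustrated with a stand-in name, since the real variable is not visible in the trace:

    var=""                                 # whatever bdev_raid.sh tests at line 617
    if [ "$var" = false ]; then :; fi      # quoted: compares "" to "false", no error
    if [[ $var == false ]]; then :; fi     # [[ ]] does not word-split, safe unquoted
    # the broken form, for reference -- expands to '[ = false ]':
    # if [ $var = false ]; then :; fi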
00:18:52.289 "data_offset": 2048, 00:18:52.289 "data_size": 63488 00:18:52.289 }, 00:18:52.289 { 00:18:52.289 "name": "BaseBdev2", 00:18:52.289 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:52.289 "is_configured": true, 00:18:52.289 "data_offset": 2048, 00:18:52.289 "data_size": 63488 00:18:52.289 } 00:18:52.289 ] 00:18:52.289 }' 00:18:52.289 11:29:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:52.289 11:29:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.289 11:29:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:52.289 11:29:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.289 11:29:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:52.547 [2024-11-26 11:29:10.606936] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.483 [2024-11-26 11:29:11.577635] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:53.483 "name": "raid_bdev1", 00:18:53.483 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:53.483 "strip_size_kb": 0, 00:18:53.483 "state": "online", 00:18:53.483 "raid_level": "raid1", 00:18:53.483 "superblock": true, 00:18:53.483 "num_base_bdevs": 2, 00:18:53.483 "num_base_bdevs_discovered": 2, 00:18:53.483 "num_base_bdevs_operational": 2, 00:18:53.483 "process": { 00:18:53.483 "type": "rebuild", 00:18:53.483 "target": "spare", 00:18:53.483 "progress": { 00:18:53.483 "blocks": 38912, 00:18:53.483 "percent": 61 00:18:53.483 } 00:18:53.483 }, 00:18:53.483 "base_bdevs_list": [ 00:18:53.483 { 00:18:53.483 "name": "spare", 00:18:53.483 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:53.483 "is_configured": true, 00:18:53.483 "data_offset": 2048, 00:18:53.483 "data_size": 63488 00:18:53.483 }, 00:18:53.483 { 00:18:53.483 "name": "BaseBdev2", 00:18:53.483 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:53.483 "is_configured": true, 00:18:53.483 "data_offset": 2048, 00:18:53.483 "data_size": 63488 00:18:53.483 } 00:18:53.483 ] 00:18:53.483 }' 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.483 11:29:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:54.050 [2024-11-26 11:29:12.009724] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:54.617 [2024-11-26 11:29:12.687753] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.617 11:29:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.875 11:29:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:54.875 "name": "raid_bdev1", 00:18:54.875 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:54.875 "strip_size_kb": 0, 00:18:54.875 "state": "online", 00:18:54.875 "raid_level": "raid1", 00:18:54.875 "superblock": true, 00:18:54.875 "num_base_bdevs": 2, 00:18:54.875 "num_base_bdevs_discovered": 2, 00:18:54.875 "num_base_bdevs_operational": 2, 00:18:54.875 "process": { 00:18:54.875 "type": "rebuild", 00:18:54.875 "target": "spare", 00:18:54.875 "progress": { 00:18:54.875 "blocks": 59392, 00:18:54.875 "percent": 93 00:18:54.875 } 00:18:54.875 }, 00:18:54.875 "base_bdevs_list": [ 00:18:54.875 { 00:18:54.875 "name": "spare", 00:18:54.875 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:54.875 "is_configured": true, 00:18:54.875 "data_offset": 2048, 00:18:54.875 "data_size": 63488 00:18:54.875 }, 00:18:54.875 { 00:18:54.875 "name": "BaseBdev2", 00:18:54.875 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:54.875 "is_configured": true, 00:18:54.875 "data_offset": 2048, 00:18:54.875 "data_size": 63488 00:18:54.875 } 00:18:54.875 ] 00:18:54.875 }' 00:18:54.875 11:29:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:54.875 11:29:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.875 11:29:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:54.875 11:29:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.875 11:29:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:55.133 [2024-11-26 11:29:13.118489] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:55.133 [2024-11-26 11:29:13.225029] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:55.134 [2024-11-26 11:29:13.226022] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.069 11:29:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 
00:18:56.069 "name": "raid_bdev1", 00:18:56.069 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:56.069 "strip_size_kb": 0, 00:18:56.069 "state": "online", 00:18:56.069 "raid_level": "raid1", 00:18:56.069 "superblock": true, 00:18:56.069 "num_base_bdevs": 2, 00:18:56.069 "num_base_bdevs_discovered": 2, 00:18:56.069 "num_base_bdevs_operational": 2, 00:18:56.069 "base_bdevs_list": [ 00:18:56.069 { 00:18:56.069 "name": "spare", 00:18:56.069 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:56.069 "is_configured": true, 00:18:56.069 "data_offset": 2048, 00:18:56.069 "data_size": 63488 00:18:56.069 }, 00:18:56.069 { 00:18:56.069 "name": "BaseBdev2", 00:18:56.069 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:56.069 "is_configured": true, 00:18:56.069 "data_offset": 2048, 00:18:56.069 "data_size": 63488 00:18:56.069 } 00:18:56.069 ] 00:18:56.069 }' 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@660 -- # break 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:56.069 11:29:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:56.070 11:29:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:56.070 11:29:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:56.070 11:29:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.070 11:29:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:56.328 "name": "raid_bdev1", 00:18:56.328 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:56.328 "strip_size_kb": 0, 00:18:56.328 "state": "online", 00:18:56.328 "raid_level": "raid1", 00:18:56.328 "superblock": true, 00:18:56.328 "num_base_bdevs": 2, 00:18:56.328 "num_base_bdevs_discovered": 2, 00:18:56.328 "num_base_bdevs_operational": 2, 00:18:56.328 "base_bdevs_list": [ 00:18:56.328 { 00:18:56.328 "name": "spare", 00:18:56.328 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:56.328 "is_configured": true, 00:18:56.328 "data_offset": 2048, 00:18:56.328 "data_size": 63488 00:18:56.328 }, 00:18:56.328 { 00:18:56.328 "name": "BaseBdev2", 00:18:56.328 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:56.328 "is_configured": true, 00:18:56.328 "data_offset": 2048, 00:18:56.328 "data_size": 63488 00:18:56.328 } 00:18:56.328 ] 00:18:56.328 }' 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@120 -- 
# local strip_size=0 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.328 11:29:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.587 11:29:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.587 "name": "raid_bdev1", 00:18:56.587 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:18:56.587 "strip_size_kb": 0, 00:18:56.587 "state": "online", 00:18:56.587 "raid_level": "raid1", 00:18:56.587 "superblock": true, 00:18:56.587 "num_base_bdevs": 2, 00:18:56.587 "num_base_bdevs_discovered": 2, 00:18:56.587 "num_base_bdevs_operational": 2, 00:18:56.587 "base_bdevs_list": [ 00:18:56.587 { 00:18:56.587 "name": "spare", 00:18:56.587 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:18:56.587 "is_configured": true, 00:18:56.587 "data_offset": 2048, 00:18:56.587 "data_size": 63488 00:18:56.587 }, 00:18:56.587 { 00:18:56.587 "name": "BaseBdev2", 00:18:56.587 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:18:56.587 "is_configured": true, 00:18:56.587 "data_offset": 2048, 00:18:56.587 "data_size": 63488 00:18:56.587 } 00:18:56.587 ] 00:18:56.587 }' 00:18:56.587 11:29:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.587 11:29:14 -- common/autotest_common.sh@10 -- # set +x 00:18:57.154 11:29:15 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:57.154 [2024-11-26 11:29:15.344625] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.154 [2024-11-26 11:29:15.344664] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.154 00:18:57.154 Latency(us) 00:18:57.154 [2024-11-26T11:29:15.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.154 [2024-11-26T11:29:15.384Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:57.154 raid_bdev1 : 10.29 95.61 286.82 0.00 0.00 14526.52 253.21 116296.61 00:18:57.154 [2024-11-26T11:29:15.384Z] =================================================================================================================== 00:18:57.154 [2024-11-26T11:29:15.384Z] Total : 95.61 286.82 0.00 0.00 14526.52 253.21 116296.61 00:18:57.413 0 00:18:57.413 [2024-11-26 11:29:15.396370] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.413 [2024-11-26 11:29:15.396439] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.413 [2024-11-26 11:29:15.396536] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.413 [2024-11-26 11:29:15.396556] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:18:57.413 11:29:15 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.413 11:29:15 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:57.672 11:29:15 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 
0 ]] 00:18:57.672 11:29:15 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:18:57.672 11:29:15 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@12 -- # local i 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:57.672 11:29:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:18:57.932 /dev/nbd0 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:57.932 11:29:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:57.932 11:29:15 -- common/autotest_common.sh@867 -- # local i 00:18:57.932 11:29:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:57.932 11:29:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:57.932 11:29:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:57.932 11:29:15 -- common/autotest_common.sh@871 -- # break 00:18:57.932 11:29:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:57.932 11:29:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:57.932 11:29:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:57.932 1+0 records in 00:18:57.932 1+0 records out 00:18:57.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439481 s, 9.3 MB/s 00:18:57.932 11:29:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:57.932 11:29:15 -- common/autotest_common.sh@884 -- # size=4096 00:18:57.932 11:29:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:57.932 11:29:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:57.932 11:29:15 -- common/autotest_common.sh@887 -- # return 0 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:57.932 11:29:15 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:18:57.932 11:29:15 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:18:57.932 11:29:15 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@12 -- # local i 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:57.932 11:29:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:58.199 /dev/nbd1 00:18:58.199 
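The waitfornbd helper traced above polls /proc/partitions until the nbd node registers, then proves the device actually services I/O with a single 4 KiB O_DIRECT read and a size check on the result. A condensed sketch under the same 20-iteration bound (the inter-poll delay and the temp-file path are assumptions; the trace only shows the checks):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # assumed delay between polls
        done
        # prove the device services reads, not merely that the node exists
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]          # a non-empty read came back
        rm -f /tmp/nbdtest
        return 0
    }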
11:29:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:58.199 11:29:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:58.199 11:29:16 -- common/autotest_common.sh@867 -- # local i 00:18:58.199 11:29:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:58.199 11:29:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:58.199 11:29:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:58.199 11:29:16 -- common/autotest_common.sh@871 -- # break 00:18:58.199 11:29:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:58.199 11:29:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:58.199 11:29:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:58.199 1+0 records in 00:18:58.199 1+0 records out 00:18:58.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266165 s, 15.4 MB/s 00:18:58.199 11:29:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.199 11:29:16 -- common/autotest_common.sh@884 -- # size=4096 00:18:58.199 11:29:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.199 11:29:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:58.199 11:29:16 -- common/autotest_common.sh@887 -- # return 0 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.199 11:29:16 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:58.199 11:29:16 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@51 -- # local i 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.199 11:29:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@41 -- # break 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.474 11:29:16 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@51 -- # local i 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.474 11:29:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.733 
11:29:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@41 -- # break 00:18:58.733 11:29:16 -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.733 11:29:16 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:18:58.733 11:29:16 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:18:58.733 11:29:16 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:18:58.733 11:29:16 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:58.992 11:29:17 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.251 [2024-11-26 11:29:17.332791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.251 [2024-11-26 11:29:17.332903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.252 [2024-11-26 11:29:17.332935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:18:59.252 [2024-11-26 11:29:17.332969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.252 [2024-11-26 11:29:17.335448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.252 [2024-11-26 11:29:17.335490] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.252 [2024-11-26 11:29:17.335582] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:59.252 [2024-11-26 11:29:17.335634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.252 BaseBdev1 00:18:59.252 11:29:17 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:18:59.252 11:29:17 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:18:59.252 11:29:17 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:18:59.510 11:29:17 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:59.769 [2024-11-26 11:29:17.792979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:59.769 [2024-11-26 11:29:17.793066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.769 [2024-11-26 11:29:17.793113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:18:59.769 [2024-11-26 11:29:17.793130] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.769 [2024-11-26 11:29:17.793565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.769 [2024-11-26 11:29:17.793596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:59.769 [2024-11-26 11:29:17.793670] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:18:59.770 [2024-11-26 11:29:17.793688] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) 
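Deleting and re-creating the passthru bdev is how the test forces a fresh examine pass without calling bdev_raid_create again: when BaseBdev1 re-registers, the examine callback reads the on-disk superblock and re-claims the device into raid_bdev1, which is exactly what the NOTICE/DEBUG lines that follow show. The RPC pair, as driven here:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_delete BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # expected in the app log: "raid superblock found on bdev BaseBdev1",
    # then "bdev BaseBdev1 is claimed"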
greater than existing raid bdev raid_bdev1 (1) 00:18:59.770 [2024-11-26 11:29:17.793713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.770 [2024-11-26 11:29:17.793771] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:18:59.770 [2024-11-26 11:29:17.793815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.770 BaseBdev2 00:18:59.770 11:29:17 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:00.029 11:29:18 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:00.288 [2024-11-26 11:29:18.289182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.288 [2024-11-26 11:29:18.289513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.288 [2024-11-26 11:29:18.289600] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:19:00.288 [2024-11-26 11:29:18.289837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.288 [2024-11-26 11:29:18.290469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.288 [2024-11-26 11:29:18.290500] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.288 [2024-11-26 11:29:18.290605] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:00.288 [2024-11-26 11:29:18.290632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.288 spare 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.288 [2024-11-26 11:29:18.390761] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:19:00.288 [2024-11-26 11:29:18.391002] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:00.288 [2024-11-26 11:29:18.391204] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a7e0 00:19:00.288 [2024-11-26 11:29:18.391822] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:19:00.288 [2024-11-26 11:29:18.392028] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:19:00.288 [2024-11-26 11:29:18.392210] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.288 "name": "raid_bdev1", 00:19:00.288 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:19:00.288 "strip_size_kb": 0, 00:19:00.288 "state": "online", 00:19:00.288 "raid_level": "raid1", 00:19:00.288 "superblock": true, 00:19:00.288 "num_base_bdevs": 2, 00:19:00.288 "num_base_bdevs_discovered": 2, 00:19:00.288 "num_base_bdevs_operational": 2, 00:19:00.288 "base_bdevs_list": [ 00:19:00.288 { 00:19:00.288 "name": "spare", 00:19:00.288 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:19:00.288 "is_configured": true, 00:19:00.288 "data_offset": 2048, 00:19:00.288 "data_size": 63488 00:19:00.288 }, 00:19:00.288 { 00:19:00.288 "name": "BaseBdev2", 00:19:00.288 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:19:00.288 "is_configured": true, 00:19:00.288 "data_offset": 2048, 00:19:00.288 "data_size": 63488 00:19:00.288 } 00:19:00.288 ] 00:19:00.288 }' 00:19:00.288 11:29:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.548 11:29:18 -- common/autotest_common.sh@10 -- # set +x 00:19:00.806 11:29:18 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.806 11:29:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:00.806 11:29:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:00.806 11:29:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:00.807 11:29:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:00.807 11:29:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.807 11:29:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.807 11:29:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:00.807 "name": "raid_bdev1", 00:19:00.807 "uuid": "9a0e178c-612e-4f9c-ae00-ddc220a44932", 00:19:00.807 "strip_size_kb": 0, 00:19:00.807 "state": "online", 00:19:00.807 "raid_level": "raid1", 00:19:00.807 "superblock": true, 00:19:00.807 "num_base_bdevs": 2, 00:19:00.807 "num_base_bdevs_discovered": 2, 00:19:00.807 "num_base_bdevs_operational": 2, 00:19:00.807 "base_bdevs_list": [ 00:19:00.807 { 00:19:00.807 "name": "spare", 00:19:00.807 "uuid": "2c58faff-a92d-5f11-8fb2-e09656e70505", 00:19:00.807 "is_configured": true, 00:19:00.807 "data_offset": 2048, 00:19:00.807 "data_size": 63488 00:19:00.807 }, 00:19:00.807 { 00:19:00.807 "name": "BaseBdev2", 00:19:00.807 "uuid": "7608db8a-a956-53c2-a0c1-33b33ee7c188", 00:19:00.807 "is_configured": true, 00:19:00.807 "data_offset": 2048, 00:19:00.807 "data_size": 63488 00:19:00.807 } 00:19:00.807 ] 00:19:00.807 }' 00:19:00.807 11:29:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:00.807 11:29:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:00.807 11:29:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:01.065 11:29:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:01.065 11:29:19 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.065 11:29:19 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:01.324 11:29:19 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.324 11:29:19 -- bdev/bdev_raid.sh@709 -- # killprocess 89544 00:19:01.324 11:29:19 -- common/autotest_common.sh@936 -- # '[' -z 89544 ']' 00:19:01.324 11:29:19 -- common/autotest_common.sh@940 -- # 
kill -0 89544 00:19:01.324 11:29:19 -- common/autotest_common.sh@941 -- # uname 00:19:01.324 11:29:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.324 11:29:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89544 00:19:01.324 killing process with pid 89544 00:19:01.324 Received shutdown signal, test time was about 14.249925 seconds 00:19:01.324 00:19:01.324 Latency(us) 00:19:01.324 [2024-11-26T11:29:19.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.324 [2024-11-26T11:29:19.554Z] =================================================================================================================== 00:19:01.324 [2024-11-26T11:29:19.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.324 11:29:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:01.324 11:29:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:01.324 11:29:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89544' 00:19:01.324 11:29:19 -- common/autotest_common.sh@955 -- # kill 89544 00:19:01.324 [2024-11-26 11:29:19.350080] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.324 11:29:19 -- common/autotest_common.sh@960 -- # wait 89544 00:19:01.324 [2024-11-26 11:29:19.350174] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.324 [2024-11-26 11:29:19.350288] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.324 [2024-11-26 11:29:19.350317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:19:01.324 [2024-11-26 11:29:19.366331] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.324 ************************************ 00:19:01.324 END TEST raid_rebuild_test_sb_io 00:19:01.324 ************************************ 00:19:01.324 11:29:19 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:01.324 00:19:01.324 real 0m18.235s 00:19:01.324 user 0m27.923s 00:19:01.324 sys 0m2.389s 00:19:01.324 11:29:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:01.324 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:19:01.583 11:29:19 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:19:01.584 11:29:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:01.584 11:29:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:01.584 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:19:01.584 ************************************ 00:19:01.584 START TEST raid_rebuild_test 00:19:01.584 ************************************ 00:19:01.584 11:29:19 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:01.584 
11:29:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@544 -- # raid_pid=90053 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@545 -- # waitforlisten 90053 /var/tmp/spdk-raid.sock 00:19:01.584 11:29:19 -- common/autotest_common.sh@829 -- # '[' -z 90053 ']' 00:19:01.584 11:29:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:01.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:01.584 11:29:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.584 11:29:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:01.584 11:29:19 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:01.584 11:29:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.584 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:19:01.584 [2024-11-26 11:29:19.661534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:01.584 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:01.584 Zero copy mechanism will not be used. 
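[Annotation, not captured log output] The raid_rebuild_test run above launches bdevperf as a long-lived RPC target and then drives it entirely over a UNIX-domain socket. A minimal sketch of that launch-and-wait pattern, using the exact flags traced at bdev_raid.sh@543; waitforlisten is the helper from autotest_common.sh, and backgrounding with $! is an assumption, since the trace only shows the resulting raid_pid:

    # Sketch of the harness launch pattern; not a verbatim copy of bdev_raid.sh.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Blocks until the target's RPC socket accepts connections.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

Every rpc.py call that follows in this trace goes through that same -s /var/tmp/spdk-raid.sock socket.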
00:19:01.584 [2024-11-26 11:29:19.661696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90053 ] 00:19:01.584 [2024-11-26 11:29:19.812214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.844 [2024-11-26 11:29:19.846570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.844 [2024-11-26 11:29:19.877672] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.412 11:29:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.412 11:29:20 -- common/autotest_common.sh@862 -- # return 0 00:19:02.412 11:29:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:02.412 11:29:20 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:02.412 11:29:20 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:02.671 BaseBdev1 00:19:02.671 11:29:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:02.671 11:29:20 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:02.671 11:29:20 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.930 BaseBdev2 00:19:02.930 11:29:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:02.930 11:29:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:02.930 11:29:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:03.189 BaseBdev3 00:19:03.189 11:29:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:03.189 11:29:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:03.189 11:29:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:03.448 BaseBdev4 00:19:03.448 11:29:21 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:03.448 spare_malloc 00:19:03.448 11:29:21 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:03.707 spare_delay 00:19:03.707 11:29:21 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:03.966 [2024-11-26 11:29:22.104073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.966 [2024-11-26 11:29:22.104153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.966 [2024-11-26 11:29:22.104182] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:19:03.966 [2024-11-26 11:29:22.104200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.966 [2024-11-26 11:29:22.106667] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.966 [2024-11-26 11:29:22.106724] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.966 spare 00:19:03.966 11:29:22 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:04.226 [2024-11-26 11:29:22.300166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.226 [2024-11-26 11:29:22.302268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.226 [2024-11-26 11:29:22.302355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.226 [2024-11-26 11:29:22.302403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:04.226 [2024-11-26 11:29:22.302525] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:19:04.226 [2024-11-26 11:29:22.302549] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:04.226 [2024-11-26 11:29:22.302698] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:04.226 [2024-11-26 11:29:22.303165] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:19:04.226 [2024-11-26 11:29:22.303220] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:19:04.226 [2024-11-26 11:29:22.303469] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.226 11:29:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.485 11:29:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.485 "name": "raid_bdev1", 00:19:04.485 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:04.485 "strip_size_kb": 0, 00:19:04.485 "state": "online", 00:19:04.485 "raid_level": "raid1", 00:19:04.485 "superblock": false, 00:19:04.485 "num_base_bdevs": 4, 00:19:04.485 "num_base_bdevs_discovered": 4, 00:19:04.485 "num_base_bdevs_operational": 4, 00:19:04.485 "base_bdevs_list": [ 00:19:04.485 { 00:19:04.485 "name": "BaseBdev1", 00:19:04.485 "uuid": "5dc3decb-feee-41da-8861-9ab07c28571a", 00:19:04.485 "is_configured": true, 00:19:04.485 "data_offset": 0, 00:19:04.485 "data_size": 65536 00:19:04.485 }, 00:19:04.485 { 00:19:04.485 "name": "BaseBdev2", 00:19:04.485 "uuid": "4f5046cc-fead-44d0-b5f3-5076e7c33601", 00:19:04.485 "is_configured": true, 00:19:04.485 "data_offset": 0, 00:19:04.485 "data_size": 65536 00:19:04.485 }, 00:19:04.485 { 00:19:04.485 "name": "BaseBdev3", 00:19:04.485 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:04.485 "is_configured": true, 00:19:04.485 "data_offset": 0, 00:19:04.485 "data_size": 65536 00:19:04.485 }, 
00:19:04.485 { 00:19:04.485 "name": "BaseBdev4", 00:19:04.485 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:04.485 "is_configured": true, 00:19:04.485 "data_offset": 0, 00:19:04.485 "data_size": 65536 00:19:04.485 } 00:19:04.485 ] 00:19:04.485 }' 00:19:04.485 11:29:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.485 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:19:04.744 11:29:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:04.744 11:29:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:05.004 [2024-11-26 11:29:23.032596] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.004 11:29:23 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:05.004 11:29:23 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.004 11:29:23 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:05.262 11:29:23 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:05.262 11:29:23 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:05.262 11:29:23 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:05.262 11:29:23 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@12 -- # local i 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.262 11:29:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:05.262 [2024-11-26 11:29:23.488493] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:19:05.262 /dev/nbd0 00:19:05.521 11:29:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:05.521 11:29:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:05.521 11:29:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:05.521 11:29:23 -- common/autotest_common.sh@867 -- # local i 00:19:05.521 11:29:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:05.521 11:29:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:05.521 11:29:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:05.521 11:29:23 -- common/autotest_common.sh@871 -- # break 00:19:05.521 11:29:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:05.521 11:29:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:05.521 11:29:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.521 1+0 records in 00:19:05.521 1+0 records out 00:19:05.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252267 s, 16.2 MB/s 00:19:05.521 11:29:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.521 11:29:23 -- common/autotest_common.sh@884 -- # size=4096 00:19:05.521 11:29:23 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.521 11:29:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:05.521 11:29:23 -- common/autotest_common.sh@887 -- # return 0 00:19:05.521 11:29:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.521 11:29:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.521 11:29:23 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:05.521 11:29:23 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:05.521 11:29:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:12.087 65536+0 records in 00:19:12.087 65536+0 records out 00:19:12.087 33554432 bytes (34 MB, 32 MiB) copied, 6.31891 s, 5.3 MB/s 00:19:12.087 11:29:29 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:12.087 11:29:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:12.087 11:29:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:12.087 11:29:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.087 11:29:29 -- bdev/nbd_common.sh@51 -- # local i 00:19:12.087 11:29:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.087 11:29:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:12.087 [2024-11-26 11:29:30.117161] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@41 -- # break 00:19:12.087 11:29:30 -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:12.087 [2024-11-26 11:29:30.301318] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.087 11:29:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.346 11:29:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.346 "name": "raid_bdev1", 00:19:12.346 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:12.346 "strip_size_kb": 0, 00:19:12.346 "state": "online", 00:19:12.346 
"raid_level": "raid1", 00:19:12.346 "superblock": false, 00:19:12.346 "num_base_bdevs": 4, 00:19:12.346 "num_base_bdevs_discovered": 3, 00:19:12.346 "num_base_bdevs_operational": 3, 00:19:12.346 "base_bdevs_list": [ 00:19:12.346 { 00:19:12.346 "name": null, 00:19:12.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.346 "is_configured": false, 00:19:12.346 "data_offset": 0, 00:19:12.346 "data_size": 65536 00:19:12.346 }, 00:19:12.346 { 00:19:12.346 "name": "BaseBdev2", 00:19:12.346 "uuid": "4f5046cc-fead-44d0-b5f3-5076e7c33601", 00:19:12.346 "is_configured": true, 00:19:12.346 "data_offset": 0, 00:19:12.346 "data_size": 65536 00:19:12.346 }, 00:19:12.346 { 00:19:12.346 "name": "BaseBdev3", 00:19:12.346 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:12.346 "is_configured": true, 00:19:12.346 "data_offset": 0, 00:19:12.346 "data_size": 65536 00:19:12.346 }, 00:19:12.346 { 00:19:12.347 "name": "BaseBdev4", 00:19:12.347 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:12.347 "is_configured": true, 00:19:12.347 "data_offset": 0, 00:19:12.347 "data_size": 65536 00:19:12.347 } 00:19:12.347 ] 00:19:12.347 }' 00:19:12.347 11:29:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.347 11:29:30 -- common/autotest_common.sh@10 -- # set +x 00:19:12.606 11:29:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.865 [2024-11-26 11:29:31.029522] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:12.865 [2024-11-26 11:29:31.029596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.865 [2024-11-26 11:29:31.032149] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09620 00:19:12.865 [2024-11-26 11:29:31.034356] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:12.865 11:29:31 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:14.242 "name": "raid_bdev1", 00:19:14.242 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:14.242 "strip_size_kb": 0, 00:19:14.242 "state": "online", 00:19:14.242 "raid_level": "raid1", 00:19:14.242 "superblock": false, 00:19:14.242 "num_base_bdevs": 4, 00:19:14.242 "num_base_bdevs_discovered": 4, 00:19:14.242 "num_base_bdevs_operational": 4, 00:19:14.242 "process": { 00:19:14.242 "type": "rebuild", 00:19:14.242 "target": "spare", 00:19:14.242 "progress": { 00:19:14.242 "blocks": 24576, 00:19:14.242 "percent": 37 00:19:14.242 } 00:19:14.242 }, 00:19:14.242 "base_bdevs_list": [ 00:19:14.242 { 00:19:14.242 "name": "spare", 00:19:14.242 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:14.242 "is_configured": true, 00:19:14.242 "data_offset": 0, 00:19:14.242 "data_size": 65536 00:19:14.242 }, 
00:19:14.242 { 00:19:14.242 "name": "BaseBdev2", 00:19:14.242 "uuid": "4f5046cc-fead-44d0-b5f3-5076e7c33601", 00:19:14.242 "is_configured": true, 00:19:14.242 "data_offset": 0, 00:19:14.242 "data_size": 65536 00:19:14.242 }, 00:19:14.242 { 00:19:14.242 "name": "BaseBdev3", 00:19:14.242 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:14.242 "is_configured": true, 00:19:14.242 "data_offset": 0, 00:19:14.242 "data_size": 65536 00:19:14.242 }, 00:19:14.242 { 00:19:14.242 "name": "BaseBdev4", 00:19:14.242 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:14.242 "is_configured": true, 00:19:14.242 "data_offset": 0, 00:19:14.242 "data_size": 65536 00:19:14.242 } 00:19:14.242 ] 00:19:14.242 }' 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.242 11:29:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:14.502 [2024-11-26 11:29:32.568205] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.502 [2024-11-26 11:29:32.642565] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:14.502 [2024-11-26 11:29:32.642635] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.502 11:29:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.761 11:29:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.761 "name": "raid_bdev1", 00:19:14.761 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:14.761 "strip_size_kb": 0, 00:19:14.761 "state": "online", 00:19:14.761 "raid_level": "raid1", 00:19:14.761 "superblock": false, 00:19:14.761 "num_base_bdevs": 4, 00:19:14.761 "num_base_bdevs_discovered": 3, 00:19:14.761 "num_base_bdevs_operational": 3, 00:19:14.761 "base_bdevs_list": [ 00:19:14.761 { 00:19:14.761 "name": null, 00:19:14.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.761 "is_configured": false, 00:19:14.761 "data_offset": 0, 00:19:14.761 "data_size": 65536 00:19:14.761 }, 00:19:14.761 { 00:19:14.761 "name": "BaseBdev2", 00:19:14.761 "uuid": "4f5046cc-fead-44d0-b5f3-5076e7c33601", 00:19:14.761 "is_configured": true, 00:19:14.761 "data_offset": 0, 00:19:14.761 "data_size": 65536 00:19:14.761 }, 00:19:14.761 { 00:19:14.761 "name": "BaseBdev3", 
00:19:14.761 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:14.761 "is_configured": true, 00:19:14.761 "data_offset": 0, 00:19:14.761 "data_size": 65536 00:19:14.761 }, 00:19:14.761 { 00:19:14.761 "name": "BaseBdev4", 00:19:14.761 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:14.761 "is_configured": true, 00:19:14.761 "data_offset": 0, 00:19:14.761 "data_size": 65536 00:19:14.761 } 00:19:14.761 ] 00:19:14.761 }' 00:19:14.761 11:29:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.761 11:29:32 -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.021 11:29:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.280 11:29:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:15.280 "name": "raid_bdev1", 00:19:15.280 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:15.280 "strip_size_kb": 0, 00:19:15.280 "state": "online", 00:19:15.280 "raid_level": "raid1", 00:19:15.280 "superblock": false, 00:19:15.281 "num_base_bdevs": 4, 00:19:15.281 "num_base_bdevs_discovered": 3, 00:19:15.281 "num_base_bdevs_operational": 3, 00:19:15.281 "base_bdevs_list": [ 00:19:15.281 { 00:19:15.281 "name": null, 00:19:15.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.281 "is_configured": false, 00:19:15.281 "data_offset": 0, 00:19:15.281 "data_size": 65536 00:19:15.281 }, 00:19:15.281 { 00:19:15.281 "name": "BaseBdev2", 00:19:15.281 "uuid": "4f5046cc-fead-44d0-b5f3-5076e7c33601", 00:19:15.281 "is_configured": true, 00:19:15.281 "data_offset": 0, 00:19:15.281 "data_size": 65536 00:19:15.281 }, 00:19:15.281 { 00:19:15.281 "name": "BaseBdev3", 00:19:15.281 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:15.281 "is_configured": true, 00:19:15.281 "data_offset": 0, 00:19:15.281 "data_size": 65536 00:19:15.281 }, 00:19:15.281 { 00:19:15.281 "name": "BaseBdev4", 00:19:15.281 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:15.281 "is_configured": true, 00:19:15.281 "data_offset": 0, 00:19:15.281 "data_size": 65536 00:19:15.281 } 00:19:15.281 ] 00:19:15.281 }' 00:19:15.281 11:29:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:15.281 11:29:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:15.281 11:29:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:15.281 11:29:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:15.281 11:29:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:15.540 [2024-11-26 11:29:33.661915] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:15.540 [2024-11-26 11:29:33.661991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.540 [2024-11-26 11:29:33.664313] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d096f0 00:19:15.540 [2024-11-26 11:29:33.666433] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:19:15.540 11:29:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.477 11:29:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.735 11:29:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:16.735 "name": "raid_bdev1", 00:19:16.736 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:16.736 "strip_size_kb": 0, 00:19:16.736 "state": "online", 00:19:16.736 "raid_level": "raid1", 00:19:16.736 "superblock": false, 00:19:16.736 "num_base_bdevs": 4, 00:19:16.736 "num_base_bdevs_discovered": 4, 00:19:16.736 "num_base_bdevs_operational": 4, 00:19:16.736 "process": { 00:19:16.736 "type": "rebuild", 00:19:16.736 "target": "spare", 00:19:16.736 "progress": { 00:19:16.736 "blocks": 24576, 00:19:16.736 "percent": 37 00:19:16.736 } 00:19:16.736 }, 00:19:16.736 "base_bdevs_list": [ 00:19:16.736 { 00:19:16.736 "name": "spare", 00:19:16.736 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:16.736 "is_configured": true, 00:19:16.736 "data_offset": 0, 00:19:16.736 "data_size": 65536 00:19:16.736 }, 00:19:16.736 { 00:19:16.736 "name": "BaseBdev2", 00:19:16.736 "uuid": "4f5046cc-fead-44d0-b5f3-5076e7c33601", 00:19:16.736 "is_configured": true, 00:19:16.736 "data_offset": 0, 00:19:16.736 "data_size": 65536 00:19:16.736 }, 00:19:16.736 { 00:19:16.736 "name": "BaseBdev3", 00:19:16.736 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:16.736 "is_configured": true, 00:19:16.736 "data_offset": 0, 00:19:16.736 "data_size": 65536 00:19:16.736 }, 00:19:16.736 { 00:19:16.736 "name": "BaseBdev4", 00:19:16.736 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:16.736 "is_configured": true, 00:19:16.736 "data_offset": 0, 00:19:16.736 "data_size": 65536 00:19:16.736 } 00:19:16.736 ] 00:19:16.736 }' 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:19:16.736 11:29:34 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:16.994 [2024-11-26 11:29:35.163922] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:16.994 [2024-11-26 11:29:35.173068] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d096f0 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:19:16.994 11:29:35 -- 
bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.994 11:29:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:17.254 "name": "raid_bdev1", 00:19:17.254 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:17.254 "strip_size_kb": 0, 00:19:17.254 "state": "online", 00:19:17.254 "raid_level": "raid1", 00:19:17.254 "superblock": false, 00:19:17.254 "num_base_bdevs": 4, 00:19:17.254 "num_base_bdevs_discovered": 3, 00:19:17.254 "num_base_bdevs_operational": 3, 00:19:17.254 "process": { 00:19:17.254 "type": "rebuild", 00:19:17.254 "target": "spare", 00:19:17.254 "progress": { 00:19:17.254 "blocks": 34816, 00:19:17.254 "percent": 53 00:19:17.254 } 00:19:17.254 }, 00:19:17.254 "base_bdevs_list": [ 00:19:17.254 { 00:19:17.254 "name": "spare", 00:19:17.254 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:17.254 "is_configured": true, 00:19:17.254 "data_offset": 0, 00:19:17.254 "data_size": 65536 00:19:17.254 }, 00:19:17.254 { 00:19:17.254 "name": null, 00:19:17.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.254 "is_configured": false, 00:19:17.254 "data_offset": 0, 00:19:17.254 "data_size": 65536 00:19:17.254 }, 00:19:17.254 { 00:19:17.254 "name": "BaseBdev3", 00:19:17.254 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:17.254 "is_configured": true, 00:19:17.254 "data_offset": 0, 00:19:17.254 "data_size": 65536 00:19:17.254 }, 00:19:17.254 { 00:19:17.254 "name": "BaseBdev4", 00:19:17.254 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:17.254 "is_configured": true, 00:19:17.254 "data_offset": 0, 00:19:17.254 "data_size": 65536 00:19:17.254 } 00:19:17.254 ] 00:19:17.254 }' 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@657 -- # local timeout=396 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.254 11:29:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.513 11:29:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:17.513 "name": "raid_bdev1", 00:19:17.513 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:17.513 "strip_size_kb": 0, 00:19:17.513 
"state": "online", 00:19:17.513 "raid_level": "raid1", 00:19:17.513 "superblock": false, 00:19:17.513 "num_base_bdevs": 4, 00:19:17.513 "num_base_bdevs_discovered": 3, 00:19:17.513 "num_base_bdevs_operational": 3, 00:19:17.513 "process": { 00:19:17.513 "type": "rebuild", 00:19:17.513 "target": "spare", 00:19:17.513 "progress": { 00:19:17.513 "blocks": 38912, 00:19:17.513 "percent": 59 00:19:17.513 } 00:19:17.513 }, 00:19:17.513 "base_bdevs_list": [ 00:19:17.513 { 00:19:17.513 "name": "spare", 00:19:17.513 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:17.513 "is_configured": true, 00:19:17.513 "data_offset": 0, 00:19:17.513 "data_size": 65536 00:19:17.513 }, 00:19:17.513 { 00:19:17.513 "name": null, 00:19:17.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.513 "is_configured": false, 00:19:17.513 "data_offset": 0, 00:19:17.513 "data_size": 65536 00:19:17.513 }, 00:19:17.513 { 00:19:17.513 "name": "BaseBdev3", 00:19:17.513 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:17.513 "is_configured": true, 00:19:17.513 "data_offset": 0, 00:19:17.513 "data_size": 65536 00:19:17.513 }, 00:19:17.513 { 00:19:17.513 "name": "BaseBdev4", 00:19:17.513 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:17.513 "is_configured": true, 00:19:17.513 "data_offset": 0, 00:19:17.513 "data_size": 65536 00:19:17.513 } 00:19:17.513 ] 00:19:17.513 }' 00:19:17.514 11:29:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:17.514 11:29:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.514 11:29:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:17.514 11:29:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.514 11:29:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.891 [2024-11-26 11:29:36.880835] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:18.891 [2024-11-26 11:29:36.880935] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:18.891 [2024-11-26 11:29:36.881009] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:18.891 "name": "raid_bdev1", 00:19:18.891 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:18.891 "strip_size_kb": 0, 00:19:18.891 "state": "online", 00:19:18.891 "raid_level": "raid1", 00:19:18.891 "superblock": false, 00:19:18.891 "num_base_bdevs": 4, 00:19:18.891 "num_base_bdevs_discovered": 3, 00:19:18.891 "num_base_bdevs_operational": 3, 00:19:18.891 "base_bdevs_list": [ 00:19:18.891 { 00:19:18.891 "name": "spare", 00:19:18.891 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:18.891 "is_configured": true, 00:19:18.891 "data_offset": 0, 00:19:18.891 "data_size": 65536 00:19:18.891 
}, 00:19:18.891 { 00:19:18.891 "name": null, 00:19:18.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.891 "is_configured": false, 00:19:18.891 "data_offset": 0, 00:19:18.891 "data_size": 65536 00:19:18.891 }, 00:19:18.891 { 00:19:18.891 "name": "BaseBdev3", 00:19:18.891 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:18.891 "is_configured": true, 00:19:18.891 "data_offset": 0, 00:19:18.891 "data_size": 65536 00:19:18.891 }, 00:19:18.891 { 00:19:18.891 "name": "BaseBdev4", 00:19:18.891 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:18.891 "is_configured": true, 00:19:18.891 "data_offset": 0, 00:19:18.891 "data_size": 65536 00:19:18.891 } 00:19:18.891 ] 00:19:18.891 }' 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@660 -- # break 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.891 11:29:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:19.150 "name": "raid_bdev1", 00:19:19.150 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:19.150 "strip_size_kb": 0, 00:19:19.150 "state": "online", 00:19:19.150 "raid_level": "raid1", 00:19:19.150 "superblock": false, 00:19:19.150 "num_base_bdevs": 4, 00:19:19.150 "num_base_bdevs_discovered": 3, 00:19:19.150 "num_base_bdevs_operational": 3, 00:19:19.150 "base_bdevs_list": [ 00:19:19.150 { 00:19:19.150 "name": "spare", 00:19:19.150 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:19.150 "is_configured": true, 00:19:19.150 "data_offset": 0, 00:19:19.150 "data_size": 65536 00:19:19.150 }, 00:19:19.150 { 00:19:19.150 "name": null, 00:19:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.150 "is_configured": false, 00:19:19.150 "data_offset": 0, 00:19:19.150 "data_size": 65536 00:19:19.150 }, 00:19:19.150 { 00:19:19.150 "name": "BaseBdev3", 00:19:19.150 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:19.150 "is_configured": true, 00:19:19.150 "data_offset": 0, 00:19:19.150 "data_size": 65536 00:19:19.150 }, 00:19:19.150 { 00:19:19.150 "name": "BaseBdev4", 00:19:19.150 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:19.150 "is_configured": true, 00:19:19.150 "data_offset": 0, 00:19:19.150 "data_size": 65536 00:19:19.150 } 00:19:19.150 ] 00:19:19.150 }' 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:19.150 
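[Annotation, not captured log output] Each verify_raid_bdev_process / verify_raid_bdev_state call traced in this run reduces to a single bdev_raid_get_bdevs RPC plus jq filters, with the '// "none"' alternative supplying a default when no rebuild process is attached to the bdev. A minimal sketch of the post-rebuild check performed here, with expected values mirroring the 'online raid1 0 3' invocation above; the real helper in bdev_raid.sh carries additional bookkeeping:

    # Sketch of the jq-based state check; assumes the same rpc.py and socket as this run.
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<<"$info") == none ]]    # rebuild finished, no process attached
    [[ $(jq -r '.state' <<<"$info") == online ]]
    [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 3 ]]   # spare, BaseBdev3, BaseBdev4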
11:29:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.150 11:29:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.409 11:29:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.409 "name": "raid_bdev1", 00:19:19.409 "uuid": "adefeb41-3248-43e8-9582-9c62ef213660", 00:19:19.409 "strip_size_kb": 0, 00:19:19.409 "state": "online", 00:19:19.409 "raid_level": "raid1", 00:19:19.409 "superblock": false, 00:19:19.409 "num_base_bdevs": 4, 00:19:19.409 "num_base_bdevs_discovered": 3, 00:19:19.409 "num_base_bdevs_operational": 3, 00:19:19.409 "base_bdevs_list": [ 00:19:19.409 { 00:19:19.409 "name": "spare", 00:19:19.409 "uuid": "b96fb7fb-0d48-5094-b052-c15a3d93a306", 00:19:19.409 "is_configured": true, 00:19:19.409 "data_offset": 0, 00:19:19.409 "data_size": 65536 00:19:19.409 }, 00:19:19.409 { 00:19:19.409 "name": null, 00:19:19.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.409 "is_configured": false, 00:19:19.409 "data_offset": 0, 00:19:19.409 "data_size": 65536 00:19:19.409 }, 00:19:19.409 { 00:19:19.409 "name": "BaseBdev3", 00:19:19.409 "uuid": "1865d4da-c791-45dd-b95d-ab85600e66da", 00:19:19.409 "is_configured": true, 00:19:19.409 "data_offset": 0, 00:19:19.409 "data_size": 65536 00:19:19.409 }, 00:19:19.409 { 00:19:19.409 "name": "BaseBdev4", 00:19:19.409 "uuid": "dfe6eb81-8c1f-4077-b995-157594de0140", 00:19:19.409 "is_configured": true, 00:19:19.409 "data_offset": 0, 00:19:19.409 "data_size": 65536 00:19:19.409 } 00:19:19.409 ] 00:19:19.409 }' 00:19:19.409 11:29:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.409 11:29:37 -- common/autotest_common.sh@10 -- # set +x 00:19:19.668 11:29:37 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:19.926 [2024-11-26 11:29:38.037979] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.926 [2024-11-26 11:29:38.038038] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.926 [2024-11-26 11:29:38.038125] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.926 [2024-11-26 11:29:38.038220] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.926 [2024-11-26 11:29:38.038268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:19:19.926 11:29:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:19.926 11:29:38 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.184 11:29:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:20.184 11:29:38 
-- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:20.184 11:29:38 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@12 -- # local i 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.184 11:29:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:20.444 /dev/nbd0 00:19:20.444 11:29:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:20.444 11:29:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:20.444 11:29:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:20.444 11:29:38 -- common/autotest_common.sh@867 -- # local i 00:19:20.444 11:29:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:20.444 11:29:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:20.444 11:29:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:20.444 11:29:38 -- common/autotest_common.sh@871 -- # break 00:19:20.444 11:29:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:20.444 11:29:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:20.444 11:29:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.444 1+0 records in 00:19:20.444 1+0 records out 00:19:20.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298207 s, 13.7 MB/s 00:19:20.444 11:29:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.444 11:29:38 -- common/autotest_common.sh@884 -- # size=4096 00:19:20.444 11:29:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.444 11:29:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:20.444 11:29:38 -- common/autotest_common.sh@887 -- # return 0 00:19:20.444 11:29:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.444 11:29:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.444 11:29:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:20.703 /dev/nbd1 00:19:20.703 11:29:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:20.703 11:29:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:20.703 11:29:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:20.703 11:29:38 -- common/autotest_common.sh@867 -- # local i 00:19:20.703 11:29:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:20.703 11:29:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:20.703 11:29:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:20.703 11:29:38 -- common/autotest_common.sh@871 -- # break 00:19:20.703 11:29:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:20.703 11:29:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:20.703 11:29:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:20.703 1+0 records in
00:19:20.703 1+0 records out
00:19:20.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371569 s, 11.0 MB/s
00:19:20.703 11:29:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:20.703 11:29:38 -- common/autotest_common.sh@884 -- # size=4096
00:19:20.703 11:29:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:20.703 11:29:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:20.703 11:29:38 -- common/autotest_common.sh@887 -- # return 0
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:20.703 11:29:38 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:19:20.703 11:29:38 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@51 -- # local i
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:20.703 11:29:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@41 -- # break
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@45 -- # return 0
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:20.962 11:29:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@41 -- # break
00:19:21.220 11:29:39 -- bdev/nbd_common.sh@45 -- # return 0
00:19:21.220 11:29:39 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:19:21.220 11:29:39 -- bdev/bdev_raid.sh@709 -- # killprocess 90053
00:19:21.220 11:29:39 -- common/autotest_common.sh@936 -- # '[' -z 90053 ']'
00:19:21.220 11:29:39 -- common/autotest_common.sh@940 -- # kill -0 90053
00:19:21.220 11:29:39 -- common/autotest_common.sh@941 -- # uname
00:19:21.220 11:29:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:21.220 11:29:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90053
00:19:21.220 killing process with pid 90053 Received shutdown signal, test time was about 60.000000 seconds
00:19:21.220
00:19:21.220 Latency(us)
00:19:21.220 [2024-11-26T11:29:39.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:21.220 [2024-11-26T11:29:39.450Z] ===================================================================================================================
00:19:21.220 [2024-11-26T11:29:39.450Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:21.220 11:29:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:21.220 11:29:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:21.220 11:29:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90053'
00:19:21.220 11:29:39 -- common/autotest_common.sh@955 -- # kill 90053
00:19:21.220 11:29:39 -- common/autotest_common.sh@960 -- # wait 90053
00:19:21.220 [2024-11-26 11:29:39.441646] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:21.479 [2024-11-26 11:29:39.472497] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@711 -- # return 0
00:19:21.479
00:19:21.479 real 0m20.044s
00:19:21.479 user 0m26.060s
00:19:21.479 sys 0m4.425s
00:19:21.479 11:29:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:21.479 ************************************
00:19:21.479 END TEST raid_rebuild_test
00:19:21.479 11:29:39 -- common/autotest_common.sh@10 -- # set +x
00:19:21.479 ************************************
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false
00:19:21.479 11:29:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:19:21.479 11:29:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:21.479 11:29:39 -- common/autotest_common.sh@10 -- # set +x
00:19:21.479 ************************************
00:19:21.479 START TEST raid_rebuild_test_sb
00:19:21.479 ************************************
00:19:21.479 11:29:39 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@519 -- # local superblock=true
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@520 -- # local background_io=false
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@523 -- # local strip_size
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@524 -- # local create_arg
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@526 -- # local data_offset
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@536 -- # strip_size=0
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:19:21.479 11:29:39 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 11:29:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=90549 11:29:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 90549 /var/tmp/spdk-raid.sock 11:29:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:21.480 11:29:39 -- common/autotest_common.sh@829 -- # '[' -z 90549 ']'
00:19:21.480 11:29:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:21.480 11:29:39 -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:21.480 11:29:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:21.480 11:29:39 -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:21.480 11:29:39 -- common/autotest_common.sh@10 -- # set +x
00:19:21.739 [2024-11-26 11:29:39.758234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-26 11:29:39.758585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90549 ]
00:19:21.739 I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:21.739 Zero copy mechanism will not be used.
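
Note: the harness above launches bdevperf with -z so the app comes up idle and is driven entirely over the /var/tmp/spdk-raid.sock RPC socket; waitforlisten then blocks until that socket answers before any bdev RPCs are issued. A minimal standalone sketch of that launch-and-wait step, assuming the repo paths from this log (polling rpc_get_methods is one simple readiness probe, not necessarily what waitforlisten does internally):

# sketch: start bdevperf idle (-z) and wait for its RPC socket
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
"$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# poll an innocuous RPC until the socket accepts requests
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done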
00:19:21.739 [2024-11-26 11:29:39.913092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:21.739 [2024-11-26 11:29:39.948059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:21.998 [2024-11-26 11:29:39.982308] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:22.568 11:29:40 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:22.568 11:29:40 -- common/autotest_common.sh@862 -- # return 0
00:19:22.568 11:29:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:22.568 11:29:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:22.568 11:29:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:19:22.827 BaseBdev1_malloc
00:19:22.827 11:29:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:23.086 [2024-11-26 11:29:41.249069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:23.086 [2024-11-26 11:29:41.249345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:23.086 [2024-11-26 11:29:41.249409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980
00:19:23.086 [2024-11-26 11:29:41.249433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:23.086 [2024-11-26 11:29:41.252235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:23.086 BaseBdev1 [2024-11-26 11:29:41.252491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:23.086 11:29:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:23.086 11:29:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:23.086 11:29:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:19:23.344 BaseBdev2_malloc
00:19:23.344 11:29:41 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:23.603 [2024-11-26 11:29:41.675664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:23.603 [2024-11-26 11:29:41.675965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:23.603 [2024-11-26 11:29:41.676063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580
00:19:23.603 [2024-11-26 11:29:41.676281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:23.603 [2024-11-26 11:29:41.678922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:23.603 [2024-11-26 11:29:41.679127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:23.603 BaseBdev2
00:19:23.603 11:29:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:23.603 11:29:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:23.603 11:29:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:19:23.861 BaseBdev3_malloc
00:19:23.861 11:29:41 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:19:24.119 [2024-11-26 11:29:42.142098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:19:24.119 [2024-11-26 11:29:42.142177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:24.119 [2024-11-26 11:29:42.142209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180
00:19:24.119 [2024-11-26 11:29:42.142248] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:24.119 [2024-11-26 11:29:42.144972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:24.119 [2024-11-26 11:29:42.145056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:19:24.119 BaseBdev3
00:19:24.119 11:29:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:24.119 11:29:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:24.119 11:29:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:19:24.376 BaseBdev4_malloc
00:19:24.376 11:29:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:19:24.376 [2024-11-26 11:29:42.569165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:19:24.376 [2024-11-26 11:29:42.569249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:24.376 [2024-11-26 11:29:42.569283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80
00:19:24.376 [2024-11-26 11:29:42.569317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:24.376 [2024-11-26 11:29:42.572029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:24.376 [2024-11-26 11:29:42.572217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:19:24.376 BaseBdev4
00:19:24.376 11:29:42 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:19:24.632 spare_malloc
00:19:24.632 11:29:42 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:24.890 spare_delay
00:19:24.890 11:29:43 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:25.147 [2024-11-26 11:29:43.252038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:25.147 [2024-11-26 11:29:43.252137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:25.147 [2024-11-26 11:29:43.252172] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80
00:19:25.147 [2024-11-26 11:29:43.252188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:25.147 [2024-11-26 11:29:43.254807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:25.147 [2024-11-26 11:29:43.254851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:25.147 spare
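
Note: every base bdev above is a layered stack. The four array members are a 32 MiB malloc bdev (512-byte blocks) wrapped in a passthru bdev, and the spare inserts a delay bdev in the middle so rebuild writes can be slowed down and observed mid-flight. A condensed sketch of the layering, using exactly the RPCs the trace just issued:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# array member: malloc -> passthru (repeated for BaseBdev1..BaseBdev4)
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
# spare: malloc -> delay -> passthru; -r/-t are avg/p99 read latency and
# -w/-n avg/p99 write latency in microseconds, so writes lag by ~100 ms
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare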
00:19:25.147 11:29:43 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:19:25.404 [2024-11-26 11:29:43.464136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:25.404 [2024-11-26 11:29:43.466427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:25.404 [2024-11-26 11:29:43.466507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:25.404 [2024-11-26 11:29:43.466629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:19:25.404 [2024-11-26 11:29:43.466847] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580
00:19:25.404 [2024-11-26 11:29:43.466899] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:25.404 [2024-11-26 11:29:43.467020] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860
00:19:25.404 [2024-11-26 11:29:43.467451] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580
00:19:25.404 [2024-11-26 11:29:43.467481] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580
00:19:25.404 [2024-11-26 11:29:43.467664] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:25.405 11:29:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:25.663 11:29:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:25.663 "name": "raid_bdev1",
00:19:25.663 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d",
00:19:25.663 "strip_size_kb": 0,
00:19:25.663 "state": "online",
00:19:25.663 "raid_level": "raid1",
00:19:25.663 "superblock": true,
00:19:25.663 "num_base_bdevs": 4,
00:19:25.663 "num_base_bdevs_discovered": 4,
00:19:25.663 "num_base_bdevs_operational": 4,
00:19:25.663 "base_bdevs_list": [
00:19:25.663 {
00:19:25.663 "name": "BaseBdev1",
00:19:25.663 "uuid": "71848575-ce54-535b-952b-4e717294e936",
00:19:25.663 "is_configured": true,
00:19:25.663 "data_offset": 2048,
00:19:25.663 "data_size": 63488
00:19:25.663 },
00:19:25.663 {
00:19:25.663 "name": "BaseBdev2",
00:19:25.663 "uuid": "89b6a3a4-3768-5272-bca7-d61846b24404",
00:19:25.663 "is_configured": true,
00:19:25.663 "data_offset": 2048,
00:19:25.663 "data_size": 63488
00:19:25.663 },
00:19:25.663 {
00:19:25.663 "name": "BaseBdev3",
00:19:25.663 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b",
00:19:25.663 "is_configured": true,
00:19:25.663 "data_offset": 2048,
00:19:25.663 "data_size": 63488
00:19:25.663 },
00:19:25.663 {
00:19:25.663 "name": "BaseBdev4",
00:19:25.663 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4",
00:19:25.663 "is_configured": true,
00:19:25.663 "data_offset": 2048,
00:19:25.663 "data_size": 63488
00:19:25.663 }
00:19:25.663 ]
00:19:25.663 }'
00:19:25.663 11:29:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:25.663 11:29:43 -- common/autotest_common.sh@10 -- # set +x
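
Note: verify_raid_bdev_state builds the JSON dump above with a bdev_raid_get_bdevs RPC piped through jq and then asserts on individual fields. The same check can be replayed by hand with the field paths the trace just used:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# fetch the raid bdev record and pull out the asserted fields
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'                      # expect: online
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'  # expect: 4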
00:19:25.921 11:29:44 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:25.921 11:29:44 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:19:26.179 [2024-11-26 11:29:44.220548] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:26.179 11:29:44 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:19:26.179 11:29:44 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:26.179 11:29:44 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:26.437 11:29:44 -- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:19:26.437 11:29:44 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:19:26.437 11:29:44 -- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:19:26.437 11:29:44 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@12 -- # local i
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:26.437 11:29:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:19:26.437 [2024-11-26 11:29:44.648478] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00
00:19:26.437 /dev/nbd0
00:19:26.695 11:29:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:26.695 11:29:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:26.695 11:29:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:26.695 11:29:44 -- common/autotest_common.sh@867 -- # local i
00:19:26.695 11:29:44 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:26.695 11:29:44 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:26.695 11:29:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:26.695 11:29:44 -- common/autotest_common.sh@871 -- # break
00:19:26.695 11:29:44 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:26.695 11:29:44 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:26.695 11:29:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:26.695 1+0 records in
00:19:26.695 1+0 records out
00:19:26.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439831 s, 9.3 MB/s
00:19:26.695 11:29:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:26.695 11:29:44 -- common/autotest_common.sh@884 -- # size=4096
00:19:26.695 11:29:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:26.695 11:29:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:26.695 11:29:44 -- common/autotest_common.sh@887 -- # return 0
00:19:26.695 11:29:44 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:26.695 11:29:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:26.695 11:29:44 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:19:26.695 11:29:44 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:19:26.695 11:29:44 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:19:34.838 63488+0 records in
00:19:34.838 63488+0 records out
00:19:34.838 32505856 bytes (33 MB, 31 MiB) copied, 7.34061 s, 4.4 MB/s
00:19:34.838 11:29:52 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@51 -- # local i
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:34.838 [2024-11-26 11:29:52.301863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@41 -- # break
00:19:34.838 11:29:52 -- bdev/nbd_common.sh@45 -- # return 0
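
Note: the write that just finished filled the raid1 bdev's entire usable region through /dev/nbd0. Each 32 MiB malloc base bdev holds 65536 blocks of 512 bytes; the superblock reserves a data_offset of 2048 blocks, so the exported array size is 65536 - 2048 = 63488 blocks, matching the num_blocks the test read back and the dd result (63488 x 512 = 32505856 bytes, the 31 MiB reported above). A sketch of the same fill against any nbd-exported raid bdev:

# fill the exported data region with random bytes; count matches the
# num_blocks value fetched via bdev_get_bdevs earlier in the trace
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct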
"raid_level": "raid1", 00:19:34.838 "superblock": true, 00:19:34.838 "num_base_bdevs": 4, 00:19:34.838 "num_base_bdevs_discovered": 3, 00:19:34.838 "num_base_bdevs_operational": 3, 00:19:34.838 "base_bdevs_list": [ 00:19:34.838 { 00:19:34.838 "name": null, 00:19:34.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.838 "is_configured": false, 00:19:34.838 "data_offset": 2048, 00:19:34.838 "data_size": 63488 00:19:34.838 }, 00:19:34.838 { 00:19:34.838 "name": "BaseBdev2", 00:19:34.838 "uuid": "89b6a3a4-3768-5272-bca7-d61846b24404", 00:19:34.839 "is_configured": true, 00:19:34.839 "data_offset": 2048, 00:19:34.839 "data_size": 63488 00:19:34.839 }, 00:19:34.839 { 00:19:34.839 "name": "BaseBdev3", 00:19:34.839 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:34.839 "is_configured": true, 00:19:34.839 "data_offset": 2048, 00:19:34.839 "data_size": 63488 00:19:34.839 }, 00:19:34.839 { 00:19:34.839 "name": "BaseBdev4", 00:19:34.839 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:34.839 "is_configured": true, 00:19:34.839 "data_offset": 2048, 00:19:34.839 "data_size": 63488 00:19:34.839 } 00:19:34.839 ] 00:19:34.839 }' 00:19:34.839 11:29:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.839 11:29:52 -- common/autotest_common.sh@10 -- # set +x 00:19:35.096 11:29:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.354 [2024-11-26 11:29:53.366264] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:35.354 [2024-11-26 11:29:53.366335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.354 [2024-11-26 11:29:53.368822] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2db0 00:19:35.354 [2024-11-26 11:29:53.371101] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.354 11:29:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.288 11:29:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.546 11:29:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:36.546 "name": "raid_bdev1", 00:19:36.546 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d", 00:19:36.546 "strip_size_kb": 0, 00:19:36.546 "state": "online", 00:19:36.546 "raid_level": "raid1", 00:19:36.546 "superblock": true, 00:19:36.546 "num_base_bdevs": 4, 00:19:36.546 "num_base_bdevs_discovered": 4, 00:19:36.546 "num_base_bdevs_operational": 4, 00:19:36.546 "process": { 00:19:36.546 "type": "rebuild", 00:19:36.546 "target": "spare", 00:19:36.546 "progress": { 00:19:36.546 "blocks": 24576, 00:19:36.546 "percent": 38 00:19:36.546 } 00:19:36.546 }, 00:19:36.546 "base_bdevs_list": [ 00:19:36.546 { 00:19:36.546 "name": "spare", 00:19:36.546 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872", 00:19:36.546 "is_configured": true, 00:19:36.546 "data_offset": 2048, 00:19:36.546 "data_size": 63488 00:19:36.546 
}, 00:19:36.546 { 00:19:36.546 "name": "BaseBdev2", 00:19:36.546 "uuid": "89b6a3a4-3768-5272-bca7-d61846b24404", 00:19:36.546 "is_configured": true, 00:19:36.546 "data_offset": 2048, 00:19:36.546 "data_size": 63488 00:19:36.546 }, 00:19:36.546 { 00:19:36.546 "name": "BaseBdev3", 00:19:36.546 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:36.546 "is_configured": true, 00:19:36.546 "data_offset": 2048, 00:19:36.546 "data_size": 63488 00:19:36.546 }, 00:19:36.546 { 00:19:36.546 "name": "BaseBdev4", 00:19:36.546 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:36.546 "is_configured": true, 00:19:36.546 "data_offset": 2048, 00:19:36.546 "data_size": 63488 00:19:36.546 } 00:19:36.546 ] 00:19:36.546 }' 00:19:36.546 11:29:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:36.546 11:29:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.546 11:29:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:36.546 11:29:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.546 11:29:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:36.804 [2024-11-26 11:29:54.900308] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.804 [2024-11-26 11:29:54.979575] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:36.804 [2024-11-26 11:29:54.979652] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.804 11:29:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.062 11:29:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.062 "name": "raid_bdev1", 00:19:37.062 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d", 00:19:37.062 "strip_size_kb": 0, 00:19:37.062 "state": "online", 00:19:37.062 "raid_level": "raid1", 00:19:37.062 "superblock": true, 00:19:37.062 "num_base_bdevs": 4, 00:19:37.062 "num_base_bdevs_discovered": 3, 00:19:37.062 "num_base_bdevs_operational": 3, 00:19:37.062 "base_bdevs_list": [ 00:19:37.062 { 00:19:37.062 "name": null, 00:19:37.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.062 "is_configured": false, 00:19:37.062 "data_offset": 2048, 00:19:37.062 "data_size": 63488 00:19:37.062 }, 00:19:37.062 { 00:19:37.062 "name": "BaseBdev2", 00:19:37.062 "uuid": "89b6a3a4-3768-5272-bca7-d61846b24404", 00:19:37.062 "is_configured": true, 00:19:37.062 "data_offset": 2048, 00:19:37.062 "data_size": 63488 00:19:37.062 }, 00:19:37.062 { 00:19:37.062 
"name": "BaseBdev3", 00:19:37.062 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:37.062 "is_configured": true, 00:19:37.062 "data_offset": 2048, 00:19:37.062 "data_size": 63488 00:19:37.062 }, 00:19:37.062 { 00:19:37.062 "name": "BaseBdev4", 00:19:37.062 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:37.062 "is_configured": true, 00:19:37.062 "data_offset": 2048, 00:19:37.062 "data_size": 63488 00:19:37.062 } 00:19:37.062 ] 00:19:37.062 }' 00:19:37.062 11:29:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.062 11:29:55 -- common/autotest_common.sh@10 -- # set +x 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.321 11:29:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.580 11:29:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:37.580 "name": "raid_bdev1", 00:19:37.580 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d", 00:19:37.580 "strip_size_kb": 0, 00:19:37.580 "state": "online", 00:19:37.580 "raid_level": "raid1", 00:19:37.580 "superblock": true, 00:19:37.580 "num_base_bdevs": 4, 00:19:37.580 "num_base_bdevs_discovered": 3, 00:19:37.580 "num_base_bdevs_operational": 3, 00:19:37.580 "base_bdevs_list": [ 00:19:37.580 { 00:19:37.580 "name": null, 00:19:37.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.580 "is_configured": false, 00:19:37.580 "data_offset": 2048, 00:19:37.580 "data_size": 63488 00:19:37.580 }, 00:19:37.580 { 00:19:37.580 "name": "BaseBdev2", 00:19:37.580 "uuid": "89b6a3a4-3768-5272-bca7-d61846b24404", 00:19:37.580 "is_configured": true, 00:19:37.580 "data_offset": 2048, 00:19:37.580 "data_size": 63488 00:19:37.580 }, 00:19:37.580 { 00:19:37.580 "name": "BaseBdev3", 00:19:37.580 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:37.580 "is_configured": true, 00:19:37.580 "data_offset": 2048, 00:19:37.580 "data_size": 63488 00:19:37.580 }, 00:19:37.580 { 00:19:37.580 "name": "BaseBdev4", 00:19:37.580 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:37.580 "is_configured": true, 00:19:37.580 "data_offset": 2048, 00:19:37.580 "data_size": 63488 00:19:37.580 } 00:19:37.580 ] 00:19:37.580 }' 00:19:37.580 11:29:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:37.580 11:29:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:37.580 11:29:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:37.580 11:29:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:37.580 11:29:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:37.838 [2024-11-26 11:29:56.038959] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:37.838 [2024-11-26 11:29:56.039011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.838 [2024-11-26 11:29:56.041410] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2e80 00:19:37.838 [2024-11-26 11:29:56.043639] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.838 11:29:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:39.215 "name": "raid_bdev1", 00:19:39.215 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d", 00:19:39.215 "strip_size_kb": 0, 00:19:39.215 "state": "online", 00:19:39.215 "raid_level": "raid1", 00:19:39.215 "superblock": true, 00:19:39.215 "num_base_bdevs": 4, 00:19:39.215 "num_base_bdevs_discovered": 4, 00:19:39.215 "num_base_bdevs_operational": 4, 00:19:39.215 "process": { 00:19:39.215 "type": "rebuild", 00:19:39.215 "target": "spare", 00:19:39.215 "progress": { 00:19:39.215 "blocks": 24576, 00:19:39.215 "percent": 38 00:19:39.215 } 00:19:39.215 }, 00:19:39.215 "base_bdevs_list": [ 00:19:39.215 { 00:19:39.215 "name": "spare", 00:19:39.215 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872", 00:19:39.215 "is_configured": true, 00:19:39.215 "data_offset": 2048, 00:19:39.215 "data_size": 63488 00:19:39.215 }, 00:19:39.215 { 00:19:39.215 "name": "BaseBdev2", 00:19:39.215 "uuid": "89b6a3a4-3768-5272-bca7-d61846b24404", 00:19:39.215 "is_configured": true, 00:19:39.215 "data_offset": 2048, 00:19:39.215 "data_size": 63488 00:19:39.215 }, 00:19:39.215 { 00:19:39.215 "name": "BaseBdev3", 00:19:39.215 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:39.215 "is_configured": true, 00:19:39.215 "data_offset": 2048, 00:19:39.215 "data_size": 63488 00:19:39.215 }, 00:19:39.215 { 00:19:39.215 "name": "BaseBdev4", 00:19:39.215 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:39.215 "is_configured": true, 00:19:39.215 "data_offset": 2048, 00:19:39.215 "data_size": 63488 00:19:39.215 } 00:19:39.215 ] 00:19:39.215 }' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:39.215 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:39.475 [2024-11-26 11:29:57.536945] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:39.475 [2024-11-26 11:29:57.550753] 
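
Note: the "[: =: unary operator expected" message just above is a genuine shell bug captured by the trace, not a test failure: some variable at bdev_raid.sh line 617 expands to the empty string unquoted, so the test degenerates from '[' <value> = false ']' to '[' = false ']' and [ sees too few operands. The trace does not reveal which variable it is, so the guard below uses a hypothetical name (some_flag) purely to illustrate the standard fix of quoting the expansion:

# unquoted empty expansion: [ $some_flag = false ] -> [ = false ] -> error
# quoting keeps the left operand present even when the variable is empty
if [ "${some_flag:-}" = false ]; then
    echo "flag is false"
fi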
00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:19:39.215 11:29:57 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:19:39.475 [2024-11-26 11:29:57.536945] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:19:39.475 [2024-11-26 11:29:57.550753] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000ca2e80
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:39.475 11:29:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:39.734 11:29:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:39.734 "name": "raid_bdev1",
00:19:39.734 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d",
00:19:39.734 "strip_size_kb": 0,
00:19:39.734 "state": "online",
00:19:39.734 "raid_level": "raid1",
00:19:39.734 "superblock": true,
00:19:39.734 "num_base_bdevs": 4,
00:19:39.734 "num_base_bdevs_discovered": 3,
00:19:39.734 "num_base_bdevs_operational": 3,
00:19:39.734 "process": {
00:19:39.734 "type": "rebuild",
00:19:39.734 "target": "spare",
00:19:39.734 "progress": {
00:19:39.734 "blocks": 36864,
00:19:39.734 "percent": 58
00:19:39.734 }
00:19:39.734 },
00:19:39.734 "base_bdevs_list": [
00:19:39.734 {
00:19:39.734 "name": "spare",
00:19:39.734 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872",
00:19:39.734 "is_configured": true,
00:19:39.734 "data_offset": 2048,
00:19:39.734 "data_size": 63488
00:19:39.734 },
00:19:39.734 {
00:19:39.734 "name": null,
00:19:39.734 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:39.734 "is_configured": false,
00:19:39.734 "data_offset": 2048,
00:19:39.734 "data_size": 63488
00:19:39.735 },
00:19:39.735 {
00:19:39.735 "name": "BaseBdev3",
00:19:39.735 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b",
00:19:39.735 "is_configured": true,
00:19:39.735 "data_offset": 2048,
00:19:39.735 "data_size": 63488
00:19:39.735 },
00:19:39.735 {
00:19:39.735 "name": "BaseBdev4",
00:19:39.735 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4",
00:19:39.735 "is_configured": true,
00:19:39.735 "data_offset": 2048,
00:19:39.735 "data_size": 63488
00:19:39.735 }
00:19:39.735 ]
00:19:39.735 }'
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@657 -- # local timeout=418
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:39.735 11:29:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:39.994 11:29:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:39.994 "name": "raid_bdev1",
00:19:39.994 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d",
00:19:39.994 "strip_size_kb": 0,
00:19:39.994 "state": "online",
00:19:39.994 "raid_level": "raid1",
00:19:39.994 "superblock": true,
00:19:39.994 "num_base_bdevs": 4,
00:19:39.994 "num_base_bdevs_discovered": 3,
00:19:39.994 "num_base_bdevs_operational": 3,
00:19:39.994 "process": {
00:19:39.994 "type": "rebuild",
00:19:39.994 "target": "spare",
00:19:39.994 "progress": {
00:19:39.994 "blocks": 40960,
00:19:39.994 "percent": 64
00:19:39.994 }
00:19:39.994 },
00:19:39.994 "base_bdevs_list": [
00:19:39.994 {
00:19:39.994 "name": "spare",
00:19:39.994 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872",
00:19:39.994 "is_configured": true,
00:19:39.994 "data_offset": 2048,
00:19:39.994 "data_size": 63488
00:19:39.994 },
00:19:39.994 {
00:19:39.994 "name": null,
00:19:39.994 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:39.994 "is_configured": false,
00:19:39.994 "data_offset": 2048,
00:19:39.994 "data_size": 63488
00:19:39.994 },
00:19:39.994 {
00:19:39.994 "name": "BaseBdev3",
00:19:39.994 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b",
00:19:39.994 "is_configured": true,
00:19:39.994 "data_offset": 2048,
00:19:39.994 "data_size": 63488
00:19:39.994 },
00:19:39.994 {
00:19:39.994 "name": "BaseBdev4",
00:19:39.994 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4",
00:19:39.994 "is_configured": true,
00:19:39.994 "data_offset": 2048,
00:19:39.994 "data_size": 63488
00:19:39.994 }
00:19:39.994 ]
00:19:39.994 }'
00:19:39.994 11:29:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:39.994 11:29:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:39.994 11:29:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:39.994 11:29:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:39.994 11:29:58 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:40.928 11:29:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:40.928 [2024-11-26 11:29:59.158474] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:40.928 [2024-11-26 11:29:59.158570] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:40.928 [2024-11-26 11:29:59.158693] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:41.186 "name": "raid_bdev1",
00:19:41.186 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d",
00:19:41.186 "strip_size_kb": 0,
00:19:41.186 "state": "online",
00:19:41.186 "raid_level": "raid1",
00:19:41.186 "superblock": true,
00:19:41.186 "num_base_bdevs": 4,
00:19:41.186 "num_base_bdevs_discovered": 3,
00:19:41.186 "num_base_bdevs_operational": 3,
00:19:41.186 "base_bdevs_list": [
00:19:41.186 {
00:19:41.186 "name": "spare",
00:19:41.186 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872",
00:19:41.186 "is_configured": true,
00:19:41.186 "data_offset": 2048,
00:19:41.186 "data_size": 63488
00:19:41.186 },
00:19:41.186 {
00:19:41.186 "name": null,
00:19:41.186 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:41.186 "is_configured": false,
00:19:41.186 "data_offset": 2048,
00:19:41.186 "data_size": 63488
00:19:41.186 },
00:19:41.186 {
00:19:41.186 "name": "BaseBdev3",
00:19:41.186 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b",
00:19:41.186 "is_configured": true,
00:19:41.186 "data_offset": 2048,
00:19:41.186 "data_size": 63488
00:19:41.186 },
00:19:41.186 {
00:19:41.186 "name": "BaseBdev4",
00:19:41.186 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4",
00:19:41.186 "is_configured": true,
00:19:41.186 "data_offset": 2048,
00:19:41.186 "data_size": 63488
00:19:41.186 }
00:19:41.186 ]
00:19:41.186 }'
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@660 -- # break
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@185 -- # local target=none
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:41.186 11:29:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:41.444 "name": "raid_bdev1",
00:19:41.444 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d",
00:19:41.444 "strip_size_kb": 0,
00:19:41.444 "state": "online",
00:19:41.444 "raid_level": "raid1",
00:19:41.444 "superblock": true,
00:19:41.444 "num_base_bdevs": 4,
00:19:41.444 "num_base_bdevs_discovered": 3,
00:19:41.444 "num_base_bdevs_operational": 3,
00:19:41.444 "base_bdevs_list": [
00:19:41.444 {
00:19:41.444 "name": "spare",
00:19:41.444 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872",
00:19:41.444 "is_configured": true,
00:19:41.444 "data_offset": 2048,
00:19:41.444 "data_size": 63488
00:19:41.444 },
00:19:41.444 {
00:19:41.444 "name": null,
00:19:41.444 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:41.444 "is_configured": false,
00:19:41.444 "data_offset": 2048,
00:19:41.444 "data_size": 63488
00:19:41.444 },
00:19:41.444 {
00:19:41.444 "name": "BaseBdev3",
00:19:41.444 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b",
00:19:41.444 "is_configured": true,
00:19:41.444 "data_offset": 2048,
00:19:41.444 "data_size": 63488
00:19:41.444 },
00:19:41.444 {
00:19:41.444 "name": "BaseBdev4",
00:19:41.444 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4",
00:19:41.444 "is_configured": true,
00:19:41.444 "data_offset": 2048,
00:19:41.444 "data_size": 63488
00:19:41.444 }
00:19:41.444 ]
00:19:41.444 }'
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:41.444 11:29:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:41.702 11:29:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:41.702 "name": "raid_bdev1",
00:19:41.702 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d",
00:19:41.702 "strip_size_kb": 0,
00:19:41.702 "state": "online",
00:19:41.702 "raid_level": "raid1",
00:19:41.702 "superblock": true,
00:19:41.702 "num_base_bdevs": 4,
00:19:41.702 "num_base_bdevs_discovered": 3,
00:19:41.702 "num_base_bdevs_operational": 3,
00:19:41.702 "base_bdevs_list": [
00:19:41.702 {
00:19:41.702 "name": "spare",
00:19:41.702 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872",
00:19:41.702 "is_configured": true,
00:19:41.702 "data_offset": 2048,
00:19:41.702 "data_size": 63488
00:19:41.702 },
00:19:41.702 {
00:19:41.702 "name": null,
00:19:41.702 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:41.702 "is_configured": false,
00:19:41.702 "data_offset": 2048,
00:19:41.702 "data_size": 63488
00:19:41.702 },
00:19:41.702 {
00:19:41.702 "name": "BaseBdev3",
00:19:41.702 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b",
00:19:41.702 "is_configured": true,
00:19:41.702 "data_offset": 2048,
00:19:41.702 "data_size": 63488
00:19:41.702 },
00:19:41.702 {
00:19:41.702 "name": "BaseBdev4",
00:19:41.702 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4",
00:19:41.702 "is_configured": true,
00:19:41.702 "data_offset": 2048,
00:19:41.702 "data_size": 63488
00:19:41.702 }
00:19:41.702 ]
00:19:41.702 }'
00:19:41.702 11:29:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:41.702 11:29:59 -- common/autotest_common.sh@10 -- # set +x
00:19:42.266 11:30:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:42.266 [2024-11-26 11:30:00.462716] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:42.266 [2024-11-26 11:30:00.462757] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:42.266 [2024-11-26 11:30:00.462857] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:42.266 [2024-11-26 11:30:00.462983] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:42.266 [2024-11-26 11:30:00.463003] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline
00:19:42.266 11:30:00 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:42.266 11:30:00 -- bdev/bdev_raid.sh@671 -- # jq length
00:19:42.524 11:30:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:19:42.524 11:30:00 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:19:42.524 11:30:00 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@12 -- # local i
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:42.524 11:30:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:19:42.782 /dev/nbd0
00:19:42.782 11:30:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:42.782 11:30:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:42.782 11:30:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:42.782 11:30:00 -- common/autotest_common.sh@867 -- # local i
00:19:42.782 11:30:00 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:42.782 11:30:00 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:42.782 11:30:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:42.782 11:30:01 -- common/autotest_common.sh@871 -- # break
00:19:42.782 11:30:01 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:42.782 11:30:01 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:42.782 11:30:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:42.782 1+0 records in
00:19:42.782 1+0 records out
00:19:42.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278819 s, 14.7 MB/s
00:19:42.782 11:30:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:42.782 11:30:01 -- common/autotest_common.sh@884 -- # size=4096
00:19:42.782 11:30:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:42.782 11:30:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:42.782 11:30:01 -- common/autotest_common.sh@887 -- # return 0
00:19:42.782 11:30:01 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:42.782 11:30:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:42.782 11:30:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:19:43.040 /dev/nbd1
00:19:43.040 11:30:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:43.299 11:30:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:19:43.299 11:30:01 -- common/autotest_common.sh@867 -- # local i
00:19:43.299 11:30:01 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:43.299 11:30:01 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:43.299 11:30:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:19:43.299 11:30:01 -- common/autotest_common.sh@871 -- # break
00:19:43.299 11:30:01 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:43.299 11:30:01 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:43.299 11:30:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:43.299 1+0 records in
00:19:43.299 1+0 records out
00:19:43.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313084 s, 13.1 MB/s
00:19:43.299 11:30:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:43.299 11:30:01 -- common/autotest_common.sh@884 -- # size=4096
00:19:43.299 11:30:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:43.299 11:30:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:43.299 11:30:01 -- common/autotest_common.sh@887 -- # return 0
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:43.299 11:30:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:19:43.299 11:30:01 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@51 -- # local i
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:43.299 11:30:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@41 -- # break
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@45 -- # return 0
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:43.557 11:30:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@41 -- # break
00:19:43.815 11:30:01 -- bdev/nbd_common.sh@45 -- # return 0
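
Note: with the raid bdev deleted, the test exports the replaced member (BaseBdev1) and the rebuilt spare directly over NBD and byte-compares them. Raid1 members are mirrors, so their data regions must match; only the first data_offset region of 2048 blocks x 512 bytes = 1048576 bytes holds per-member superblock metadata that legitimately differs, which is exactly what cmp -i 1048576 skips. A sketch of deriving that offset instead of hard-coding it (this would have to run while the raid bdev still exists, since data_offset comes from its JSON):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# data_offset is reported in blocks; multiply by the 512-byte block size
off_blocks=$($RPC bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].data_offset')
cmp -i $((off_blocks * 512)) /dev/nbd0 /dev/nbd1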
00:19:43.815 11:30:01 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:19:43.815 11:30:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:43.815 11:30:01 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:19:43.815 11:30:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:19:44.074 11:30:02 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:44.332 [2024-11-26 11:30:02.387000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:44.333 [2024-11-26 11:30:02.387079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:44.333 [2024-11-26 11:30:02.387117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480
00:19:44.333 [2024-11-26 11:30:02.387132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:44.333 [2024-11-26 11:30:02.389790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:44.333 [2024-11-26 11:30:02.389835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:44.333 [2024-11-26 11:30:02.389947] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:19:44.333 [2024-11-26 11:30:02.390001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:44.333 BaseBdev1
00:19:44.333 11:30:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:44.333 11:30:02 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']'
00:19:44.333 11:30:02 -- bdev/bdev_raid.sh@696 -- # continue
00:19:44.333 11:30:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:44.333 11:30:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:19:44.333 11:30:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:19:44.593 11:30:02 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:19:44.852 [2024-11-26 11:30:02.835097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:19:44.852 [2024-11-26 11:30:02.835387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:44.852 [2024-11-26 11:30:02.835470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80
00:19:44.852 [2024-11-26 11:30:02.835597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:44.852 [2024-11-26 11:30:02.836164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:44.852 [2024-11-26 11:30:02.836325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:19:44.852 [2024-11-26 11:30:02.836441] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:19:44.852 [2024-11-26 11:30:02.836460] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1)
00:19:44.852 [2024-11-26 11:30:02.836475] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:44.852 [2024-11-26 11:30:02.836511] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring
00:19:44.852 [2024-11-26 11:30:02.836568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:44.852 BaseBdev3
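
Note: this phase is what the superblock (-s) variant actually exercises. After bdev_raid_delete, tearing down and re-creating each passthru bdev makes the raid module re-examine the underlying bdev, find the on-disk superblock, and re-assemble raid_bdev1 without any bdev_raid_create call. The seq_number message above also shows stale metadata being superseded: BaseBdev3 carries superblock generation 4 versus the half-assembled raid bdev's generation 1, so the old instance is dropped and re-created around the newer metadata. A sketch of the re-examine trigger, using the same RPCs as the trace:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# deleting and re-creating the passthru layer re-runs bdev examine,
# which reads the raid superblock persisted on the underlying bdev
$RPC bdev_passthru_delete BaseBdev3
$RPC bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
# raid_bdev1 should reappear without an explicit create call
$RPC bdev_raid_get_bdevs all | jq -r '.[].name'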
BaseBdev4 ']' 00:19:44.852 11:30:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:19:45.110 11:30:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:45.110 [2024-11-26 11:30:03.307213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:45.110 [2024-11-26 11:30:03.307354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.110 [2024-11-26 11:30:03.307410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:19:45.110 [2024-11-26 11:30:03.307426] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.110 [2024-11-26 11:30:03.307925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.110 [2024-11-26 11:30:03.307970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:45.110 [2024-11-26 11:30:03.308068] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:19:45.110 [2024-11-26 11:30:03.308103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:45.110 BaseBdev4 00:19:45.110 11:30:03 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:45.369 11:30:03 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:45.628 [2024-11-26 11:30:03.731276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:45.628 [2024-11-26 11:30:03.731387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.628 [2024-11-26 11:30:03.731436] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:19:45.628 [2024-11-26 11:30:03.731466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.628 [2024-11-26 11:30:03.731976] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.628 [2024-11-26 11:30:03.732004] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:45.628 [2024-11-26 11:30:03.732106] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:45.628 [2024-11-26 11:30:03.732141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:45.628 spare 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.628 11:30:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.628 [2024-11-26 11:30:03.832298] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:19:45.628 [2024-11-26 11:30:03.832349] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:45.628 [2024-11-26 11:30:03.832484] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1530 00:19:45.628 [2024-11-26 11:30:03.832971] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:19:45.628 [2024-11-26 11:30:03.832989] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:19:45.628 [2024-11-26 11:30:03.833145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.886 11:30:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.886 "name": "raid_bdev1", 00:19:45.886 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d", 00:19:45.886 "strip_size_kb": 0, 00:19:45.886 "state": "online", 00:19:45.886 "raid_level": "raid1", 00:19:45.886 "superblock": true, 00:19:45.886 "num_base_bdevs": 4, 00:19:45.886 "num_base_bdevs_discovered": 3, 00:19:45.886 "num_base_bdevs_operational": 3, 00:19:45.886 "base_bdevs_list": [ 00:19:45.886 { 00:19:45.886 "name": "spare", 00:19:45.886 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872", 00:19:45.886 "is_configured": true, 00:19:45.886 "data_offset": 2048, 00:19:45.886 "data_size": 63488 00:19:45.886 }, 00:19:45.886 { 00:19:45.886 "name": null, 00:19:45.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.886 "is_configured": false, 00:19:45.886 "data_offset": 2048, 00:19:45.886 "data_size": 63488 00:19:45.886 }, 00:19:45.886 { 00:19:45.886 "name": "BaseBdev3", 00:19:45.886 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:45.886 "is_configured": true, 00:19:45.886 "data_offset": 2048, 00:19:45.886 "data_size": 63488 00:19:45.886 }, 00:19:45.886 { 00:19:45.886 "name": "BaseBdev4", 00:19:45.886 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:45.886 "is_configured": true, 00:19:45.886 "data_offset": 2048, 00:19:45.886 "data_size": 63488 00:19:45.886 } 00:19:45.886 ] 00:19:45.886 }' 00:19:45.886 11:30:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.886 11:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.144 11:30:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:46.403 "name": "raid_bdev1", 00:19:46.403 "uuid": "d48cbfe3-392d-4621-b1b9-574856f1818d", 00:19:46.403 "strip_size_kb": 0, 00:19:46.403 "state": "online", 00:19:46.403 "raid_level": "raid1", 00:19:46.403 "superblock": true, 00:19:46.403 "num_base_bdevs": 4, 00:19:46.403 "num_base_bdevs_discovered": 3, 00:19:46.403 
"num_base_bdevs_operational": 3, 00:19:46.403 "base_bdevs_list": [ 00:19:46.403 { 00:19:46.403 "name": "spare", 00:19:46.403 "uuid": "cf8bb938-41b4-542b-8226-20d3c6b4d872", 00:19:46.403 "is_configured": true, 00:19:46.403 "data_offset": 2048, 00:19:46.403 "data_size": 63488 00:19:46.403 }, 00:19:46.403 { 00:19:46.403 "name": null, 00:19:46.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.403 "is_configured": false, 00:19:46.403 "data_offset": 2048, 00:19:46.403 "data_size": 63488 00:19:46.403 }, 00:19:46.403 { 00:19:46.403 "name": "BaseBdev3", 00:19:46.403 "uuid": "9b2e0335-d066-5ed0-9459-af65bfb34b2b", 00:19:46.403 "is_configured": true, 00:19:46.403 "data_offset": 2048, 00:19:46.403 "data_size": 63488 00:19:46.403 }, 00:19:46.403 { 00:19:46.403 "name": "BaseBdev4", 00:19:46.403 "uuid": "93a08c31-a035-5893-a524-aa9fc7314ea4", 00:19:46.403 "is_configured": true, 00:19:46.403 "data_offset": 2048, 00:19:46.403 "data_size": 63488 00:19:46.403 } 00:19:46.403 ] 00:19:46.403 }' 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.403 11:30:04 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:46.661 11:30:04 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.661 11:30:04 -- bdev/bdev_raid.sh@709 -- # killprocess 90549 00:19:46.662 11:30:04 -- common/autotest_common.sh@936 -- # '[' -z 90549 ']' 00:19:46.662 11:30:04 -- common/autotest_common.sh@940 -- # kill -0 90549 00:19:46.662 11:30:04 -- common/autotest_common.sh@941 -- # uname 00:19:46.662 11:30:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.662 11:30:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90549 00:19:46.662 killing process with pid 90549 00:19:46.662 Received shutdown signal, test time was about 60.000000 seconds 00:19:46.662 00:19:46.662 Latency(us) 00:19:46.662 [2024-11-26T11:30:04.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.662 [2024-11-26T11:30:04.892Z] =================================================================================================================== 00:19:46.662 [2024-11-26T11:30:04.892Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.662 11:30:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.662 11:30:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.662 11:30:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90549' 00:19:46.662 11:30:04 -- common/autotest_common.sh@955 -- # kill 90549 00:19:46.662 [2024-11-26 11:30:04.768409] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:46.662 11:30:04 -- common/autotest_common.sh@960 -- # wait 90549 00:19:46.662 [2024-11-26 11:30:04.768497] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:46.662 [2024-11-26 11:30:04.768586] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:46.662 [2024-11-26 11:30:04.768605] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state 
offline 00:19:46.662 [2024-11-26 11:30:04.800459] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.920 11:30:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:46.920 00:19:46.920 real 0m25.284s 00:19:46.920 user 0m34.920s 00:19:46.920 sys 0m4.402s 00:19:46.920 ************************************ 00:19:46.920 END TEST raid_rebuild_test_sb 00:19:46.920 ************************************ 00:19:46.920 11:30:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:46.920 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:19:46.920 11:30:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:46.920 11:30:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.920 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:46.920 ************************************ 00:19:46.920 START TEST raid_rebuild_test_io 00:19:46.920 ************************************ 00:19:46.920 11:30:05 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@544 -- # raid_pid=91150 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@545 -- # waitforlisten 91150 /var/tmp/spdk-raid.sock 00:19:46.920 11:30:05 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:46.920 11:30:05 -- 
common/autotest_common.sh@829 -- # '[' -z 91150 ']' 00:19:46.920 11:30:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:46.920 11:30:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:46.920 11:30:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:46.920 11:30:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.920 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:46.920 [2024-11-26 11:30:05.098914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:46.920 [2024-11-26 11:30:05.099138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91150 ] 00:19:46.920 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.920 Zero copy mechanism will not be used. 00:19:47.178 [2024-11-26 11:30:05.254105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.178 [2024-11-26 11:30:05.288794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.178 [2024-11-26 11:30:05.322053] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.112 11:30:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.112 11:30:06 -- common/autotest_common.sh@862 -- # return 0 00:19:48.112 11:30:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:48.112 11:30:06 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:48.112 11:30:06 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:48.112 BaseBdev1 00:19:48.112 11:30:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:48.112 11:30:06 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:48.112 11:30:06 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:48.370 BaseBdev2 00:19:48.370 11:30:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:48.370 11:30:06 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:48.370 11:30:06 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:48.628 BaseBdev3 00:19:48.628 11:30:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:48.628 11:30:06 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:48.628 11:30:06 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:48.886 BaseBdev4 00:19:48.887 11:30:07 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:49.145 spare_malloc 00:19:49.145 11:30:07 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:49.410 spare_delay 00:19:49.410 11:30:07 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:19:49.410 [2024-11-26 11:30:07.621022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.410 [2024-11-26 11:30:07.621100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.410 [2024-11-26 11:30:07.621146] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:19:49.410 [2024-11-26 11:30:07.621166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.410 [2024-11-26 11:30:07.623986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.410 [2024-11-26 11:30:07.624036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.410 spare 00:19:49.410 11:30:07 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:49.709 [2024-11-26 11:30:07.825098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.709 [2024-11-26 11:30:07.827252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.709 [2024-11-26 11:30:07.827328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.709 [2024-11-26 11:30:07.827390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:49.709 [2024-11-26 11:30:07.827520] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:19:49.709 [2024-11-26 11:30:07.827553] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:49.709 [2024-11-26 11:30:07.827680] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:49.709 [2024-11-26 11:30:07.828154] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:19:49.709 [2024-11-26 11:30:07.828198] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:19:49.709 [2024-11-26 11:30:07.828388] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.709 11:30:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.970 11:30:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.970 "name": "raid_bdev1", 00:19:49.970 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:49.970 "strip_size_kb": 0, 00:19:49.970 "state": "online", 
00:19:49.970 "raid_level": "raid1", 00:19:49.970 "superblock": false, 00:19:49.970 "num_base_bdevs": 4, 00:19:49.970 "num_base_bdevs_discovered": 4, 00:19:49.970 "num_base_bdevs_operational": 4, 00:19:49.970 "base_bdevs_list": [ 00:19:49.970 { 00:19:49.970 "name": "BaseBdev1", 00:19:49.970 "uuid": "b263ce90-f7cf-4563-b8d8-b1e79b8c2ae8", 00:19:49.970 "is_configured": true, 00:19:49.970 "data_offset": 0, 00:19:49.970 "data_size": 65536 00:19:49.970 }, 00:19:49.970 { 00:19:49.970 "name": "BaseBdev2", 00:19:49.970 "uuid": "0b444baa-89e7-402a-89c5-53a76cc94d35", 00:19:49.970 "is_configured": true, 00:19:49.970 "data_offset": 0, 00:19:49.970 "data_size": 65536 00:19:49.970 }, 00:19:49.970 { 00:19:49.970 "name": "BaseBdev3", 00:19:49.970 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:49.970 "is_configured": true, 00:19:49.970 "data_offset": 0, 00:19:49.970 "data_size": 65536 00:19:49.970 }, 00:19:49.970 { 00:19:49.970 "name": "BaseBdev4", 00:19:49.970 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:49.970 "is_configured": true, 00:19:49.970 "data_offset": 0, 00:19:49.970 "data_size": 65536 00:19:49.970 } 00:19:49.970 ] 00:19:49.970 }' 00:19:49.970 11:30:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.970 11:30:08 -- common/autotest_common.sh@10 -- # set +x 00:19:50.228 11:30:08 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:50.228 11:30:08 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:50.487 [2024-11-26 11:30:08.573558] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.487 11:30:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:50.487 11:30:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:50.487 11:30:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.746 11:30:08 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:50.746 11:30:08 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:50.746 11:30:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:50.746 11:30:08 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:50.746 [2024-11-26 11:30:08.946942] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:50.746 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:50.746 Zero copy mechanism will not be used. 00:19:50.746 Running I/O for 60 seconds... 
00:19:51.005 [2024-11-26 11:30:09.061142] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.005 [2024-11-26 11:30:09.061369] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.005 11:30:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.264 11:30:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.264 "name": "raid_bdev1", 00:19:51.264 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:51.264 "strip_size_kb": 0, 00:19:51.264 "state": "online", 00:19:51.264 "raid_level": "raid1", 00:19:51.264 "superblock": false, 00:19:51.264 "num_base_bdevs": 4, 00:19:51.264 "num_base_bdevs_discovered": 3, 00:19:51.264 "num_base_bdevs_operational": 3, 00:19:51.264 "base_bdevs_list": [ 00:19:51.264 { 00:19:51.264 "name": null, 00:19:51.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.264 "is_configured": false, 00:19:51.264 "data_offset": 0, 00:19:51.264 "data_size": 65536 00:19:51.264 }, 00:19:51.264 { 00:19:51.264 "name": "BaseBdev2", 00:19:51.264 "uuid": "0b444baa-89e7-402a-89c5-53a76cc94d35", 00:19:51.264 "is_configured": true, 00:19:51.264 "data_offset": 0, 00:19:51.264 "data_size": 65536 00:19:51.264 }, 00:19:51.264 { 00:19:51.264 "name": "BaseBdev3", 00:19:51.264 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:51.264 "is_configured": true, 00:19:51.264 "data_offset": 0, 00:19:51.264 "data_size": 65536 00:19:51.264 }, 00:19:51.264 { 00:19:51.264 "name": "BaseBdev4", 00:19:51.264 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:51.264 "is_configured": true, 00:19:51.264 "data_offset": 0, 00:19:51.264 "data_size": 65536 00:19:51.264 } 00:19:51.264 ] 00:19:51.264 }' 00:19:51.264 11:30:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.264 11:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:51.522 11:30:09 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.780 [2024-11-26 11:30:09.893023] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:51.780 [2024-11-26 11:30:09.893078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.780 11:30:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:51.780 [2024-11-26 11:30:09.959732] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:19:51.780 [2024-11-26 11:30:09.962150] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.038 [2024-11-26 
11:30:10.079155] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:52.038 [2024-11-26 11:30:10.080119] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:52.297 [2024-11-26 11:30:10.314313] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:52.297 [2024-11-26 11:30:10.314566] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:52.555 [2024-11-26 11:30:10.568075] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:52.556 [2024-11-26 11:30:10.778557] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.556 [2024-11-26 11:30:10.779107] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.815 11:30:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.074 [2024-11-26 11:30:11.147811] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:53.074 11:30:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:53.074 "name": "raid_bdev1", 00:19:53.074 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:53.074 "strip_size_kb": 0, 00:19:53.074 "state": "online", 00:19:53.074 "raid_level": "raid1", 00:19:53.074 "superblock": false, 00:19:53.074 "num_base_bdevs": 4, 00:19:53.074 "num_base_bdevs_discovered": 4, 00:19:53.074 "num_base_bdevs_operational": 4, 00:19:53.074 "process": { 00:19:53.074 "type": "rebuild", 00:19:53.074 "target": "spare", 00:19:53.074 "progress": { 00:19:53.074 "blocks": 14336, 00:19:53.074 "percent": 21 00:19:53.074 } 00:19:53.074 }, 00:19:53.074 "base_bdevs_list": [ 00:19:53.074 { 00:19:53.074 "name": "spare", 00:19:53.074 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:53.074 "is_configured": true, 00:19:53.074 "data_offset": 0, 00:19:53.074 "data_size": 65536 00:19:53.074 }, 00:19:53.074 { 00:19:53.074 "name": "BaseBdev2", 00:19:53.074 "uuid": "0b444baa-89e7-402a-89c5-53a76cc94d35", 00:19:53.074 "is_configured": true, 00:19:53.074 "data_offset": 0, 00:19:53.074 "data_size": 65536 00:19:53.074 }, 00:19:53.074 { 00:19:53.074 "name": "BaseBdev3", 00:19:53.074 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:53.074 "is_configured": true, 00:19:53.074 "data_offset": 0, 00:19:53.074 "data_size": 65536 00:19:53.074 }, 00:19:53.074 { 00:19:53.074 "name": "BaseBdev4", 00:19:53.074 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:53.074 "is_configured": true, 00:19:53.074 "data_offset": 0, 00:19:53.074 "data_size": 65536 00:19:53.074 } 00:19:53.074 ] 00:19:53.074 }' 00:19:53.074 11:30:11 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:19:53.074 11:30:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.074 11:30:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.074 11:30:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.074 11:30:11 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:53.332 [2024-11-26 11:30:11.376847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:53.332 [2024-11-26 11:30:11.377131] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:53.332 [2024-11-26 11:30:11.452485] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.332 [2024-11-26 11:30:11.495781] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:53.591 [2024-11-26 11:30:11.612005] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:53.591 [2024-11-26 11:30:11.629744] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.591 [2024-11-26 11:30:11.655960] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.591 11:30:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.850 11:30:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.850 "name": "raid_bdev1", 00:19:53.850 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:53.850 "strip_size_kb": 0, 00:19:53.850 "state": "online", 00:19:53.850 "raid_level": "raid1", 00:19:53.850 "superblock": false, 00:19:53.850 "num_base_bdevs": 4, 00:19:53.850 "num_base_bdevs_discovered": 3, 00:19:53.850 "num_base_bdevs_operational": 3, 00:19:53.850 "base_bdevs_list": [ 00:19:53.850 { 00:19:53.850 "name": null, 00:19:53.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.850 "is_configured": false, 00:19:53.850 "data_offset": 0, 00:19:53.850 "data_size": 65536 00:19:53.850 }, 00:19:53.850 { 00:19:53.850 "name": "BaseBdev2", 00:19:53.850 "uuid": "0b444baa-89e7-402a-89c5-53a76cc94d35", 00:19:53.850 "is_configured": true, 00:19:53.850 "data_offset": 0, 00:19:53.850 "data_size": 65536 00:19:53.850 }, 00:19:53.850 { 00:19:53.850 "name": "BaseBdev3", 00:19:53.850 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:53.850 "is_configured": true, 00:19:53.850 "data_offset": 0, 
00:19:53.850 "data_size": 65536 00:19:53.850 }, 00:19:53.850 { 00:19:53.850 "name": "BaseBdev4", 00:19:53.850 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:53.850 "is_configured": true, 00:19:53.850 "data_offset": 0, 00:19:53.850 "data_size": 65536 00:19:53.850 } 00:19:53.850 ] 00:19:53.850 }' 00:19:53.850 11:30:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.850 11:30:11 -- common/autotest_common.sh@10 -- # set +x 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.108 11:30:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.367 11:30:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:54.367 "name": "raid_bdev1", 00:19:54.367 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:54.367 "strip_size_kb": 0, 00:19:54.367 "state": "online", 00:19:54.367 "raid_level": "raid1", 00:19:54.367 "superblock": false, 00:19:54.367 "num_base_bdevs": 4, 00:19:54.367 "num_base_bdevs_discovered": 3, 00:19:54.367 "num_base_bdevs_operational": 3, 00:19:54.367 "base_bdevs_list": [ 00:19:54.367 { 00:19:54.367 "name": null, 00:19:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.367 "is_configured": false, 00:19:54.367 "data_offset": 0, 00:19:54.367 "data_size": 65536 00:19:54.367 }, 00:19:54.367 { 00:19:54.367 "name": "BaseBdev2", 00:19:54.367 "uuid": "0b444baa-89e7-402a-89c5-53a76cc94d35", 00:19:54.367 "is_configured": true, 00:19:54.367 "data_offset": 0, 00:19:54.367 "data_size": 65536 00:19:54.367 }, 00:19:54.367 { 00:19:54.367 "name": "BaseBdev3", 00:19:54.367 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:54.367 "is_configured": true, 00:19:54.367 "data_offset": 0, 00:19:54.367 "data_size": 65536 00:19:54.367 }, 00:19:54.367 { 00:19:54.367 "name": "BaseBdev4", 00:19:54.367 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:54.367 "is_configured": true, 00:19:54.367 "data_offset": 0, 00:19:54.367 "data_size": 65536 00:19:54.367 } 00:19:54.367 ] 00:19:54.367 }' 00:19:54.367 11:30:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:54.367 11:30:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:54.367 11:30:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.367 11:30:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:54.367 11:30:12 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:54.626 [2024-11-26 11:30:12.715821] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:54.626 [2024-11-26 11:30:12.716011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.626 11:30:12 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:54.626 [2024-11-26 11:30:12.766077] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:19:54.626 [2024-11-26 11:30:12.768456] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.885 [2024-11-26 
11:30:12.903368] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:55.144 [2024-11-26 11:30:13.132539] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:55.144 [2024-11-26 11:30:13.132802] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:55.403 [2024-11-26 11:30:13.482117] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:55.403 [2024-11-26 11:30:13.635377] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.663 11:30:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.923 [2024-11-26 11:30:14.002905] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:55.923 11:30:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.923 "name": "raid_bdev1", 00:19:55.923 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:55.923 "strip_size_kb": 0, 00:19:55.923 "state": "online", 00:19:55.923 "raid_level": "raid1", 00:19:55.923 "superblock": false, 00:19:55.923 "num_base_bdevs": 4, 00:19:55.923 "num_base_bdevs_discovered": 4, 00:19:55.923 "num_base_bdevs_operational": 4, 00:19:55.923 "process": { 00:19:55.923 "type": "rebuild", 00:19:55.923 "target": "spare", 00:19:55.923 "progress": { 00:19:55.923 "blocks": 12288, 00:19:55.923 "percent": 18 00:19:55.923 } 00:19:55.923 }, 00:19:55.923 "base_bdevs_list": [ 00:19:55.923 { 00:19:55.923 "name": "spare", 00:19:55.923 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:55.923 "is_configured": true, 00:19:55.923 "data_offset": 0, 00:19:55.923 "data_size": 65536 00:19:55.923 }, 00:19:55.923 { 00:19:55.923 "name": "BaseBdev2", 00:19:55.923 "uuid": "0b444baa-89e7-402a-89c5-53a76cc94d35", 00:19:55.923 "is_configured": true, 00:19:55.923 "data_offset": 0, 00:19:55.924 "data_size": 65536 00:19:55.924 }, 00:19:55.924 { 00:19:55.924 "name": "BaseBdev3", 00:19:55.924 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:55.924 "is_configured": true, 00:19:55.924 "data_offset": 0, 00:19:55.924 "data_size": 65536 00:19:55.924 }, 00:19:55.924 { 00:19:55.924 "name": "BaseBdev4", 00:19:55.924 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:55.924 "is_configured": true, 00:19:55.924 "data_offset": 0, 00:19:55.924 "data_size": 65536 00:19:55.924 } 00:19:55.924 ] 00:19:55.924 }' 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.924 11:30:14 -- 
bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:19:55.924 11:30:14 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:55.924 [2024-11-26 11:30:14.141557] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:56.184 [2024-11-26 11:30:14.290125] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:56.444 [2024-11-26 11:30:14.466976] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:19:56.444 [2024-11-26 11:30:14.467072] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.444 11:30:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.705 "name": "raid_bdev1", 00:19:56.705 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:56.705 "strip_size_kb": 0, 00:19:56.705 "state": "online", 00:19:56.705 "raid_level": "raid1", 00:19:56.705 "superblock": false, 00:19:56.705 "num_base_bdevs": 4, 00:19:56.705 "num_base_bdevs_discovered": 3, 00:19:56.705 "num_base_bdevs_operational": 3, 00:19:56.705 "process": { 00:19:56.705 "type": "rebuild", 00:19:56.705 "target": "spare", 00:19:56.705 "progress": { 00:19:56.705 "blocks": 24576, 00:19:56.705 "percent": 37 00:19:56.705 } 00:19:56.705 }, 00:19:56.705 "base_bdevs_list": [ 00:19:56.705 { 00:19:56.705 "name": "spare", 00:19:56.705 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:56.705 "is_configured": true, 00:19:56.705 "data_offset": 0, 00:19:56.705 "data_size": 65536 00:19:56.705 }, 00:19:56.705 { 00:19:56.705 "name": null, 00:19:56.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.705 "is_configured": false, 00:19:56.705 "data_offset": 0, 00:19:56.705 "data_size": 65536 00:19:56.705 }, 00:19:56.705 { 00:19:56.705 "name": "BaseBdev3", 00:19:56.705 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:56.705 "is_configured": true, 00:19:56.705 "data_offset": 0, 00:19:56.705 "data_size": 65536 00:19:56.705 }, 00:19:56.705 { 00:19:56.705 "name": "BaseBdev4", 00:19:56.705 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:56.705 "is_configured": true, 00:19:56.705 "data_offset": 0, 00:19:56.705 "data_size": 65536 00:19:56.705 } 00:19:56.705 ] 00:19:56.705 }' 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.705 
11:30:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@657 -- # local timeout=435 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.705 11:30:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.965 11:30:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.965 "name": "raid_bdev1", 00:19:56.965 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:56.965 "strip_size_kb": 0, 00:19:56.965 "state": "online", 00:19:56.965 "raid_level": "raid1", 00:19:56.965 "superblock": false, 00:19:56.965 "num_base_bdevs": 4, 00:19:56.965 "num_base_bdevs_discovered": 3, 00:19:56.965 "num_base_bdevs_operational": 3, 00:19:56.965 "process": { 00:19:56.965 "type": "rebuild", 00:19:56.965 "target": "spare", 00:19:56.965 "progress": { 00:19:56.965 "blocks": 28672, 00:19:56.965 "percent": 43 00:19:56.965 } 00:19:56.965 }, 00:19:56.965 "base_bdevs_list": [ 00:19:56.965 { 00:19:56.965 "name": "spare", 00:19:56.965 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:56.965 "is_configured": true, 00:19:56.965 "data_offset": 0, 00:19:56.965 "data_size": 65536 00:19:56.965 }, 00:19:56.966 { 00:19:56.966 "name": null, 00:19:56.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.966 "is_configured": false, 00:19:56.966 "data_offset": 0, 00:19:56.966 "data_size": 65536 00:19:56.966 }, 00:19:56.966 { 00:19:56.966 "name": "BaseBdev3", 00:19:56.966 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:56.966 "is_configured": true, 00:19:56.966 "data_offset": 0, 00:19:56.966 "data_size": 65536 00:19:56.966 }, 00:19:56.966 { 00:19:56.966 "name": "BaseBdev4", 00:19:56.966 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:56.966 "is_configured": true, 00:19:56.966 "data_offset": 0, 00:19:56.966 "data_size": 65536 00:19:56.966 } 00:19:56.966 ] 00:19:56.966 }' 00:19:56.966 11:30:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.966 11:30:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.966 11:30:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.966 11:30:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.966 11:30:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:56.966 [2024-11-26 11:30:15.195740] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:56.966 [2024-11-26 11:30:15.204010] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:57.225 [2024-11-26 11:30:15.430099] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:57.225 [2024-11-26 11:30:15.430534] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:58.163 "name": "raid_bdev1", 00:19:58.163 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:58.163 "strip_size_kb": 0, 00:19:58.163 "state": "online", 00:19:58.163 "raid_level": "raid1", 00:19:58.163 "superblock": false, 00:19:58.163 "num_base_bdevs": 4, 00:19:58.163 "num_base_bdevs_discovered": 3, 00:19:58.163 "num_base_bdevs_operational": 3, 00:19:58.163 "process": { 00:19:58.163 "type": "rebuild", 00:19:58.163 "target": "spare", 00:19:58.163 "progress": { 00:19:58.163 "blocks": 47104, 00:19:58.163 "percent": 71 00:19:58.163 } 00:19:58.163 }, 00:19:58.163 "base_bdevs_list": [ 00:19:58.163 { 00:19:58.163 "name": "spare", 00:19:58.163 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:58.163 "is_configured": true, 00:19:58.163 "data_offset": 0, 00:19:58.163 "data_size": 65536 00:19:58.163 }, 00:19:58.163 { 00:19:58.163 "name": null, 00:19:58.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.163 "is_configured": false, 00:19:58.163 "data_offset": 0, 00:19:58.163 "data_size": 65536 00:19:58.163 }, 00:19:58.163 { 00:19:58.163 "name": "BaseBdev3", 00:19:58.163 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:58.163 "is_configured": true, 00:19:58.163 "data_offset": 0, 00:19:58.163 "data_size": 65536 00:19:58.163 }, 00:19:58.163 { 00:19:58.163 "name": "BaseBdev4", 00:19:58.163 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:58.163 "is_configured": true, 00:19:58.163 "data_offset": 0, 00:19:58.163 "data_size": 65536 00:19:58.163 } 00:19:58.163 ] 00:19:58.163 }' 00:19:58.163 11:30:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:58.164 11:30:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.164 11:30:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:58.164 11:30:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.164 11:30:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:58.732 [2024-11-26 11:30:16.885288] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:58.992 [2024-11-26 11:30:17.221052] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:59.251 [2024-11-26 11:30:17.328155] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:59.251 [2024-11-26 11:30:17.330139] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
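The progress snapshots in this trace come from a plain polling loop: fetch all raid bdevs, select raid_bdev1 with jq, and inspect its process object until the rebuild finishes or the timeout hits. A sketch of that loop, assuming the same socket and reusing the jq filters visible above (435 is the timeout value the script logs; the exact break condition is a reconstruction, not the script's literal code):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=435

while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # While a rebuild is running, .process.type is "rebuild" and .process.target
    # names the bdev being written ("spare" here); both collapse to "none" after.
    [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]] && break
    sleep 1
done

Once the loop exits, the script re-verifies that the array is still online with three of four members operational, which is exactly the state dump that follows.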
00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.251 11:30:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:59.511 "name": "raid_bdev1", 00:19:59.511 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:59.511 "strip_size_kb": 0, 00:19:59.511 "state": "online", 00:19:59.511 "raid_level": "raid1", 00:19:59.511 "superblock": false, 00:19:59.511 "num_base_bdevs": 4, 00:19:59.511 "num_base_bdevs_discovered": 3, 00:19:59.511 "num_base_bdevs_operational": 3, 00:19:59.511 "base_bdevs_list": [ 00:19:59.511 { 00:19:59.511 "name": "spare", 00:19:59.511 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:59.511 "is_configured": true, 00:19:59.511 "data_offset": 0, 00:19:59.511 "data_size": 65536 00:19:59.511 }, 00:19:59.511 { 00:19:59.511 "name": null, 00:19:59.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.511 "is_configured": false, 00:19:59.511 "data_offset": 0, 00:19:59.511 "data_size": 65536 00:19:59.511 }, 00:19:59.511 { 00:19:59.511 "name": "BaseBdev3", 00:19:59.511 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:59.511 "is_configured": true, 00:19:59.511 "data_offset": 0, 00:19:59.511 "data_size": 65536 00:19:59.511 }, 00:19:59.511 { 00:19:59.511 "name": "BaseBdev4", 00:19:59.511 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:59.511 "is_configured": true, 00:19:59.511 "data_offset": 0, 00:19:59.511 "data_size": 65536 00:19:59.511 } 00:19:59.511 ] 00:19:59.511 }' 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@660 -- # break 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.511 11:30:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:59.770 "name": "raid_bdev1", 00:19:59.770 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:19:59.770 "strip_size_kb": 0, 00:19:59.770 "state": "online", 00:19:59.770 "raid_level": "raid1", 00:19:59.770 "superblock": false, 00:19:59.770 "num_base_bdevs": 4, 00:19:59.770 "num_base_bdevs_discovered": 3, 00:19:59.770 "num_base_bdevs_operational": 3, 00:19:59.770 "base_bdevs_list": [ 00:19:59.770 { 00:19:59.770 "name": "spare", 00:19:59.770 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:19:59.770 "is_configured": true, 00:19:59.770 "data_offset": 0, 00:19:59.770 "data_size": 65536 00:19:59.770 
}, 00:19:59.770 { 00:19:59.770 "name": null, 00:19:59.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.770 "is_configured": false, 00:19:59.770 "data_offset": 0, 00:19:59.770 "data_size": 65536 00:19:59.770 }, 00:19:59.770 { 00:19:59.770 "name": "BaseBdev3", 00:19:59.770 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:19:59.770 "is_configured": true, 00:19:59.770 "data_offset": 0, 00:19:59.770 "data_size": 65536 00:19:59.770 }, 00:19:59.770 { 00:19:59.770 "name": "BaseBdev4", 00:19:59.770 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:19:59.770 "is_configured": true, 00:19:59.770 "data_offset": 0, 00:19:59.770 "data_size": 65536 00:19:59.770 } 00:19:59.770 ] 00:19:59.770 }' 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.770 11:30:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.030 11:30:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.030 "name": "raid_bdev1", 00:20:00.030 "uuid": "7ffd1809-3af8-4742-9c40-12c71742b859", 00:20:00.030 "strip_size_kb": 0, 00:20:00.030 "state": "online", 00:20:00.030 "raid_level": "raid1", 00:20:00.030 "superblock": false, 00:20:00.030 "num_base_bdevs": 4, 00:20:00.030 "num_base_bdevs_discovered": 3, 00:20:00.030 "num_base_bdevs_operational": 3, 00:20:00.030 "base_bdevs_list": [ 00:20:00.030 { 00:20:00.030 "name": "spare", 00:20:00.030 "uuid": "8e01f353-933c-5af1-8660-351db0bb7445", 00:20:00.030 "is_configured": true, 00:20:00.030 "data_offset": 0, 00:20:00.030 "data_size": 65536 00:20:00.030 }, 00:20:00.030 { 00:20:00.030 "name": null, 00:20:00.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.030 "is_configured": false, 00:20:00.030 "data_offset": 0, 00:20:00.030 "data_size": 65536 00:20:00.030 }, 00:20:00.030 { 00:20:00.030 "name": "BaseBdev3", 00:20:00.030 "uuid": "3be86b2f-6481-4b1f-9fb2-bb07a5f1e1e3", 00:20:00.030 "is_configured": true, 00:20:00.030 "data_offset": 0, 00:20:00.030 "data_size": 65536 00:20:00.030 }, 00:20:00.030 { 00:20:00.030 "name": "BaseBdev4", 00:20:00.030 "uuid": "7e7f7024-869c-45a1-9cb7-3a4d139b12bf", 00:20:00.030 "is_configured": true, 00:20:00.030 "data_offset": 0, 00:20:00.030 "data_size": 65536 00:20:00.030 } 00:20:00.030 ] 00:20:00.030 }' 00:20:00.030 11:30:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.030 11:30:18 -- 
common/autotest_common.sh@10 -- # set +x 00:20:00.289 11:30:18 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:00.548 [2024-11-26 11:30:18.682324] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.548 [2024-11-26 11:30:18.682366] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.548 00:20:00.548 Latency(us) 00:20:00.548 [2024-11-26T11:30:18.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.548 [2024-11-26T11:30:18.778Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:00.548 raid_bdev1 : 9.81 92.09 276.28 0.00 0.00 14247.59 258.79 118203.11 00:20:00.548 [2024-11-26T11:30:18.778Z] =================================================================================================================== 00:20:00.548 [2024-11-26T11:30:18.778Z] Total : 92.09 276.28 0.00 0.00 14247.59 258.79 118203.11 00:20:00.548 [2024-11-26 11:30:18.758183] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.548 [2024-11-26 11:30:18.758250] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.548 [2024-11-26 11:30:18.758358] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.548 [2024-11-26 11:30:18.758376] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:20:00.548 0 00:20:00.548 11:30:18 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.549 11:30:18 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:00.808 11:30:18 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:00.808 11:30:18 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:00.808 11:30:18 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@12 -- # local i 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:00.808 11:30:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:01.067 /dev/nbd0 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:01.067 11:30:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:01.067 11:30:19 -- common/autotest_common.sh@867 -- # local i 00:20:01.067 11:30:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:01.067 11:30:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:01.067 11:30:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:01.067 11:30:19 -- common/autotest_common.sh@871 -- # break 00:20:01.067 11:30:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:01.067 11:30:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:01.067 
11:30:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.067 1+0 records in 00:20:01.067 1+0 records out 00:20:01.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023981 s, 17.1 MB/s 00:20:01.067 11:30:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.067 11:30:19 -- common/autotest_common.sh@884 -- # size=4096 00:20:01.067 11:30:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.067 11:30:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:01.067 11:30:19 -- common/autotest_common.sh@887 -- # return 0 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:01.067 11:30:19 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:01.067 11:30:19 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:20:01.067 11:30:19 -- bdev/bdev_raid.sh@678 -- # continue 00:20:01.067 11:30:19 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:01.067 11:30:19 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:20:01.067 11:30:19 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@12 -- # local i 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:01.067 11:30:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:01.325 /dev/nbd1 00:20:01.325 11:30:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:01.325 11:30:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:01.325 11:30:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:01.325 11:30:19 -- common/autotest_common.sh@867 -- # local i 00:20:01.325 11:30:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:01.325 11:30:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:01.325 11:30:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:01.325 11:30:19 -- common/autotest_common.sh@871 -- # break 00:20:01.325 11:30:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:01.325 11:30:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:01.325 11:30:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.325 1+0 records in 00:20:01.325 1+0 records out 00:20:01.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264929 s, 15.5 MB/s 00:20:01.325 11:30:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.325 11:30:19 -- common/autotest_common.sh@884 -- # size=4096 00:20:01.325 11:30:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.325 11:30:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:01.325 11:30:19 -- common/autotest_common.sh@887 -- # return 0 
00:20:01.325 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.325 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:01.325 11:30:19 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:01.583 11:30:19 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@51 -- # local i 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@41 -- # break 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@45 -- # return 0 00:20:01.583 11:30:19 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:01.583 11:30:19 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:20:01.583 11:30:19 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@12 -- # local i 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:01.583 11:30:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:01.843 /dev/nbd1 00:20:01.843 11:30:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:01.843 11:30:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:01.843 11:30:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:01.843 11:30:20 -- common/autotest_common.sh@867 -- # local i 00:20:01.843 11:30:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:01.843 11:30:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:01.843 11:30:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:01.843 11:30:20 -- common/autotest_common.sh@871 -- # break 00:20:01.843 11:30:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:01.843 11:30:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:01.843 11:30:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.843 1+0 records in 00:20:01.843 1+0 records out 00:20:01.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327097 s, 12.5 MB/s 00:20:01.843 11:30:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.843 
11:30:20 -- common/autotest_common.sh@884 -- # size=4096 00:20:01.843 11:30:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.843 11:30:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:01.843 11:30:20 -- common/autotest_common.sh@887 -- # return 0 00:20:01.843 11:30:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.843 11:30:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:01.843 11:30:20 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:02.102 11:30:20 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:02.102 11:30:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:02.103 11:30:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:02.103 11:30:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.103 11:30:20 -- bdev/nbd_common.sh@51 -- # local i 00:20:02.103 11:30:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.103 11:30:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@41 -- # break 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@45 -- # return 0 00:20:02.361 11:30:20 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:02.361 11:30:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:02.362 11:30:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:02.362 11:30:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.362 11:30:20 -- bdev/nbd_common.sh@51 -- # local i 00:20:02.362 11:30:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.362 11:30:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@41 -- # break 00:20:02.620 11:30:20 -- bdev/nbd_common.sh@45 -- # return 0 00:20:02.620 11:30:20 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:02.620 11:30:20 -- bdev/bdev_raid.sh@709 -- # killprocess 91150 00:20:02.620 11:30:20 -- common/autotest_common.sh@936 -- # '[' -z 91150 ']' 00:20:02.620 11:30:20 -- common/autotest_common.sh@940 -- # kill -0 91150 00:20:02.620 11:30:20 -- common/autotest_common.sh@941 -- # uname 00:20:02.620 11:30:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.620 11:30:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91150 00:20:02.620 11:30:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.620 11:30:20 -- common/autotest_common.sh@946 -- # '[' 
reactor_0 = sudo ']' 00:20:02.620 killing process with pid 91150 00:20:02.620 11:30:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91150' 00:20:02.620 11:30:20 -- common/autotest_common.sh@955 -- # kill 91150 00:20:02.620 Received shutdown signal, test time was about 11.737059 seconds 00:20:02.620 00:20:02.620 Latency(us) 00:20:02.620 [2024-11-26T11:30:20.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.620 [2024-11-26T11:30:20.850Z] =================================================================================================================== 00:20:02.620 [2024-11-26T11:30:20.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.620 [2024-11-26 11:30:20.686146] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:02.620 11:30:20 -- common/autotest_common.sh@960 -- # wait 91150 00:20:02.620 [2024-11-26 11:30:20.714324] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.879 11:30:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:02.879 00:20:02.879 real 0m15.865s 00:20:02.879 user 0m23.954s 00:20:02.879 sys 0m2.218s 00:20:02.879 11:30:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:02.879 11:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 ************************************ 00:20:02.879 END TEST raid_rebuild_test_io 00:20:02.879 ************************************ 00:20:02.879 11:30:20 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:20:02.879 11:30:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:02.879 11:30:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.879 11:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 ************************************ 00:20:02.879 START TEST raid_rebuild_test_sb_io 00:20:02.879 ************************************ 00:20:02.879 11:30:20 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:20:02.879 11:30:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:02.879 11:30:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:02.879 11:30:20 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:02.879 11:30:20 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 
00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=91611 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 91611 /var/tmp/spdk-raid.sock 00:20:02.880 11:30:20 -- common/autotest_common.sh@829 -- # '[' -z 91611 ']' 00:20:02.880 11:30:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.880 11:30:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.880 11:30:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:02.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:02.880 11:30:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.880 11:30:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.880 11:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:02.880 [2024-11-26 11:30:21.022626] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:02.880 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:02.880 Zero copy mechanism will not be used. 
00:20:02.880 [2024-11-26 11:30:21.022846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91611 ] 00:20:03.138 [2024-11-26 11:30:21.190790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.138 [2024-11-26 11:30:21.233109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.138 [2024-11-26 11:30:21.271761] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.070 11:30:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.070 11:30:21 -- common/autotest_common.sh@862 -- # return 0 00:20:04.070 11:30:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:04.070 11:30:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:04.070 11:30:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:04.070 BaseBdev1_malloc 00:20:04.070 11:30:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.328 [2024-11-26 11:30:22.449503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:04.328 [2024-11-26 11:30:22.449594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.328 [2024-11-26 11:30:22.449628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:04.328 [2024-11-26 11:30:22.449649] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.328 [2024-11-26 11:30:22.452511] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.328 [2024-11-26 11:30:22.452619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.328 BaseBdev1 00:20:04.328 11:30:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:04.328 11:30:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:04.328 11:30:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:04.586 BaseBdev2_malloc 00:20:04.586 11:30:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:04.844 [2024-11-26 11:30:22.880742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:04.844 [2024-11-26 11:30:22.881051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.844 [2024-11-26 11:30:22.881138] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:04.844 [2024-11-26 11:30:22.881409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.844 [2024-11-26 11:30:22.884211] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.844 [2024-11-26 11:30:22.884472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:04.844 BaseBdev2 00:20:04.844 11:30:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:04.844 11:30:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:04.844 11:30:22 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:05.102 BaseBdev3_malloc 00:20:05.102 11:30:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:05.102 [2024-11-26 11:30:23.328897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:05.102 [2024-11-26 11:30:23.328990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.102 [2024-11-26 11:30:23.329020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:20:05.102 [2024-11-26 11:30:23.329036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.102 [2024-11-26 11:30:23.331633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.102 [2024-11-26 11:30:23.331745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:05.102 BaseBdev3 00:20:05.360 11:30:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:05.360 11:30:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:05.360 11:30:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:05.360 BaseBdev4_malloc 00:20:05.360 11:30:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:05.618 [2024-11-26 11:30:23.755952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:05.618 [2024-11-26 11:30:23.756047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.618 [2024-11-26 11:30:23.756100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:20:05.618 [2024-11-26 11:30:23.756119] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.618 [2024-11-26 11:30:23.758787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.618 [2024-11-26 11:30:23.758835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:05.618 BaseBdev4 00:20:05.618 11:30:23 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:05.876 spare_malloc 00:20:05.876 11:30:23 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:06.134 spare_delay 00:20:06.134 11:30:24 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:06.391 [2024-11-26 11:30:24.431037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:06.391 [2024-11-26 11:30:24.431142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.391 [2024-11-26 11:30:24.431181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:06.392 [2024-11-26 11:30:24.431198] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.392 [2024-11-26 11:30:24.433949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:06.392 [2024-11-26 11:30:24.434006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:06.392 spare 00:20:06.392 11:30:24 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:06.650 [2024-11-26 11:30:24.679170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.650 [2024-11-26 11:30:24.681562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.650 [2024-11-26 11:30:24.681813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:06.650 [2024-11-26 11:30:24.681936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:06.650 [2024-11-26 11:30:24.682201] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:20:06.650 [2024-11-26 11:30:24.682222] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:06.650 [2024-11-26 11:30:24.682350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:06.650 [2024-11-26 11:30:24.682745] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:20:06.650 [2024-11-26 11:30:24.682763] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:20:06.650 [2024-11-26 11:30:24.682979] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.650 11:30:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.908 11:30:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.908 "name": "raid_bdev1", 00:20:06.908 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:06.908 "strip_size_kb": 0, 00:20:06.908 "state": "online", 00:20:06.908 "raid_level": "raid1", 00:20:06.908 "superblock": true, 00:20:06.908 "num_base_bdevs": 4, 00:20:06.908 "num_base_bdevs_discovered": 4, 00:20:06.908 "num_base_bdevs_operational": 4, 00:20:06.908 "base_bdevs_list": [ 00:20:06.908 { 00:20:06.908 "name": "BaseBdev1", 00:20:06.908 "uuid": "1a6f6580-a05a-53fc-be31-df0c1adaa906", 00:20:06.908 "is_configured": true, 00:20:06.908 "data_offset": 2048, 00:20:06.908 "data_size": 63488 00:20:06.908 }, 00:20:06.908 { 00:20:06.908 "name": "BaseBdev2", 00:20:06.908 "uuid": "90764c7f-5a6a-5108-bc4a-73bb6282a9a8", 00:20:06.908 "is_configured": true, 00:20:06.908 "data_offset": 2048, 
00:20:06.908 "data_size": 63488 00:20:06.908 }, 00:20:06.908 { 00:20:06.908 "name": "BaseBdev3", 00:20:06.908 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:06.908 "is_configured": true, 00:20:06.908 "data_offset": 2048, 00:20:06.908 "data_size": 63488 00:20:06.908 }, 00:20:06.908 { 00:20:06.908 "name": "BaseBdev4", 00:20:06.908 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:06.908 "is_configured": true, 00:20:06.908 "data_offset": 2048, 00:20:06.908 "data_size": 63488 00:20:06.908 } 00:20:06.908 ] 00:20:06.908 }' 00:20:06.908 11:30:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.908 11:30:24 -- common/autotest_common.sh@10 -- # set +x 00:20:07.166 11:30:25 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:07.166 11:30:25 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:07.424 [2024-11-26 11:30:25.455626] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.424 11:30:25 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:07.424 11:30:25 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.424 11:30:25 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:07.683 11:30:25 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:07.683 11:30:25 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:07.683 11:30:25 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:07.683 11:30:25 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:07.683 [2024-11-26 11:30:25.789068] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:20:07.683 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.683 Zero copy mechanism will not be used. 00:20:07.683 Running I/O for 60 seconds... 
00:20:07.683 [2024-11-26 11:30:25.896712] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:07.683 [2024-11-26 11:30:25.911985] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.942 11:30:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.201 11:30:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.201 "name": "raid_bdev1", 00:20:08.201 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:08.201 "strip_size_kb": 0, 00:20:08.201 "state": "online", 00:20:08.201 "raid_level": "raid1", 00:20:08.201 "superblock": true, 00:20:08.201 "num_base_bdevs": 4, 00:20:08.201 "num_base_bdevs_discovered": 3, 00:20:08.201 "num_base_bdevs_operational": 3, 00:20:08.201 "base_bdevs_list": [ 00:20:08.201 { 00:20:08.201 "name": null, 00:20:08.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.201 "is_configured": false, 00:20:08.201 "data_offset": 2048, 00:20:08.201 "data_size": 63488 00:20:08.201 }, 00:20:08.201 { 00:20:08.201 "name": "BaseBdev2", 00:20:08.201 "uuid": "90764c7f-5a6a-5108-bc4a-73bb6282a9a8", 00:20:08.201 "is_configured": true, 00:20:08.201 "data_offset": 2048, 00:20:08.201 "data_size": 63488 00:20:08.201 }, 00:20:08.201 { 00:20:08.201 "name": "BaseBdev3", 00:20:08.201 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:08.201 "is_configured": true, 00:20:08.201 "data_offset": 2048, 00:20:08.201 "data_size": 63488 00:20:08.201 }, 00:20:08.201 { 00:20:08.201 "name": "BaseBdev4", 00:20:08.201 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:08.201 "is_configured": true, 00:20:08.201 "data_offset": 2048, 00:20:08.201 "data_size": 63488 00:20:08.201 } 00:20:08.201 ] 00:20:08.201 }' 00:20:08.201 11:30:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.201 11:30:26 -- common/autotest_common.sh@10 -- # set +x 00:20:08.459 11:30:26 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.718 [2024-11-26 11:30:26.854170] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:08.718 [2024-11-26 11:30:26.854232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.718 11:30:26 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:08.718 [2024-11-26 11:30:26.905735] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:08.718 [2024-11-26 11:30:26.908498] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.985 
[2024-11-26 11:30:27.019380] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.985 [2024-11-26 11:30:27.019825] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:09.257 [2024-11-26 11:30:27.240736] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:09.257 [2024-11-26 11:30:27.241035] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:09.515 [2024-11-26 11:30:27.579764] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:09.515 [2024-11-26 11:30:27.580802] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:09.774 [2024-11-26 11:30:27.816350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.774 11:30:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.033 [2024-11-26 11:30:28.141210] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:10.033 [2024-11-26 11:30:28.141665] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:10.033 11:30:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.033 "name": "raid_bdev1", 00:20:10.033 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:10.033 "strip_size_kb": 0, 00:20:10.033 "state": "online", 00:20:10.033 "raid_level": "raid1", 00:20:10.033 "superblock": true, 00:20:10.033 "num_base_bdevs": 4, 00:20:10.033 "num_base_bdevs_discovered": 4, 00:20:10.033 "num_base_bdevs_operational": 4, 00:20:10.033 "process": { 00:20:10.033 "type": "rebuild", 00:20:10.033 "target": "spare", 00:20:10.033 "progress": { 00:20:10.033 "blocks": 14336, 00:20:10.033 "percent": 22 00:20:10.033 } 00:20:10.033 }, 00:20:10.033 "base_bdevs_list": [ 00:20:10.033 { 00:20:10.033 "name": "spare", 00:20:10.033 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:10.033 "is_configured": true, 00:20:10.033 "data_offset": 2048, 00:20:10.033 "data_size": 63488 00:20:10.033 }, 00:20:10.033 { 00:20:10.033 "name": "BaseBdev2", 00:20:10.033 "uuid": "90764c7f-5a6a-5108-bc4a-73bb6282a9a8", 00:20:10.033 "is_configured": true, 00:20:10.033 "data_offset": 2048, 00:20:10.033 "data_size": 63488 00:20:10.033 }, 00:20:10.033 { 00:20:10.033 "name": "BaseBdev3", 00:20:10.033 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:10.033 "is_configured": true, 00:20:10.033 "data_offset": 2048, 00:20:10.033 "data_size": 63488 00:20:10.033 }, 00:20:10.033 { 00:20:10.033 "name": "BaseBdev4", 00:20:10.033 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:10.033 
"is_configured": true, 00:20:10.033 "data_offset": 2048, 00:20:10.033 "data_size": 63488 00:20:10.033 } 00:20:10.033 ] 00:20:10.033 }' 00:20:10.033 11:30:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:10.034 11:30:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.034 11:30:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.034 11:30:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.034 11:30:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:10.293 [2024-11-26 11:30:28.278783] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:10.293 [2024-11-26 11:30:28.465891] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:10.293 [2024-11-26 11:30:28.509381] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:10.293 [2024-11-26 11:30:28.520567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.552 [2024-11-26 11:30:28.535089] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.553 11:30:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.812 11:30:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.812 "name": "raid_bdev1", 00:20:10.812 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:10.812 "strip_size_kb": 0, 00:20:10.812 "state": "online", 00:20:10.812 "raid_level": "raid1", 00:20:10.812 "superblock": true, 00:20:10.812 "num_base_bdevs": 4, 00:20:10.812 "num_base_bdevs_discovered": 3, 00:20:10.812 "num_base_bdevs_operational": 3, 00:20:10.812 "base_bdevs_list": [ 00:20:10.812 { 00:20:10.812 "name": null, 00:20:10.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.812 "is_configured": false, 00:20:10.812 "data_offset": 2048, 00:20:10.812 "data_size": 63488 00:20:10.812 }, 00:20:10.812 { 00:20:10.812 "name": "BaseBdev2", 00:20:10.812 "uuid": "90764c7f-5a6a-5108-bc4a-73bb6282a9a8", 00:20:10.812 "is_configured": true, 00:20:10.812 "data_offset": 2048, 00:20:10.812 "data_size": 63488 00:20:10.812 }, 00:20:10.812 { 00:20:10.812 "name": "BaseBdev3", 00:20:10.812 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:10.812 "is_configured": true, 00:20:10.812 "data_offset": 2048, 00:20:10.812 "data_size": 63488 00:20:10.812 }, 00:20:10.812 { 00:20:10.812 "name": "BaseBdev4", 00:20:10.812 "uuid": 
"f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:10.812 "is_configured": true, 00:20:10.812 "data_offset": 2048, 00:20:10.812 "data_size": 63488 00:20:10.812 } 00:20:10.812 ] 00:20:10.812 }' 00:20:10.812 11:30:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.812 11:30:28 -- common/autotest_common.sh@10 -- # set +x 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.071 11:30:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.330 11:30:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.330 "name": "raid_bdev1", 00:20:11.330 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:11.330 "strip_size_kb": 0, 00:20:11.330 "state": "online", 00:20:11.330 "raid_level": "raid1", 00:20:11.330 "superblock": true, 00:20:11.330 "num_base_bdevs": 4, 00:20:11.330 "num_base_bdevs_discovered": 3, 00:20:11.330 "num_base_bdevs_operational": 3, 00:20:11.330 "base_bdevs_list": [ 00:20:11.330 { 00:20:11.330 "name": null, 00:20:11.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.330 "is_configured": false, 00:20:11.330 "data_offset": 2048, 00:20:11.330 "data_size": 63488 00:20:11.330 }, 00:20:11.330 { 00:20:11.330 "name": "BaseBdev2", 00:20:11.330 "uuid": "90764c7f-5a6a-5108-bc4a-73bb6282a9a8", 00:20:11.330 "is_configured": true, 00:20:11.330 "data_offset": 2048, 00:20:11.330 "data_size": 63488 00:20:11.330 }, 00:20:11.330 { 00:20:11.330 "name": "BaseBdev3", 00:20:11.330 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:11.330 "is_configured": true, 00:20:11.330 "data_offset": 2048, 00:20:11.330 "data_size": 63488 00:20:11.330 }, 00:20:11.330 { 00:20:11.330 "name": "BaseBdev4", 00:20:11.330 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:11.330 "is_configured": true, 00:20:11.330 "data_offset": 2048, 00:20:11.330 "data_size": 63488 00:20:11.330 } 00:20:11.330 ] 00:20:11.330 }' 00:20:11.330 11:30:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.330 11:30:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:11.330 11:30:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.330 11:30:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:11.330 11:30:29 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:11.899 [2024-11-26 11:30:29.835201] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:11.899 [2024-11-26 11:30:29.835273] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.899 11:30:29 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:11.899 [2024-11-26 11:30:29.892157] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:20:11.899 [2024-11-26 11:30:29.894760] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:11.899 [2024-11-26 11:30:30.044319] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:20:12.158 [2024-11-26 11:30:30.191890] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:12.158 [2024-11-26 11:30:30.192199] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:12.418 [2024-11-26 11:30:30.550833] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:12.418 [2024-11-26 11:30:30.560344] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.677 11:30:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.937 11:30:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.937 "name": "raid_bdev1", 00:20:12.937 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:12.937 "strip_size_kb": 0, 00:20:12.937 "state": "online", 00:20:12.937 "raid_level": "raid1", 00:20:12.937 "superblock": true, 00:20:12.937 "num_base_bdevs": 4, 00:20:12.937 "num_base_bdevs_discovered": 4, 00:20:12.937 "num_base_bdevs_operational": 4, 00:20:12.937 "process": { 00:20:12.937 "type": "rebuild", 00:20:12.937 "target": "spare", 00:20:12.937 "progress": { 00:20:12.937 "blocks": 14336, 00:20:12.937 "percent": 22 00:20:12.937 } 00:20:12.937 }, 00:20:12.937 "base_bdevs_list": [ 00:20:12.937 { 00:20:12.937 "name": "spare", 00:20:12.937 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:12.937 "is_configured": true, 00:20:12.937 "data_offset": 2048, 00:20:12.937 "data_size": 63488 00:20:12.937 }, 00:20:12.937 { 00:20:12.937 "name": "BaseBdev2", 00:20:12.937 "uuid": "90764c7f-5a6a-5108-bc4a-73bb6282a9a8", 00:20:12.937 "is_configured": true, 00:20:12.937 "data_offset": 2048, 00:20:12.937 "data_size": 63488 00:20:12.937 }, 00:20:12.937 { 00:20:12.937 "name": "BaseBdev3", 00:20:12.937 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:12.937 "is_configured": true, 00:20:12.937 "data_offset": 2048, 00:20:12.937 "data_size": 63488 00:20:12.937 }, 00:20:12.937 { 00:20:12.937 "name": "BaseBdev4", 00:20:12.937 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:12.937 "is_configured": true, 00:20:12.937 "data_offset": 2048, 00:20:12.937 "data_size": 63488 00:20:12.937 } 00:20:12.937 ] 00:20:12.937 }' 00:20:12.937 11:30:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:13.196 [2024-11-26 11:30:31.183490] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 
00:20:13.196 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:13.196 11:30:31 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:13.196 [2024-11-26 11:30:31.192669] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:13.196 [2024-11-26 11:30:31.416765] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.456 [2024-11-26 11:30:31.556519] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:20:13.456 [2024-11-26 11:30:31.556579] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.716 11:30:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.716 [2024-11-26 11:30:31.708463] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:13.975 [2024-11-26 11:30:31.971403] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:13.975 [2024-11-26 11:30:31.971833] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:13.975 "name": "raid_bdev1", 00:20:13.975 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:13.975 "strip_size_kb": 0, 00:20:13.975 "state": "online", 00:20:13.975 "raid_level": "raid1", 00:20:13.975 "superblock": true, 00:20:13.975 "num_base_bdevs": 4, 00:20:13.975 "num_base_bdevs_discovered": 3, 00:20:13.975 "num_base_bdevs_operational": 3, 00:20:13.975 "process": { 00:20:13.975 "type": "rebuild", 00:20:13.975 "target": "spare", 00:20:13.975 "progress": { 00:20:13.975 "blocks": 20480, 00:20:13.975 "percent": 32 00:20:13.975 } 00:20:13.975 }, 00:20:13.975 "base_bdevs_list": [ 00:20:13.975 { 00:20:13.975 "name": "spare", 00:20:13.975 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:13.975 "is_configured": true, 00:20:13.975 "data_offset": 2048, 00:20:13.975 "data_size": 63488 00:20:13.975 }, 00:20:13.975 { 00:20:13.975 "name": null, 00:20:13.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.975 "is_configured": false, 00:20:13.975 "data_offset": 2048, 00:20:13.975 "data_size": 63488 00:20:13.975 }, 00:20:13.975 { 00:20:13.975 "name": "BaseBdev3", 00:20:13.975 "uuid": 
"fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:13.975 "is_configured": true, 00:20:13.975 "data_offset": 2048, 00:20:13.975 "data_size": 63488 00:20:13.975 }, 00:20:13.975 { 00:20:13.975 "name": "BaseBdev4", 00:20:13.975 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:13.975 "is_configured": true, 00:20:13.975 "data_offset": 2048, 00:20:13.975 "data_size": 63488 00:20:13.975 } 00:20:13.975 ] 00:20:13.975 }' 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@657 -- # local timeout=453 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.975 11:30:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.234 [2024-11-26 11:30:32.307181] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:14.234 11:30:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:14.234 "name": "raid_bdev1", 00:20:14.234 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:14.234 "strip_size_kb": 0, 00:20:14.234 "state": "online", 00:20:14.234 "raid_level": "raid1", 00:20:14.234 "superblock": true, 00:20:14.234 "num_base_bdevs": 4, 00:20:14.234 "num_base_bdevs_discovered": 3, 00:20:14.234 "num_base_bdevs_operational": 3, 00:20:14.234 "process": { 00:20:14.234 "type": "rebuild", 00:20:14.234 "target": "spare", 00:20:14.234 "progress": { 00:20:14.234 "blocks": 24576, 00:20:14.234 "percent": 38 00:20:14.234 } 00:20:14.234 }, 00:20:14.234 "base_bdevs_list": [ 00:20:14.234 { 00:20:14.234 "name": "spare", 00:20:14.234 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:14.234 "is_configured": true, 00:20:14.234 "data_offset": 2048, 00:20:14.234 "data_size": 63488 00:20:14.234 }, 00:20:14.234 { 00:20:14.234 "name": null, 00:20:14.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.234 "is_configured": false, 00:20:14.234 "data_offset": 2048, 00:20:14.234 "data_size": 63488 00:20:14.234 }, 00:20:14.234 { 00:20:14.234 "name": "BaseBdev3", 00:20:14.234 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:14.234 "is_configured": true, 00:20:14.234 "data_offset": 2048, 00:20:14.234 "data_size": 63488 00:20:14.234 }, 00:20:14.234 { 00:20:14.234 "name": "BaseBdev4", 00:20:14.234 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:14.234 "is_configured": true, 00:20:14.234 "data_offset": 2048, 00:20:14.234 "data_size": 63488 00:20:14.234 } 00:20:14.234 ] 00:20:14.234 }' 00:20:14.234 11:30:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:14.234 11:30:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.234 11:30:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // 
"none"' 00:20:14.234 11:30:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.234 11:30:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:14.493 [2024-11-26 11:30:32.536135] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:14.493 [2024-11-26 11:30:32.536445] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:15.430 "name": "raid_bdev1", 00:20:15.430 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:15.430 "strip_size_kb": 0, 00:20:15.430 "state": "online", 00:20:15.430 "raid_level": "raid1", 00:20:15.430 "superblock": true, 00:20:15.430 "num_base_bdevs": 4, 00:20:15.430 "num_base_bdevs_discovered": 3, 00:20:15.430 "num_base_bdevs_operational": 3, 00:20:15.430 "process": { 00:20:15.430 "type": "rebuild", 00:20:15.430 "target": "spare", 00:20:15.430 "progress": { 00:20:15.430 "blocks": 43008, 00:20:15.430 "percent": 67 00:20:15.430 } 00:20:15.430 }, 00:20:15.430 "base_bdevs_list": [ 00:20:15.430 { 00:20:15.430 "name": "spare", 00:20:15.430 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:15.430 "is_configured": true, 00:20:15.430 "data_offset": 2048, 00:20:15.430 "data_size": 63488 00:20:15.430 }, 00:20:15.430 { 00:20:15.430 "name": null, 00:20:15.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.430 "is_configured": false, 00:20:15.430 "data_offset": 2048, 00:20:15.430 "data_size": 63488 00:20:15.430 }, 00:20:15.430 { 00:20:15.430 "name": "BaseBdev3", 00:20:15.430 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:15.430 "is_configured": true, 00:20:15.430 "data_offset": 2048, 00:20:15.430 "data_size": 63488 00:20:15.430 }, 00:20:15.430 { 00:20:15.430 "name": "BaseBdev4", 00:20:15.430 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:15.430 "is_configured": true, 00:20:15.430 "data_offset": 2048, 00:20:15.430 "data_size": 63488 00:20:15.430 } 00:20:15.430 ] 00:20:15.430 }' 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.430 11:30:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:15.689 [2024-11-26 11:30:33.683496] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:15.689 [2024-11-26 11:30:33.683885] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:15.949 [2024-11-26 
11:30:34.026200] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:15.949 [2024-11-26 11:30:34.026725] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:16.207 [2024-11-26 11:30:34.377399] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:16.466 [2024-11-26 11:30:34.489083] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.466 11:30:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.726 [2024-11-26 11:30:34.716195] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:16.726 [2024-11-26 11:30:34.824262] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:16.726 [2024-11-26 11:30:34.827301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.726 11:30:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.726 "name": "raid_bdev1", 00:20:16.726 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:16.726 "strip_size_kb": 0, 00:20:16.726 "state": "online", 00:20:16.726 "raid_level": "raid1", 00:20:16.726 "superblock": true, 00:20:16.726 "num_base_bdevs": 4, 00:20:16.726 "num_base_bdevs_discovered": 3, 00:20:16.726 "num_base_bdevs_operational": 3, 00:20:16.726 "base_bdevs_list": [ 00:20:16.726 { 00:20:16.726 "name": "spare", 00:20:16.726 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:16.726 "is_configured": true, 00:20:16.726 "data_offset": 2048, 00:20:16.726 "data_size": 63488 00:20:16.726 }, 00:20:16.726 { 00:20:16.726 "name": null, 00:20:16.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.726 "is_configured": false, 00:20:16.726 "data_offset": 2048, 00:20:16.726 "data_size": 63488 00:20:16.726 }, 00:20:16.726 { 00:20:16.726 "name": "BaseBdev3", 00:20:16.726 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:16.726 "is_configured": true, 00:20:16.726 "data_offset": 2048, 00:20:16.726 "data_size": 63488 00:20:16.727 }, 00:20:16.727 { 00:20:16.727 "name": "BaseBdev4", 00:20:16.727 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:16.727 "is_configured": true, 00:20:16.727 "data_offset": 2048, 00:20:16.727 "data_size": 63488 00:20:16.727 } 00:20:16.727 ] 00:20:16.727 }' 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@660 -- # break 00:20:16.727 11:30:34 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.727 11:30:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.986 "name": "raid_bdev1", 00:20:16.986 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:16.986 "strip_size_kb": 0, 00:20:16.986 "state": "online", 00:20:16.986 "raid_level": "raid1", 00:20:16.986 "superblock": true, 00:20:16.986 "num_base_bdevs": 4, 00:20:16.986 "num_base_bdevs_discovered": 3, 00:20:16.986 "num_base_bdevs_operational": 3, 00:20:16.986 "base_bdevs_list": [ 00:20:16.986 { 00:20:16.986 "name": "spare", 00:20:16.986 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:16.986 "is_configured": true, 00:20:16.986 "data_offset": 2048, 00:20:16.986 "data_size": 63488 00:20:16.986 }, 00:20:16.986 { 00:20:16.986 "name": null, 00:20:16.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.986 "is_configured": false, 00:20:16.986 "data_offset": 2048, 00:20:16.986 "data_size": 63488 00:20:16.986 }, 00:20:16.986 { 00:20:16.986 "name": "BaseBdev3", 00:20:16.986 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:16.986 "is_configured": true, 00:20:16.986 "data_offset": 2048, 00:20:16.986 "data_size": 63488 00:20:16.986 }, 00:20:16.986 { 00:20:16.986 "name": "BaseBdev4", 00:20:16.986 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:16.986 "is_configured": true, 00:20:16.986 "data_offset": 2048, 00:20:16.986 "data_size": 63488 00:20:16.986 } 00:20:16.986 ] 00:20:16.986 }' 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.986 11:30:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.245 11:30:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.245 "name": "raid_bdev1", 00:20:17.245 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:17.245 
"strip_size_kb": 0, 00:20:17.245 "state": "online", 00:20:17.245 "raid_level": "raid1", 00:20:17.245 "superblock": true, 00:20:17.245 "num_base_bdevs": 4, 00:20:17.245 "num_base_bdevs_discovered": 3, 00:20:17.245 "num_base_bdevs_operational": 3, 00:20:17.245 "base_bdevs_list": [ 00:20:17.245 { 00:20:17.245 "name": "spare", 00:20:17.245 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:17.245 "is_configured": true, 00:20:17.245 "data_offset": 2048, 00:20:17.245 "data_size": 63488 00:20:17.245 }, 00:20:17.245 { 00:20:17.245 "name": null, 00:20:17.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.245 "is_configured": false, 00:20:17.245 "data_offset": 2048, 00:20:17.245 "data_size": 63488 00:20:17.245 }, 00:20:17.245 { 00:20:17.245 "name": "BaseBdev3", 00:20:17.245 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:17.245 "is_configured": true, 00:20:17.245 "data_offset": 2048, 00:20:17.245 "data_size": 63488 00:20:17.245 }, 00:20:17.245 { 00:20:17.245 "name": "BaseBdev4", 00:20:17.245 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:17.245 "is_configured": true, 00:20:17.245 "data_offset": 2048, 00:20:17.245 "data_size": 63488 00:20:17.245 } 00:20:17.245 ] 00:20:17.245 }' 00:20:17.245 11:30:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.245 11:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:17.505 11:30:35 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:17.765 [2024-11-26 11:30:35.992930] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.765 [2024-11-26 11:30:35.992990] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.024 00:20:18.024 Latency(us) 00:20:18.024 [2024-11-26T11:30:36.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.024 [2024-11-26T11:30:36.254Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:18.024 raid_bdev1 : 10.27 93.19 279.58 0.00 0.00 13967.32 296.03 121539.49 00:20:18.024 [2024-11-26T11:30:36.254Z] =================================================================================================================== 00:20:18.024 [2024-11-26T11:30:36.254Z] Total : 93.19 279.58 0.00 0.00 13967.32 296.03 121539.49 00:20:18.024 [2024-11-26 11:30:36.065259] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.024 [2024-11-26 11:30:36.065317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.024 0 00:20:18.024 [2024-11-26 11:30:36.065496] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.024 [2024-11-26 11:30:36.065523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:20:18.024 11:30:36 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.024 11:30:36 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:18.283 11:30:36 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:18.283 11:30:36 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:18.283 11:30:36 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:18.283 11:30:36 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.283 11:30:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:18.542 /dev/nbd0 00:20:18.542 11:30:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:18.542 11:30:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:18.542 11:30:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:18.542 11:30:36 -- common/autotest_common.sh@867 -- # local i 00:20:18.542 11:30:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:18.542 11:30:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:18.542 11:30:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:18.542 11:30:36 -- common/autotest_common.sh@871 -- # break 00:20:18.542 11:30:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:18.542 11:30:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:18.542 11:30:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.542 1+0 records in 00:20:18.542 1+0 records out 00:20:18.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249534 s, 16.4 MB/s 00:20:18.542 11:30:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.542 11:30:36 -- common/autotest_common.sh@884 -- # size=4096 00:20:18.542 11:30:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.542 11:30:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:18.542 11:30:36 -- common/autotest_common.sh@887 -- # return 0 00:20:18.542 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.542 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.542 11:30:36 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:18.542 11:30:36 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:20:18.542 11:30:36 -- bdev/bdev_raid.sh@678 -- # continue 00:20:18.543 11:30:36 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:18.543 11:30:36 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:20:18.543 11:30:36 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.543 11:30:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:18.801 /dev/nbd1 00:20:18.801 11:30:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:18.801 11:30:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:18.801 11:30:36 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:18.801 11:30:36 -- common/autotest_common.sh@867 -- # local i 00:20:18.801 11:30:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:18.801 11:30:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:18.801 11:30:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:18.801 11:30:36 -- common/autotest_common.sh@871 -- # break 00:20:18.801 11:30:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:18.801 11:30:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:18.801 11:30:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.801 1+0 records in 00:20:18.801 1+0 records out 00:20:18.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276184 s, 14.8 MB/s 00:20:18.801 11:30:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.801 11:30:36 -- common/autotest_common.sh@884 -- # size=4096 00:20:18.801 11:30:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.801 11:30:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:18.801 11:30:36 -- common/autotest_common.sh@887 -- # return 0 00:20:18.801 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.801 11:30:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.802 11:30:36 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:19.060 11:30:37 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:19.060 11:30:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.060 11:30:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:19.060 11:30:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.060 11:30:37 -- bdev/nbd_common.sh@51 -- # local i 00:20:19.060 11:30:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.060 11:30:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@41 -- # break 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.319 11:30:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:19.319 11:30:37 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:20:19.319 11:30:37 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@12 -- # local i 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:19.319 11:30:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.319 11:30:37 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:19.578 /dev/nbd1 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:19.578 11:30:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:19.578 11:30:37 -- common/autotest_common.sh@867 -- # local i 00:20:19.578 11:30:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:19.578 11:30:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:19.578 11:30:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:19.578 11:30:37 -- common/autotest_common.sh@871 -- # break 00:20:19.578 11:30:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:19.578 11:30:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:19.578 11:30:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.578 1+0 records in 00:20:19.578 1+0 records out 00:20:19.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343176 s, 11.9 MB/s 00:20:19.578 11:30:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.578 11:30:37 -- common/autotest_common.sh@884 -- # size=4096 00:20:19.578 11:30:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.578 11:30:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:19.578 11:30:37 -- common/autotest_common.sh@887 -- # return 0 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.578 11:30:37 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:19.578 11:30:37 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@51 -- # local i 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.578 11:30:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@41 -- # break 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.837 11:30:37 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@51 -- # local i 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.837 11:30:37 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@41 -- # break 00:20:20.096 11:30:38 -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.096 11:30:38 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:20.096 11:30:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:20.096 11:30:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:20.096 11:30:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:20.355 11:30:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:20.614 [2024-11-26 11:30:38.784857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:20.614 [2024-11-26 11:30:38.784979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.614 [2024-11-26 11:30:38.785012] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:20:20.614 [2024-11-26 11:30:38.785084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.614 [2024-11-26 11:30:38.787674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.615 [2024-11-26 11:30:38.787748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:20.615 [2024-11-26 11:30:38.787826] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:20.615 [2024-11-26 11:30:38.787905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:20.615 BaseBdev1 00:20:20.615 11:30:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:20.615 11:30:38 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:20:20.615 11:30:38 -- bdev/bdev_raid.sh@696 -- # continue 00:20:20.615 11:30:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:20.615 11:30:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:20:20.615 11:30:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:20:20.873 11:30:39 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:21.132 [2024-11-26 11:30:39.265126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:21.132 [2024-11-26 11:30:39.265214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.132 [2024-11-26 11:30:39.265251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:20:21.132 [2024-11-26 11:30:39.265268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.132 [2024-11-26 11:30:39.265740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.132 
[2024-11-26 11:30:39.265783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:21.132 [2024-11-26 11:30:39.265891] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:20:21.132 [2024-11-26 11:30:39.265944] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:20:21.132 [2024-11-26 11:30:39.265957] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.132 [2024-11-26 11:30:39.265990] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state configuring 00:20:21.132 [2024-11-26 11:30:39.266048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.132 BaseBdev3 00:20:21.132 11:30:39 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:21.132 11:30:39 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:20:21.132 11:30:39 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:20:21.390 11:30:39 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:21.648 [2024-11-26 11:30:39.709292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:21.648 [2024-11-26 11:30:39.709429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.648 [2024-11-26 11:30:39.709465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:20:21.648 [2024-11-26 11:30:39.709478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.648 [2024-11-26 11:30:39.709930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.648 [2024-11-26 11:30:39.709979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:21.648 [2024-11-26 11:30:39.710062] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:20:21.648 [2024-11-26 11:30:39.710090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:21.648 BaseBdev4 00:20:21.648 11:30:39 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:21.906 11:30:39 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:22.164 [2024-11-26 11:30:40.229592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:22.164 [2024-11-26 11:30:40.229692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.164 [2024-11-26 11:30:40.229725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:20:22.164 [2024-11-26 11:30:40.229737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.164 [2024-11-26 11:30:40.230289] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.164 [2024-11-26 11:30:40.230327] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:22.164 [2024-11-26 11:30:40.230445] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 
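[editor's note] The nbd_common.sh traces a little earlier in this section repeatedly run the same readiness probe before touching a freshly attached /dev/nbdX device. Pieced together from the xtrace lines (autotest_common.sh@866-887), the helper plausibly looks like the sketch below: the /proc/partitions grep, the O_DIRECT dd probe, the stat size check, and the 20-try bound all come straight from the trace, while the sleep interval, the scratch-file path, and the failure handling are assumptions.

    # Plausible reconstruction of the waitfornbd() helper traced above.
    waitfornbd() {
        local nbd_name=$1
        local i
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device is actually readable: copy one 4 KiB block,
        # bypassing the page cache so a dead backend cannot hide behind it.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        # A zero-byte scratch file means the read never completed.
        [ "$size" != 0 ] && return 0
        return 1
    }
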
00:20:22.164 [2024-11-26 11:30:40.230491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.164 spare 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.164 11:30:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.164 [2024-11-26 11:30:40.330639] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c380 00:20:22.164 [2024-11-26 11:30:40.330675] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:22.164 [2024-11-26 11:30:40.330814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036870 00:20:22.164 [2024-11-26 11:30:40.331317] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c380 00:20:22.164 [2024-11-26 11:30:40.331352] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c380 00:20:22.164 [2024-11-26 11:30:40.331533] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.423 11:30:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.423 "name": "raid_bdev1", 00:20:22.423 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:22.423 "strip_size_kb": 0, 00:20:22.423 "state": "online", 00:20:22.423 "raid_level": "raid1", 00:20:22.423 "superblock": true, 00:20:22.423 "num_base_bdevs": 4, 00:20:22.423 "num_base_bdevs_discovered": 3, 00:20:22.423 "num_base_bdevs_operational": 3, 00:20:22.423 "base_bdevs_list": [ 00:20:22.423 { 00:20:22.423 "name": "spare", 00:20:22.423 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:22.423 "is_configured": true, 00:20:22.423 "data_offset": 2048, 00:20:22.423 "data_size": 63488 00:20:22.423 }, 00:20:22.423 { 00:20:22.423 "name": null, 00:20:22.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.423 "is_configured": false, 00:20:22.423 "data_offset": 2048, 00:20:22.423 "data_size": 63488 00:20:22.423 }, 00:20:22.423 { 00:20:22.423 "name": "BaseBdev3", 00:20:22.423 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:22.423 "is_configured": true, 00:20:22.423 "data_offset": 2048, 00:20:22.423 "data_size": 63488 00:20:22.423 }, 00:20:22.423 { 00:20:22.423 "name": "BaseBdev4", 00:20:22.423 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:22.423 "is_configured": true, 00:20:22.423 "data_offset": 2048, 00:20:22.423 "data_size": 63488 00:20:22.423 } 00:20:22.423 ] 00:20:22.423 }' 00:20:22.423 11:30:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.423 11:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.682 11:30:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.941 11:30:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:22.941 "name": "raid_bdev1", 00:20:22.941 "uuid": "d1305b37-3ede-4ab6-a208-f651675e864b", 00:20:22.941 "strip_size_kb": 0, 00:20:22.941 "state": "online", 00:20:22.941 "raid_level": "raid1", 00:20:22.941 "superblock": true, 00:20:22.941 "num_base_bdevs": 4, 00:20:22.942 "num_base_bdevs_discovered": 3, 00:20:22.942 "num_base_bdevs_operational": 3, 00:20:22.942 "base_bdevs_list": [ 00:20:22.942 { 00:20:22.942 "name": "spare", 00:20:22.942 "uuid": "b96d97fb-9ca9-555b-9ec2-5a47fd4bc1c7", 00:20:22.942 "is_configured": true, 00:20:22.942 "data_offset": 2048, 00:20:22.942 "data_size": 63488 00:20:22.942 }, 00:20:22.942 { 00:20:22.942 "name": null, 00:20:22.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.942 "is_configured": false, 00:20:22.942 "data_offset": 2048, 00:20:22.942 "data_size": 63488 00:20:22.942 }, 00:20:22.942 { 00:20:22.942 "name": "BaseBdev3", 00:20:22.942 "uuid": "fb82d229-733e-5081-bee1-dc9cd3ae4830", 00:20:22.942 "is_configured": true, 00:20:22.942 "data_offset": 2048, 00:20:22.942 "data_size": 63488 00:20:22.942 }, 00:20:22.942 { 00:20:22.942 "name": "BaseBdev4", 00:20:22.942 "uuid": "f2b1c443-8b5d-5e0b-80a1-ff467aa61cda", 00:20:22.942 "is_configured": true, 00:20:22.942 "data_offset": 2048, 00:20:22.942 "data_size": 63488 00:20:22.942 } 00:20:22.942 ] 00:20:22.942 }' 00:20:22.942 11:30:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:22.942 11:30:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:22.942 11:30:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:22.942 11:30:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:22.942 11:30:41 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.942 11:30:41 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:23.201 11:30:41 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.201 11:30:41 -- bdev/bdev_raid.sh@709 -- # killprocess 91611 00:20:23.201 11:30:41 -- common/autotest_common.sh@936 -- # '[' -z 91611 ']' 00:20:23.201 11:30:41 -- common/autotest_common.sh@940 -- # kill -0 91611 00:20:23.201 11:30:41 -- common/autotest_common.sh@941 -- # uname 00:20:23.201 11:30:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.201 11:30:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91611 00:20:23.201 killing process with pid 91611 00:20:23.201 Received shutdown signal, test time was about 15.578039 seconds 00:20:23.201 00:20:23.201 Latency(us) 00:20:23.201 [2024-11-26T11:30:41.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.201 [2024-11-26T11:30:41.431Z] =================================================================================================================== 00:20:23.201 [2024-11-26T11:30:41.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:20:23.201 11:30:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:23.201 11:30:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:23.201 11:30:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91611' 00:20:23.201 11:30:41 -- common/autotest_common.sh@955 -- # kill 91611 00:20:23.201 [2024-11-26 11:30:41.369588] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.201 11:30:41 -- common/autotest_common.sh@960 -- # wait 91611 00:20:23.201 [2024-11-26 11:30:41.369670] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.201 [2024-11-26 11:30:41.369760] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.201 [2024-11-26 11:30:41.369773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c380 name raid_bdev1, state offline 00:20:23.201 [2024-11-26 11:30:41.399056] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.461 ************************************ 00:20:23.461 END TEST raid_rebuild_test_sb_io 00:20:23.461 ************************************ 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:23.461 00:20:23.461 real 0m20.623s 00:20:23.461 user 0m32.744s 00:20:23.461 sys 0m2.887s 00:20:23.461 11:30:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:23.461 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:20:23.461 11:30:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:23.461 11:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.461 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:20:23.461 ************************************ 00:20:23.461 START TEST raid5f_state_function_test 00:20:23.461 ************************************ 00:20:23.461 11:30:41 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:23.461 
11:30:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=92163 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 92163' 00:20:23.461 Process raid pid: 92163 00:20:23.461 11:30:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 92163 /var/tmp/spdk-raid.sock 00:20:23.461 11:30:41 -- common/autotest_common.sh@829 -- # '[' -z 92163 ']' 00:20:23.461 11:30:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:23.461 11:30:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:23.461 11:30:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:23.461 11:30:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.461 11:30:41 -- common/autotest_common.sh@10 -- # set +x 00:20:23.461 [2024-11-26 11:30:41.692787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
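[editor's note] Worth flagging for anyone debugging this run: the first line of this section records a genuine script error, "bdev_raid.sh: line 617: [: =: unary operator expected". That message is the classic signature of an unquoted, empty operand inside a POSIX [ ] test -- here most likely the just-emptied base_bdevs[1] slot. The log does not show line 617 itself, so the snippet below is a generic illustration of the failure mode and its fix, not the actual code.

    # Minimal reproduction: with the operand unquoted and empty, the [
    # builtin sees two arguments instead of three and cannot parse them.
    val=
    [ $val = rebuild ]        # expands to: [ = rebuild ]
    # bash: [: =: unary operator expected

    # Either robust form avoids it:
    [ "$val" = rebuild ]      # the empty string survives as an argument
    [[ $val == rebuild ]]     # [[ ]] never word-splits its operands
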
00:20:23.461 [2024-11-26 11:30:41.692996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.720 [2024-11-26 11:30:41.853785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.720 [2024-11-26 11:30:41.895012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.720 [2024-11-26 11:30:41.934252] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.659 11:30:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.659 11:30:42 -- common/autotest_common.sh@862 -- # return 0 00:20:24.659 11:30:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:24.919 [2024-11-26 11:30:42.932369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.919 [2024-11-26 11:30:42.932464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:24.919 [2024-11-26 11:30:42.932486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.919 [2024-11-26 11:30:42.932529] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.919 [2024-11-26 11:30:42.932540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:24.919 [2024-11-26 11:30:42.932554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.919 11:30:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.178 11:30:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.178 "name": "Existed_Raid", 00:20:25.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.178 "strip_size_kb": 64, 00:20:25.178 "state": "configuring", 00:20:25.178 "raid_level": "raid5f", 00:20:25.178 "superblock": false, 00:20:25.178 "num_base_bdevs": 3, 00:20:25.178 "num_base_bdevs_discovered": 0, 00:20:25.178 "num_base_bdevs_operational": 3, 00:20:25.178 "base_bdevs_list": [ 00:20:25.178 { 00:20:25.178 "name": "BaseBdev1", 00:20:25.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.178 "is_configured": false, 00:20:25.178 "data_offset": 0, 00:20:25.178 "data_size": 0 00:20:25.178 }, 00:20:25.178 { 00:20:25.178 "name": "BaseBdev2", 00:20:25.178 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:25.178 "is_configured": false, 00:20:25.178 "data_offset": 0, 00:20:25.178 "data_size": 0 00:20:25.178 }, 00:20:25.178 { 00:20:25.178 "name": "BaseBdev3", 00:20:25.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.178 "is_configured": false, 00:20:25.178 "data_offset": 0, 00:20:25.178 "data_size": 0 00:20:25.178 } 00:20:25.178 ] 00:20:25.178 }' 00:20:25.178 11:30:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.178 11:30:43 -- common/autotest_common.sh@10 -- # set +x 00:20:25.452 11:30:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:25.725 [2024-11-26 11:30:43.736554] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.725 [2024-11-26 11:30:43.736638] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:20:25.725 11:30:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:25.984 [2024-11-26 11:30:43.976641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.984 [2024-11-26 11:30:43.976711] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:25.984 [2024-11-26 11:30:43.976745] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.984 [2024-11-26 11:30:43.976756] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.984 [2024-11-26 11:30:43.976766] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:25.984 [2024-11-26 11:30:43.976775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:25.984 11:30:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.984 [2024-11-26 11:30:44.170856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.984 BaseBdev1 00:20:25.984 11:30:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:25.984 11:30:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:25.984 11:30:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:25.984 11:30:44 -- common/autotest_common.sh@899 -- # local i 00:20:25.984 11:30:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:25.984 11:30:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:25.984 11:30:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:26.243 11:30:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.503 [ 00:20:26.503 { 00:20:26.503 "name": "BaseBdev1", 00:20:26.503 "aliases": [ 00:20:26.503 "9bec068e-8283-41ce-86bf-7d772cc10ca5" 00:20:26.503 ], 00:20:26.503 "product_name": "Malloc disk", 00:20:26.503 "block_size": 512, 00:20:26.503 "num_blocks": 65536, 00:20:26.503 "uuid": "9bec068e-8283-41ce-86bf-7d772cc10ca5", 00:20:26.503 "assigned_rate_limits": { 00:20:26.503 "rw_ios_per_sec": 0, 00:20:26.503 "rw_mbytes_per_sec": 0, 00:20:26.503 "r_mbytes_per_sec": 0, 00:20:26.503 "w_mbytes_per_sec": 
0 00:20:26.503 }, 00:20:26.503 "claimed": true, 00:20:26.503 "claim_type": "exclusive_write", 00:20:26.503 "zoned": false, 00:20:26.503 "supported_io_types": { 00:20:26.503 "read": true, 00:20:26.503 "write": true, 00:20:26.503 "unmap": true, 00:20:26.503 "write_zeroes": true, 00:20:26.503 "flush": true, 00:20:26.503 "reset": true, 00:20:26.503 "compare": false, 00:20:26.503 "compare_and_write": false, 00:20:26.503 "abort": true, 00:20:26.503 "nvme_admin": false, 00:20:26.503 "nvme_io": false 00:20:26.503 }, 00:20:26.503 "memory_domains": [ 00:20:26.503 { 00:20:26.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.503 "dma_device_type": 2 00:20:26.503 } 00:20:26.503 ], 00:20:26.503 "driver_specific": {} 00:20:26.503 } 00:20:26.503 ] 00:20:26.503 11:30:44 -- common/autotest_common.sh@905 -- # return 0 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.503 11:30:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.762 11:30:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.762 "name": "Existed_Raid", 00:20:26.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.762 "strip_size_kb": 64, 00:20:26.762 "state": "configuring", 00:20:26.762 "raid_level": "raid5f", 00:20:26.762 "superblock": false, 00:20:26.762 "num_base_bdevs": 3, 00:20:26.762 "num_base_bdevs_discovered": 1, 00:20:26.762 "num_base_bdevs_operational": 3, 00:20:26.762 "base_bdevs_list": [ 00:20:26.762 { 00:20:26.762 "name": "BaseBdev1", 00:20:26.762 "uuid": "9bec068e-8283-41ce-86bf-7d772cc10ca5", 00:20:26.762 "is_configured": true, 00:20:26.762 "data_offset": 0, 00:20:26.762 "data_size": 65536 00:20:26.762 }, 00:20:26.762 { 00:20:26.762 "name": "BaseBdev2", 00:20:26.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.762 "is_configured": false, 00:20:26.762 "data_offset": 0, 00:20:26.762 "data_size": 0 00:20:26.762 }, 00:20:26.762 { 00:20:26.762 "name": "BaseBdev3", 00:20:26.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.762 "is_configured": false, 00:20:26.762 "data_offset": 0, 00:20:26.762 "data_size": 0 00:20:26.762 } 00:20:26.762 ] 00:20:26.762 }' 00:20:26.762 11:30:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.762 11:30:44 -- common/autotest_common.sh@10 -- # set +x 00:20:27.021 11:30:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:27.280 [2024-11-26 11:30:45.323214] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:27.280 [2024-11-26 11:30:45.323273] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:20:27.280 11:30:45 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:27.280 11:30:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:27.540 [2024-11-26 11:30:45.567347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.540 [2024-11-26 11:30:45.569498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.540 [2024-11-26 11:30:45.569543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.540 [2024-11-26 11:30:45.569575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:27.540 [2024-11-26 11:30:45.569586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.540 "name": "Existed_Raid", 00:20:27.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.540 "strip_size_kb": 64, 00:20:27.540 "state": "configuring", 00:20:27.540 "raid_level": "raid5f", 00:20:27.540 "superblock": false, 00:20:27.540 "num_base_bdevs": 3, 00:20:27.540 "num_base_bdevs_discovered": 1, 00:20:27.540 "num_base_bdevs_operational": 3, 00:20:27.540 "base_bdevs_list": [ 00:20:27.540 { 00:20:27.540 "name": "BaseBdev1", 00:20:27.540 "uuid": "9bec068e-8283-41ce-86bf-7d772cc10ca5", 00:20:27.540 "is_configured": true, 00:20:27.540 "data_offset": 0, 00:20:27.540 "data_size": 65536 00:20:27.540 }, 00:20:27.540 { 00:20:27.540 "name": "BaseBdev2", 00:20:27.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.540 "is_configured": false, 00:20:27.540 "data_offset": 0, 00:20:27.540 "data_size": 0 00:20:27.540 }, 00:20:27.540 { 00:20:27.540 "name": "BaseBdev3", 00:20:27.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.540 "is_configured": false, 00:20:27.540 "data_offset": 0, 00:20:27.540 "data_size": 0 00:20:27.540 } 00:20:27.540 ] 00:20:27.540 }' 00:20:27.540 11:30:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.540 11:30:45 -- common/autotest_common.sh@10 -- # set +x 00:20:28.108 11:30:46 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:28.108 [2024-11-26 11:30:46.277723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.108 BaseBdev2 00:20:28.108 11:30:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:28.108 11:30:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:28.108 11:30:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.108 11:30:46 -- common/autotest_common.sh@899 -- # local i 00:20:28.108 11:30:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.108 11:30:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:28.108 11:30:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.366 11:30:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:28.625 [ 00:20:28.625 { 00:20:28.625 "name": "BaseBdev2", 00:20:28.625 "aliases": [ 00:20:28.625 "ec6c4565-150d-4b0d-864e-363e9211a1e8" 00:20:28.625 ], 00:20:28.625 "product_name": "Malloc disk", 00:20:28.625 "block_size": 512, 00:20:28.625 "num_blocks": 65536, 00:20:28.625 "uuid": "ec6c4565-150d-4b0d-864e-363e9211a1e8", 00:20:28.625 "assigned_rate_limits": { 00:20:28.625 "rw_ios_per_sec": 0, 00:20:28.625 "rw_mbytes_per_sec": 0, 00:20:28.625 "r_mbytes_per_sec": 0, 00:20:28.625 "w_mbytes_per_sec": 0 00:20:28.625 }, 00:20:28.625 "claimed": true, 00:20:28.625 "claim_type": "exclusive_write", 00:20:28.625 "zoned": false, 00:20:28.625 "supported_io_types": { 00:20:28.625 "read": true, 00:20:28.625 "write": true, 00:20:28.625 "unmap": true, 00:20:28.625 "write_zeroes": true, 00:20:28.625 "flush": true, 00:20:28.625 "reset": true, 00:20:28.625 "compare": false, 00:20:28.625 "compare_and_write": false, 00:20:28.625 "abort": true, 00:20:28.625 "nvme_admin": false, 00:20:28.625 "nvme_io": false 00:20:28.625 }, 00:20:28.625 "memory_domains": [ 00:20:28.625 { 00:20:28.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.625 "dma_device_type": 2 00:20:28.625 } 00:20:28.625 ], 00:20:28.625 "driver_specific": {} 00:20:28.625 } 00:20:28.625 ] 00:20:28.625 11:30:46 -- common/autotest_common.sh@905 -- # return 0 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.625 11:30:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
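For readers following the trace: the verify_raid_bdev_state helper seen above boils down to a single RPC piped through jq, both of which appear verbatim in this log. A minimal sketch of reproducing the same state query by hand (socket path and raid name exactly as in this run):

    # Dump all raid bdevs from the service under test, then select the one
    # being verified (this is the command pair at bdev_raid.sh@127)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The JSON assigned to raid_bdev_info just below is what the helper compares against its expected state, raid level, strip size, and operational base-bdev count.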
00:20:28.884 11:30:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.884 "name": "Existed_Raid", 00:20:28.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.884 "strip_size_kb": 64, 00:20:28.884 "state": "configuring", 00:20:28.884 "raid_level": "raid5f", 00:20:28.884 "superblock": false, 00:20:28.884 "num_base_bdevs": 3, 00:20:28.884 "num_base_bdevs_discovered": 2, 00:20:28.884 "num_base_bdevs_operational": 3, 00:20:28.884 "base_bdevs_list": [ 00:20:28.884 { 00:20:28.884 "name": "BaseBdev1", 00:20:28.884 "uuid": "9bec068e-8283-41ce-86bf-7d772cc10ca5", 00:20:28.884 "is_configured": true, 00:20:28.884 "data_offset": 0, 00:20:28.884 "data_size": 65536 00:20:28.884 }, 00:20:28.884 { 00:20:28.884 "name": "BaseBdev2", 00:20:28.884 "uuid": "ec6c4565-150d-4b0d-864e-363e9211a1e8", 00:20:28.884 "is_configured": true, 00:20:28.884 "data_offset": 0, 00:20:28.884 "data_size": 65536 00:20:28.884 }, 00:20:28.884 { 00:20:28.884 "name": "BaseBdev3", 00:20:28.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.884 "is_configured": false, 00:20:28.884 "data_offset": 0, 00:20:28.884 "data_size": 0 00:20:28.884 } 00:20:28.884 ] 00:20:28.884 }' 00:20:28.884 11:30:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.884 11:30:46 -- common/autotest_common.sh@10 -- # set +x 00:20:29.143 11:30:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:29.409 [2024-11-26 11:30:47.394321] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.409 [2024-11-26 11:30:47.394621] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:20:29.409 [2024-11-26 11:30:47.394751] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:29.409 [2024-11-26 11:30:47.394999] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:29.409 [2024-11-26 11:30:47.395740] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:20:29.409 [2024-11-26 11:30:47.395897] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:20:29.409 [2024-11-26 11:30:47.396258] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.409 BaseBdev3 00:20:29.409 11:30:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:29.409 11:30:47 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:29.409 11:30:47 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:29.409 11:30:47 -- common/autotest_common.sh@899 -- # local i 00:20:29.409 11:30:47 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:29.409 11:30:47 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:29.409 11:30:47 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.668 11:30:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:29.668 [ 00:20:29.668 { 00:20:29.668 "name": "BaseBdev3", 00:20:29.668 "aliases": [ 00:20:29.668 "1b02d969-6ef6-4c8c-9e2c-5698df3cfac5" 00:20:29.668 ], 00:20:29.668 "product_name": "Malloc disk", 00:20:29.668 "block_size": 512, 00:20:29.668 "num_blocks": 65536, 00:20:29.668 "uuid": "1b02d969-6ef6-4c8c-9e2c-5698df3cfac5", 00:20:29.668 "assigned_rate_limits": { 00:20:29.668 
"rw_ios_per_sec": 0, 00:20:29.668 "rw_mbytes_per_sec": 0, 00:20:29.668 "r_mbytes_per_sec": 0, 00:20:29.668 "w_mbytes_per_sec": 0 00:20:29.668 }, 00:20:29.668 "claimed": true, 00:20:29.668 "claim_type": "exclusive_write", 00:20:29.668 "zoned": false, 00:20:29.668 "supported_io_types": { 00:20:29.668 "read": true, 00:20:29.668 "write": true, 00:20:29.668 "unmap": true, 00:20:29.668 "write_zeroes": true, 00:20:29.668 "flush": true, 00:20:29.668 "reset": true, 00:20:29.668 "compare": false, 00:20:29.668 "compare_and_write": false, 00:20:29.668 "abort": true, 00:20:29.668 "nvme_admin": false, 00:20:29.668 "nvme_io": false 00:20:29.668 }, 00:20:29.668 "memory_domains": [ 00:20:29.668 { 00:20:29.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.668 "dma_device_type": 2 00:20:29.668 } 00:20:29.668 ], 00:20:29.668 "driver_specific": {} 00:20:29.668 } 00:20:29.668 ] 00:20:29.668 11:30:47 -- common/autotest_common.sh@905 -- # return 0 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.668 11:30:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.927 11:30:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.927 "name": "Existed_Raid", 00:20:29.927 "uuid": "db884712-99ce-41ab-b729-e8de4188b3dd", 00:20:29.927 "strip_size_kb": 64, 00:20:29.927 "state": "online", 00:20:29.927 "raid_level": "raid5f", 00:20:29.927 "superblock": false, 00:20:29.927 "num_base_bdevs": 3, 00:20:29.927 "num_base_bdevs_discovered": 3, 00:20:29.927 "num_base_bdevs_operational": 3, 00:20:29.927 "base_bdevs_list": [ 00:20:29.927 { 00:20:29.927 "name": "BaseBdev1", 00:20:29.927 "uuid": "9bec068e-8283-41ce-86bf-7d772cc10ca5", 00:20:29.927 "is_configured": true, 00:20:29.927 "data_offset": 0, 00:20:29.927 "data_size": 65536 00:20:29.927 }, 00:20:29.927 { 00:20:29.927 "name": "BaseBdev2", 00:20:29.927 "uuid": "ec6c4565-150d-4b0d-864e-363e9211a1e8", 00:20:29.927 "is_configured": true, 00:20:29.927 "data_offset": 0, 00:20:29.927 "data_size": 65536 00:20:29.927 }, 00:20:29.927 { 00:20:29.927 "name": "BaseBdev3", 00:20:29.927 "uuid": "1b02d969-6ef6-4c8c-9e2c-5698df3cfac5", 00:20:29.927 "is_configured": true, 00:20:29.927 "data_offset": 0, 00:20:29.927 "data_size": 65536 00:20:29.927 } 00:20:29.927 ] 00:20:29.927 }' 00:20:29.927 11:30:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.927 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:20:30.186 11:30:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:20:30.446 [2024-11-26 11:30:48.538802] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.446 11:30:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.705 11:30:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.705 "name": "Existed_Raid", 00:20:30.705 "uuid": "db884712-99ce-41ab-b729-e8de4188b3dd", 00:20:30.705 "strip_size_kb": 64, 00:20:30.705 "state": "online", 00:20:30.705 "raid_level": "raid5f", 00:20:30.705 "superblock": false, 00:20:30.705 "num_base_bdevs": 3, 00:20:30.705 "num_base_bdevs_discovered": 2, 00:20:30.705 "num_base_bdevs_operational": 2, 00:20:30.705 "base_bdevs_list": [ 00:20:30.705 { 00:20:30.705 "name": null, 00:20:30.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.705 "is_configured": false, 00:20:30.705 "data_offset": 0, 00:20:30.705 "data_size": 65536 00:20:30.705 }, 00:20:30.705 { 00:20:30.705 "name": "BaseBdev2", 00:20:30.705 "uuid": "ec6c4565-150d-4b0d-864e-363e9211a1e8", 00:20:30.705 "is_configured": true, 00:20:30.705 "data_offset": 0, 00:20:30.705 "data_size": 65536 00:20:30.705 }, 00:20:30.705 { 00:20:30.705 "name": "BaseBdev3", 00:20:30.705 "uuid": "1b02d969-6ef6-4c8c-9e2c-5698df3cfac5", 00:20:30.705 "is_configured": true, 00:20:30.705 "data_offset": 0, 00:20:30.705 "data_size": 65536 00:20:30.705 } 00:20:30.705 ] 00:20:30.705 }' 00:20:30.705 11:30:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.705 11:30:48 -- common/autotest_common.sh@10 -- # set +x 00:20:30.963 11:30:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:30.963 11:30:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:30.964 11:30:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.964 11:30:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.222 11:30:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.222 11:30:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.222 11:30:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:31.482 [2024-11-26 11:30:49.529759] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:31.482 [2024-11-26 11:30:49.530036] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.482 [2024-11-26 11:30:49.530328] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.482 11:30:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.482 11:30:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.482 11:30:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.482 11:30:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:31.741 [2024-11-26 11:30:49.921817] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:31.741 [2024-11-26 11:30:49.922173] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.741 11:30:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:32.000 11:30:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:32.000 11:30:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:32.000 11:30:50 -- bdev/bdev_raid.sh@287 -- # killprocess 92163 00:20:32.000 11:30:50 -- common/autotest_common.sh@936 -- # '[' -z 92163 ']' 00:20:32.000 11:30:50 -- common/autotest_common.sh@940 -- # kill -0 92163 00:20:32.000 11:30:50 -- common/autotest_common.sh@941 -- # uname 00:20:32.000 11:30:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.000 11:30:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92163 00:20:32.000 killing process with pid 92163 00:20:32.000 11:30:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:32.000 11:30:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.000 11:30:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92163' 00:20:32.000 11:30:50 -- common/autotest_common.sh@955 -- # kill 92163 00:20:32.000 [2024-11-26 11:30:50.182134] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.000 11:30:50 -- common/autotest_common.sh@960 -- # wait 92163 00:20:32.000 [2024-11-26 11:30:50.182225] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:32.259 00:20:32.259 real 0m8.726s 00:20:32.259 user 0m15.218s 00:20:32.259 sys 0m1.385s 00:20:32.259 11:30:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:32.259 11:30:50 -- common/autotest_common.sh@10 -- # set +x 00:20:32.259 ************************************ 00:20:32.259 END TEST raid5f_state_function_test 00:20:32.259 ************************************ 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:32.259 11:30:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:32.259 11:30:50 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:20:32.259 11:30:50 -- common/autotest_common.sh@10 -- # set +x 00:20:32.259 ************************************ 00:20:32.259 START TEST raid5f_state_function_test_sb 00:20:32.259 ************************************ 00:20:32.259 11:30:50 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:32.259 Process raid pid: 92481 00:20:32.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=92481 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 92481' 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 92481 /var/tmp/spdk-raid.sock 00:20:32.259 11:30:50 -- common/autotest_common.sh@829 -- # '[' -z 92481 ']' 00:20:32.259 11:30:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:32.259 11:30:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:32.259 11:30:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.259 11:30:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:32.259 11:30:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.259 11:30:50 -- common/autotest_common.sh@10 -- # set +x 00:20:32.259 [2024-11-26 11:30:50.483393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
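The "Starting SPDK ... initialization" notice above, and the DPDK EAL parameter line that follows, come from the bdev_svc process the superblock test has just launched on its own private RPC socket. A sketch of that start-up sequence, using the exact invocation and wait helper from this trace (capturing the pid via $! is an assumption here; the trace only shows the resolved pid 92481):

    # Launch the bare bdev service with raid debug logging on a private socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the process is up and the UNIX-domain socket accepts RPCs
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

Every rpc.py call in the rest of this test is then directed at that socket with -s /var/tmp/spdk-raid.sock.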
00:20:32.259 [2024-11-26 11:30:50.483755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.519 [2024-11-26 11:30:50.651825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.519 [2024-11-26 11:30:50.685622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.519 [2024-11-26 11:30:50.719443] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.456 11:30:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.456 11:30:51 -- common/autotest_common.sh@862 -- # return 0 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:33.456 [2024-11-26 11:30:51.587269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:33.456 [2024-11-26 11:30:51.587337] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:33.456 [2024-11-26 11:30:51.587354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:33.456 [2024-11-26 11:30:51.587365] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:33.456 [2024-11-26 11:30:51.587375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:33.456 [2024-11-26 11:30:51.587387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.456 11:30:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.715 11:30:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.715 "name": "Existed_Raid", 00:20:33.715 "uuid": "e4e28bb9-1b73-439a-94a8-4212be9b1361", 00:20:33.715 "strip_size_kb": 64, 00:20:33.715 "state": "configuring", 00:20:33.715 "raid_level": "raid5f", 00:20:33.715 "superblock": true, 00:20:33.715 "num_base_bdevs": 3, 00:20:33.715 "num_base_bdevs_discovered": 0, 00:20:33.715 "num_base_bdevs_operational": 3, 00:20:33.715 "base_bdevs_list": [ 00:20:33.715 { 00:20:33.715 "name": "BaseBdev1", 00:20:33.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.715 "is_configured": false, 00:20:33.715 "data_offset": 0, 00:20:33.715 "data_size": 0 00:20:33.715 }, 00:20:33.715 { 00:20:33.715 "name": "BaseBdev2", 00:20:33.715 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:33.715 "is_configured": false, 00:20:33.715 "data_offset": 0, 00:20:33.715 "data_size": 0 00:20:33.715 }, 00:20:33.715 { 00:20:33.715 "name": "BaseBdev3", 00:20:33.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.715 "is_configured": false, 00:20:33.715 "data_offset": 0, 00:20:33.715 "data_size": 0 00:20:33.715 } 00:20:33.715 ] 00:20:33.715 }' 00:20:33.715 11:30:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.715 11:30:51 -- common/autotest_common.sh@10 -- # set +x 00:20:33.974 11:30:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:34.232 [2024-11-26 11:30:52.319306] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.232 [2024-11-26 11:30:52.319355] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:20:34.232 11:30:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:34.491 [2024-11-26 11:30:52.519467] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.491 [2024-11-26 11:30:52.519532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:34.491 [2024-11-26 11:30:52.519567] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.491 [2024-11-26 11:30:52.519579] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.491 [2024-11-26 11:30:52.519605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:34.491 [2024-11-26 11:30:52.519615] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:34.491 11:30:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:34.491 [2024-11-26 11:30:52.717579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.491 BaseBdev1 00:20:34.750 11:30:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:34.750 11:30:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:34.750 11:30:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:34.750 11:30:52 -- common/autotest_common.sh@899 -- # local i 00:20:34.751 11:30:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:34.751 11:30:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:34.751 11:30:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.751 11:30:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:35.009 [ 00:20:35.009 { 00:20:35.009 "name": "BaseBdev1", 00:20:35.009 "aliases": [ 00:20:35.009 "7fb495b5-cfdb-470e-9f28-386ae7efd809" 00:20:35.009 ], 00:20:35.009 "product_name": "Malloc disk", 00:20:35.009 "block_size": 512, 00:20:35.009 "num_blocks": 65536, 00:20:35.009 "uuid": "7fb495b5-cfdb-470e-9f28-386ae7efd809", 00:20:35.010 "assigned_rate_limits": { 00:20:35.010 "rw_ios_per_sec": 0, 00:20:35.010 "rw_mbytes_per_sec": 0, 00:20:35.010 "r_mbytes_per_sec": 0, 00:20:35.010 
"w_mbytes_per_sec": 0 00:20:35.010 }, 00:20:35.010 "claimed": true, 00:20:35.010 "claim_type": "exclusive_write", 00:20:35.010 "zoned": false, 00:20:35.010 "supported_io_types": { 00:20:35.010 "read": true, 00:20:35.010 "write": true, 00:20:35.010 "unmap": true, 00:20:35.010 "write_zeroes": true, 00:20:35.010 "flush": true, 00:20:35.010 "reset": true, 00:20:35.010 "compare": false, 00:20:35.010 "compare_and_write": false, 00:20:35.010 "abort": true, 00:20:35.010 "nvme_admin": false, 00:20:35.010 "nvme_io": false 00:20:35.010 }, 00:20:35.010 "memory_domains": [ 00:20:35.010 { 00:20:35.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.010 "dma_device_type": 2 00:20:35.010 } 00:20:35.010 ], 00:20:35.010 "driver_specific": {} 00:20:35.010 } 00:20:35.010 ] 00:20:35.010 11:30:53 -- common/autotest_common.sh@905 -- # return 0 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.010 11:30:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.269 11:30:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.269 "name": "Existed_Raid", 00:20:35.269 "uuid": "268e216f-dd81-4246-a7e9-8bab7d010e69", 00:20:35.269 "strip_size_kb": 64, 00:20:35.269 "state": "configuring", 00:20:35.269 "raid_level": "raid5f", 00:20:35.269 "superblock": true, 00:20:35.269 "num_base_bdevs": 3, 00:20:35.269 "num_base_bdevs_discovered": 1, 00:20:35.269 "num_base_bdevs_operational": 3, 00:20:35.269 "base_bdevs_list": [ 00:20:35.269 { 00:20:35.269 "name": "BaseBdev1", 00:20:35.269 "uuid": "7fb495b5-cfdb-470e-9f28-386ae7efd809", 00:20:35.269 "is_configured": true, 00:20:35.269 "data_offset": 2048, 00:20:35.269 "data_size": 63488 00:20:35.269 }, 00:20:35.269 { 00:20:35.269 "name": "BaseBdev2", 00:20:35.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.269 "is_configured": false, 00:20:35.269 "data_offset": 0, 00:20:35.269 "data_size": 0 00:20:35.269 }, 00:20:35.269 { 00:20:35.269 "name": "BaseBdev3", 00:20:35.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.269 "is_configured": false, 00:20:35.269 "data_offset": 0, 00:20:35.269 "data_size": 0 00:20:35.269 } 00:20:35.269 ] 00:20:35.269 }' 00:20:35.269 11:30:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.269 11:30:53 -- common/autotest_common.sh@10 -- # set +x 00:20:35.528 11:30:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:35.788 [2024-11-26 11:30:53.797884] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.788 [2024-11-26 11:30:53.797979] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:20:35.788 11:30:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:35.788 11:30:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:35.788 11:30:54 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:36.048 BaseBdev1 00:20:36.048 11:30:54 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:36.048 11:30:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:36.048 11:30:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.048 11:30:54 -- common/autotest_common.sh@899 -- # local i 00:20:36.048 11:30:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.048 11:30:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.048 11:30:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.307 11:30:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:36.567 [ 00:20:36.567 { 00:20:36.567 "name": "BaseBdev1", 00:20:36.567 "aliases": [ 00:20:36.567 "1e8e3290-bdf7-4fc6-be29-ee9a5d86f4fe" 00:20:36.567 ], 00:20:36.567 "product_name": "Malloc disk", 00:20:36.567 "block_size": 512, 00:20:36.567 "num_blocks": 65536, 00:20:36.567 "uuid": "1e8e3290-bdf7-4fc6-be29-ee9a5d86f4fe", 00:20:36.567 "assigned_rate_limits": { 00:20:36.567 "rw_ios_per_sec": 0, 00:20:36.567 "rw_mbytes_per_sec": 0, 00:20:36.567 "r_mbytes_per_sec": 0, 00:20:36.567 "w_mbytes_per_sec": 0 00:20:36.567 }, 00:20:36.567 "claimed": false, 00:20:36.567 "zoned": false, 00:20:36.567 "supported_io_types": { 00:20:36.567 "read": true, 00:20:36.567 "write": true, 00:20:36.567 "unmap": true, 00:20:36.567 "write_zeroes": true, 00:20:36.567 "flush": true, 00:20:36.567 "reset": true, 00:20:36.567 "compare": false, 00:20:36.567 "compare_and_write": false, 00:20:36.567 "abort": true, 00:20:36.567 "nvme_admin": false, 00:20:36.567 "nvme_io": false 00:20:36.567 }, 00:20:36.567 "memory_domains": [ 00:20:36.567 { 00:20:36.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.567 "dma_device_type": 2 00:20:36.567 } 00:20:36.567 ], 00:20:36.567 "driver_specific": {} 00:20:36.567 } 00:20:36.567 ] 00:20:36.567 11:30:54 -- common/autotest_common.sh@905 -- # return 0 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:36.567 [2024-11-26 11:30:54.762418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.567 [2024-11-26 11:30:54.764427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.567 [2024-11-26 11:30:54.764475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.567 [2024-11-26 11:30:54.764507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.567 [2024-11-26 11:30:54.764518] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:36.567 
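Note that the only functional difference from the non-superblock run earlier in this log is the -s flag passed to bdev_raid_create. A sketch of the create step just traced (sizes and names exactly as above): with the superblock enabled, each 65536-block base bdev reserves its first 2048 blocks, which is where the "data_offset": 2048 / "data_size": 63488 values in the state dumps come from, and why the assembled 3-disk raid5f reports blockcnt 126976 (two data disks' worth of 63488 blocks).

    # 32 MB malloc base bdev with 512-byte blocks (65536 blocks total)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev1
    # raid5f over three base bdevs, 64 KB strip; -s writes an on-disk superblock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid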
11:30:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.567 11:30:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.826 11:30:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.827 "name": "Existed_Raid", 00:20:36.827 "uuid": "ca06b911-310c-4abb-90a0-03697e7c7825", 00:20:36.827 "strip_size_kb": 64, 00:20:36.827 "state": "configuring", 00:20:36.827 "raid_level": "raid5f", 00:20:36.827 "superblock": true, 00:20:36.827 "num_base_bdevs": 3, 00:20:36.827 "num_base_bdevs_discovered": 1, 00:20:36.827 "num_base_bdevs_operational": 3, 00:20:36.827 "base_bdevs_list": [ 00:20:36.827 { 00:20:36.827 "name": "BaseBdev1", 00:20:36.827 "uuid": "1e8e3290-bdf7-4fc6-be29-ee9a5d86f4fe", 00:20:36.827 "is_configured": true, 00:20:36.827 "data_offset": 2048, 00:20:36.827 "data_size": 63488 00:20:36.827 }, 00:20:36.827 { 00:20:36.827 "name": "BaseBdev2", 00:20:36.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.827 "is_configured": false, 00:20:36.827 "data_offset": 0, 00:20:36.827 "data_size": 0 00:20:36.827 }, 00:20:36.827 { 00:20:36.827 "name": "BaseBdev3", 00:20:36.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.827 "is_configured": false, 00:20:36.827 "data_offset": 0, 00:20:36.827 "data_size": 0 00:20:36.827 } 00:20:36.827 ] 00:20:36.827 }' 00:20:36.827 11:30:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.827 11:30:55 -- common/autotest_common.sh@10 -- # set +x 00:20:37.394 11:30:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:37.394 [2024-11-26 11:30:55.598711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.394 BaseBdev2 00:20:37.394 11:30:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:37.394 11:30:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:37.394 11:30:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:37.394 11:30:55 -- common/autotest_common.sh@899 -- # local i 00:20:37.394 11:30:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:37.394 11:30:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:37.394 11:30:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.653 11:30:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:37.911 [ 00:20:37.911 { 00:20:37.911 "name": "BaseBdev2", 00:20:37.911 "aliases": [ 00:20:37.911 
"7c79c0d1-8736-4be5-ae8f-f45309d94c83" 00:20:37.911 ], 00:20:37.912 "product_name": "Malloc disk", 00:20:37.912 "block_size": 512, 00:20:37.912 "num_blocks": 65536, 00:20:37.912 "uuid": "7c79c0d1-8736-4be5-ae8f-f45309d94c83", 00:20:37.912 "assigned_rate_limits": { 00:20:37.912 "rw_ios_per_sec": 0, 00:20:37.912 "rw_mbytes_per_sec": 0, 00:20:37.912 "r_mbytes_per_sec": 0, 00:20:37.912 "w_mbytes_per_sec": 0 00:20:37.912 }, 00:20:37.912 "claimed": true, 00:20:37.912 "claim_type": "exclusive_write", 00:20:37.912 "zoned": false, 00:20:37.912 "supported_io_types": { 00:20:37.912 "read": true, 00:20:37.912 "write": true, 00:20:37.912 "unmap": true, 00:20:37.912 "write_zeroes": true, 00:20:37.912 "flush": true, 00:20:37.912 "reset": true, 00:20:37.912 "compare": false, 00:20:37.912 "compare_and_write": false, 00:20:37.912 "abort": true, 00:20:37.912 "nvme_admin": false, 00:20:37.912 "nvme_io": false 00:20:37.912 }, 00:20:37.912 "memory_domains": [ 00:20:37.912 { 00:20:37.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.912 "dma_device_type": 2 00:20:37.912 } 00:20:37.912 ], 00:20:37.912 "driver_specific": {} 00:20:37.912 } 00:20:37.912 ] 00:20:37.912 11:30:56 -- common/autotest_common.sh@905 -- # return 0 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.912 11:30:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.170 11:30:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:38.170 "name": "Existed_Raid", 00:20:38.170 "uuid": "ca06b911-310c-4abb-90a0-03697e7c7825", 00:20:38.170 "strip_size_kb": 64, 00:20:38.170 "state": "configuring", 00:20:38.170 "raid_level": "raid5f", 00:20:38.170 "superblock": true, 00:20:38.170 "num_base_bdevs": 3, 00:20:38.170 "num_base_bdevs_discovered": 2, 00:20:38.170 "num_base_bdevs_operational": 3, 00:20:38.170 "base_bdevs_list": [ 00:20:38.170 { 00:20:38.170 "name": "BaseBdev1", 00:20:38.170 "uuid": "1e8e3290-bdf7-4fc6-be29-ee9a5d86f4fe", 00:20:38.170 "is_configured": true, 00:20:38.170 "data_offset": 2048, 00:20:38.170 "data_size": 63488 00:20:38.170 }, 00:20:38.170 { 00:20:38.170 "name": "BaseBdev2", 00:20:38.170 "uuid": "7c79c0d1-8736-4be5-ae8f-f45309d94c83", 00:20:38.170 "is_configured": true, 00:20:38.170 "data_offset": 2048, 00:20:38.170 "data_size": 63488 00:20:38.170 }, 00:20:38.170 { 00:20:38.170 "name": "BaseBdev3", 00:20:38.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.170 "is_configured": false, 00:20:38.170 "data_offset": 0, 00:20:38.170 "data_size": 0 
00:20:38.170 } 00:20:38.170 ] 00:20:38.170 }' 00:20:38.170 11:30:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.170 11:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:38.429 11:30:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:38.688 [2024-11-26 11:30:56.739378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:38.688 [2024-11-26 11:30:56.739829] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:20:38.688 [2024-11-26 11:30:56.739988] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:38.688 [2024-11-26 11:30:56.740150] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:20:38.688 BaseBdev3 00:20:38.688 [2024-11-26 11:30:56.740849] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:20:38.688 [2024-11-26 11:30:56.740871] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:20:38.688 [2024-11-26 11:30:56.741095] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.688 11:30:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:38.688 11:30:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:38.688 11:30:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:38.688 11:30:56 -- common/autotest_common.sh@899 -- # local i 00:20:38.688 11:30:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:38.688 11:30:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:38.688 11:30:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.948 11:30:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:38.948 [ 00:20:38.948 { 00:20:38.948 "name": "BaseBdev3", 00:20:38.948 "aliases": [ 00:20:38.948 "b2f44921-1d97-4f3e-9cbe-ad7c82916248" 00:20:38.948 ], 00:20:38.948 "product_name": "Malloc disk", 00:20:38.948 "block_size": 512, 00:20:38.948 "num_blocks": 65536, 00:20:38.948 "uuid": "b2f44921-1d97-4f3e-9cbe-ad7c82916248", 00:20:38.948 "assigned_rate_limits": { 00:20:38.948 "rw_ios_per_sec": 0, 00:20:38.948 "rw_mbytes_per_sec": 0, 00:20:38.948 "r_mbytes_per_sec": 0, 00:20:38.948 "w_mbytes_per_sec": 0 00:20:38.948 }, 00:20:38.948 "claimed": true, 00:20:38.948 "claim_type": "exclusive_write", 00:20:38.948 "zoned": false, 00:20:38.948 "supported_io_types": { 00:20:38.948 "read": true, 00:20:38.948 "write": true, 00:20:38.948 "unmap": true, 00:20:38.948 "write_zeroes": true, 00:20:38.948 "flush": true, 00:20:38.948 "reset": true, 00:20:38.948 "compare": false, 00:20:38.948 "compare_and_write": false, 00:20:38.948 "abort": true, 00:20:38.948 "nvme_admin": false, 00:20:38.948 "nvme_io": false 00:20:38.948 }, 00:20:38.948 "memory_domains": [ 00:20:38.948 { 00:20:38.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.948 "dma_device_type": 2 00:20:38.948 } 00:20:38.948 ], 00:20:38.948 "driver_specific": {} 00:20:38.948 } 00:20:38.948 ] 00:20:38.948 11:30:57 -- common/autotest_common.sh@905 -- # return 0 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:38.948 11:30:57 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.948 11:30:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.206 11:30:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.206 "name": "Existed_Raid", 00:20:39.206 "uuid": "ca06b911-310c-4abb-90a0-03697e7c7825", 00:20:39.206 "strip_size_kb": 64, 00:20:39.206 "state": "online", 00:20:39.206 "raid_level": "raid5f", 00:20:39.206 "superblock": true, 00:20:39.206 "num_base_bdevs": 3, 00:20:39.206 "num_base_bdevs_discovered": 3, 00:20:39.206 "num_base_bdevs_operational": 3, 00:20:39.206 "base_bdevs_list": [ 00:20:39.206 { 00:20:39.206 "name": "BaseBdev1", 00:20:39.206 "uuid": "1e8e3290-bdf7-4fc6-be29-ee9a5d86f4fe", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 2048, 00:20:39.206 "data_size": 63488 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev2", 00:20:39.206 "uuid": "7c79c0d1-8736-4be5-ae8f-f45309d94c83", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 2048, 00:20:39.206 "data_size": 63488 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev3", 00:20:39.206 "uuid": "b2f44921-1d97-4f3e-9cbe-ad7c82916248", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 2048, 00:20:39.206 "data_size": 63488 00:20:39.206 } 00:20:39.206 ] 00:20:39.206 }' 00:20:39.206 11:30:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.206 11:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:39.465 11:30:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:39.722 [2024-11-26 11:30:57.815784] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.722 11:30:57 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.722 11:30:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.980 11:30:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.980 "name": "Existed_Raid", 00:20:39.980 "uuid": "ca06b911-310c-4abb-90a0-03697e7c7825", 00:20:39.980 "strip_size_kb": 64, 00:20:39.980 "state": "online", 00:20:39.980 "raid_level": "raid5f", 00:20:39.980 "superblock": true, 00:20:39.980 "num_base_bdevs": 3, 00:20:39.980 "num_base_bdevs_discovered": 2, 00:20:39.980 "num_base_bdevs_operational": 2, 00:20:39.980 "base_bdevs_list": [ 00:20:39.980 { 00:20:39.980 "name": null, 00:20:39.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.980 "is_configured": false, 00:20:39.980 "data_offset": 2048, 00:20:39.980 "data_size": 63488 00:20:39.980 }, 00:20:39.980 { 00:20:39.980 "name": "BaseBdev2", 00:20:39.980 "uuid": "7c79c0d1-8736-4be5-ae8f-f45309d94c83", 00:20:39.980 "is_configured": true, 00:20:39.980 "data_offset": 2048, 00:20:39.980 "data_size": 63488 00:20:39.980 }, 00:20:39.980 { 00:20:39.980 "name": "BaseBdev3", 00:20:39.980 "uuid": "b2f44921-1d97-4f3e-9cbe-ad7c82916248", 00:20:39.980 "is_configured": true, 00:20:39.980 "data_offset": 2048, 00:20:39.980 "data_size": 63488 00:20:39.980 } 00:20:39.980 ] 00:20:39.980 }' 00:20:39.980 11:30:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.980 11:30:58 -- common/autotest_common.sh@10 -- # set +x 00:20:40.239 11:30:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:40.239 11:30:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:40.239 11:30:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.239 11:30:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:40.497 11:30:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:40.497 11:30:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:40.497 11:30:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:40.755 [2024-11-26 11:30:58.771156] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:40.755 [2024-11-26 11:30:58.771194] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:40.755 [2024-11-26 11:30:58.771284] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.755 11:30:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:40.755 11:30:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:40.755 11:30:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.755 11:30:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:41.014 11:30:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:41.014 11:30:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.014 11:30:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:41.272 [2024-11-26 11:30:59.306387] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:20:41.272 [2024-11-26 11:30:59.306460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:20:41.272 11:30:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:41.272 11:30:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:41.272 11:30:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.272 11:30:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:41.531 11:30:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:41.531 11:30:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:41.531 11:30:59 -- bdev/bdev_raid.sh@287 -- # killprocess 92481 00:20:41.531 11:30:59 -- common/autotest_common.sh@936 -- # '[' -z 92481 ']' 00:20:41.531 11:30:59 -- common/autotest_common.sh@940 -- # kill -0 92481 00:20:41.531 11:30:59 -- common/autotest_common.sh@941 -- # uname 00:20:41.531 11:30:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.531 11:30:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92481 00:20:41.531 killing process with pid 92481 00:20:41.531 11:30:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:41.531 11:30:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:41.531 11:30:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92481' 00:20:41.531 11:30:59 -- common/autotest_common.sh@955 -- # kill 92481 00:20:41.531 [2024-11-26 11:30:59.613204] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.531 11:30:59 -- common/autotest_common.sh@960 -- # wait 92481 00:20:41.531 [2024-11-26 11:30:59.613317] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.789 ************************************ 00:20:41.789 END TEST raid5f_state_function_test_sb 00:20:41.789 ************************************ 00:20:41.789 11:30:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:41.789 00:20:41.789 real 0m9.389s 00:20:41.789 user 0m16.405s 00:20:41.789 sys 0m1.471s 00:20:41.789 11:30:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:41.789 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.789 11:30:59 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:20:41.789 11:30:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:41.789 11:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.789 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.789 ************************************ 00:20:41.789 START TEST raid5f_superblock_test 00:20:41.790 ************************************ 00:20:41.790 11:30:59 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=92820 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 92820 /var/tmp/spdk-raid.sock 00:20:41.790 11:30:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:41.790 11:30:59 -- common/autotest_common.sh@829 -- # '[' -z 92820 ']' 00:20:41.790 11:30:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:41.790 11:30:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.790 11:30:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:41.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:41.790 11:30:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.790 11:30:59 -- common/autotest_common.sh@10 -- # set +x 00:20:41.790 [2024-11-26 11:30:59.918560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:41.790 [2024-11-26 11:30:59.918909] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92820 ] 00:20:42.048 [2024-11-26 11:31:00.074884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.048 [2024-11-26 11:31:00.108392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.048 [2024-11-26 11:31:00.139847] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.616 11:31:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.616 11:31:00 -- common/autotest_common.sh@862 -- # return 0 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:42.616 11:31:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:42.875 malloc1 00:20:42.875 11:31:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:43.135 [2024-11-26 11:31:01.331611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:43.135 [2024-11-26 11:31:01.331887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:43.135 [2024-11-26 11:31:01.332007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:43.135 [2024-11-26 11:31:01.332228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.135 [2024-11-26 11:31:01.334904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.135 [2024-11-26 11:31:01.335127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:43.135 pt1 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:43.135 11:31:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:43.395 malloc2 00:20:43.395 11:31:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:43.654 [2024-11-26 11:31:01.785747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:43.654 [2024-11-26 11:31:01.786039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.654 [2024-11-26 11:31:01.786087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:43.654 [2024-11-26 11:31:01.786103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.654 [2024-11-26 11:31:01.788455] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.654 [2024-11-26 11:31:01.788494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:43.654 pt2 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:43.654 11:31:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:43.914 malloc3 00:20:43.914 11:31:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:44.173 [2024-11-26 11:31:02.247560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:44.173 [2024-11-26 11:31:02.247833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
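The @361-@371 loop traced here repeats one pattern per base device: create a 32 MiB malloc bdev (bdev_malloc_create 32 512), wrap it in a passthru bdev carrying a fixed UUID, and push the names into the base_bdevs_* arrays that feed the bdev_raid_create call traced just below. A minimal standalone sketch of the same sequence, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock and rpc.py is invoked from an SPDK checkout (the log uses the full /home/vagrant/spdk_repo/spdk path):

    #!/usr/bin/env bash
    # Sketch of the setup traced in this run: three malloc bdevs, each wrapped
    # in a passthru bdev, then assembled into one raid5f bdev with a superblock.
    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"        # 32 MiB, 512 B blocks
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"      # deterministic UUID
    done
    # -z 64: 64 KiB strip size; -s: write a raid superblock onto the base bdevs
    $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The geometry in the dumps that follow is consistent with this: each 32 MiB malloc bdev holds 65536 blocks, the superblock reserves the first 2048 of them (data_offset 2048), leaving data_size 63488 per base bdev, and the three-disk raid5f exposes two data disks' worth, blockcnt 126976.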
00:20:44.173 [2024-11-26 11:31:02.248028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:20:44.173 [2024-11-26 11:31:02.248167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.173 [2024-11-26 11:31:02.250603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.173 [2024-11-26 11:31:02.250644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:44.173 pt3 00:20:44.173 11:31:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:44.173 11:31:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:44.173 11:31:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:44.432 [2024-11-26 11:31:02.439648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:44.432 [2024-11-26 11:31:02.442001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:44.432 [2024-11-26 11:31:02.442074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:44.432 [2024-11-26 11:31:02.442268] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:20:44.432 [2024-11-26 11:31:02.442292] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:44.432 [2024-11-26 11:31:02.442404] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:44.432 [2024-11-26 11:31:02.443028] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:20:44.432 [2024-11-26 11:31:02.443045] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:20:44.432 [2024-11-26 11:31:02.443180] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.432 11:31:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.433 11:31:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.691 11:31:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.692 "name": "raid_bdev1", 00:20:44.692 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:44.692 "strip_size_kb": 64, 00:20:44.692 "state": "online", 00:20:44.692 "raid_level": "raid5f", 00:20:44.692 "superblock": true, 00:20:44.692 "num_base_bdevs": 3, 00:20:44.692 "num_base_bdevs_discovered": 3, 00:20:44.692 "num_base_bdevs_operational": 3, 00:20:44.692 "base_bdevs_list": [ 00:20:44.692 { 00:20:44.692 "name": "pt1", 00:20:44.692 "uuid": 
"f895e3af-c559-5850-af10-4995b6fad751", 00:20:44.692 "is_configured": true, 00:20:44.692 "data_offset": 2048, 00:20:44.692 "data_size": 63488 00:20:44.692 }, 00:20:44.692 { 00:20:44.692 "name": "pt2", 00:20:44.692 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:44.692 "is_configured": true, 00:20:44.692 "data_offset": 2048, 00:20:44.692 "data_size": 63488 00:20:44.692 }, 00:20:44.692 { 00:20:44.692 "name": "pt3", 00:20:44.692 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:44.692 "is_configured": true, 00:20:44.692 "data_offset": 2048, 00:20:44.692 "data_size": 63488 00:20:44.692 } 00:20:44.692 ] 00:20:44.692 }' 00:20:44.692 11:31:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.692 11:31:02 -- common/autotest_common.sh@10 -- # set +x 00:20:44.951 11:31:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:44.951 11:31:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:44.951 [2024-11-26 11:31:03.160554] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.951 11:31:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=dabafc64-7a50-4af6-801e-5a7c2d591ee0 00:20:44.951 11:31:03 -- bdev/bdev_raid.sh@380 -- # '[' -z dabafc64-7a50-4af6-801e-5a7c2d591ee0 ']' 00:20:44.951 11:31:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:45.209 [2024-11-26 11:31:03.344356] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.209 [2024-11-26 11:31:03.344389] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.209 [2024-11-26 11:31:03.344474] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.209 [2024-11-26 11:31:03.344552] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.209 [2024-11-26 11:31:03.344570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:20:45.209 11:31:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:45.209 11:31:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.468 11:31:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:45.468 11:31:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:45.468 11:31:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.468 11:31:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:45.727 11:31:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.727 11:31:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:45.727 11:31:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.727 11:31:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:45.986 11:31:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:45.986 11:31:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:46.245 11:31:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:46.245 11:31:04 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:46.245 11:31:04 -- common/autotest_common.sh@650 -- # local es=0 00:20:46.245 11:31:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:46.245 11:31:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.245 11:31:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.245 11:31:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.245 11:31:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.245 11:31:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.245 11:31:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.245 11:31:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.245 11:31:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:46.245 11:31:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:46.504 [2024-11-26 11:31:04.544697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:46.504 [2024-11-26 11:31:04.546701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:46.504 [2024-11-26 11:31:04.546752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:46.504 [2024-11-26 11:31:04.546808] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:46.504 [2024-11-26 11:31:04.546899] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:46.504 [2024-11-26 11:31:04.546933] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:46.504 [2024-11-26 11:31:04.546951] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:46.504 [2024-11-26 11:31:04.546964] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:20:46.504 request: 00:20:46.504 { 00:20:46.504 "name": "raid_bdev1", 00:20:46.504 "raid_level": "raid5f", 00:20:46.504 "base_bdevs": [ 00:20:46.504 "malloc1", 00:20:46.504 "malloc2", 00:20:46.504 "malloc3" 00:20:46.504 ], 00:20:46.504 "superblock": false, 00:20:46.504 "strip_size_kb": 64, 00:20:46.504 "method": "bdev_raid_create", 00:20:46.504 "req_id": 1 00:20:46.504 } 00:20:46.504 Got JSON-RPC error response 00:20:46.504 response: 00:20:46.504 { 00:20:46.504 "code": -17, 00:20:46.504 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:46.504 } 00:20:46.504 11:31:04 -- common/autotest_common.sh@653 -- # es=1 00:20:46.504 11:31:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:46.504 11:31:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:46.504 11:31:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:46.504 11:31:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.504 11:31:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:46.763 [2024-11-26 11:31:04.936777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:46.763 [2024-11-26 11:31:04.937051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.763 [2024-11-26 11:31:04.937221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:20:46.763 [2024-11-26 11:31:04.937362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.763 [2024-11-26 11:31:04.939850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.763 [2024-11-26 11:31:04.940067] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:46.763 [2024-11-26 11:31:04.940291] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:46.763 [2024-11-26 11:31:04.940512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:46.763 pt1 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.763 11:31:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.023 11:31:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:47.023 "name": "raid_bdev1", 00:20:47.023 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:47.023 "strip_size_kb": 64, 00:20:47.023 "state": "configuring", 00:20:47.023 "raid_level": "raid5f", 00:20:47.023 "superblock": true, 00:20:47.023 "num_base_bdevs": 3, 00:20:47.023 "num_base_bdevs_discovered": 1, 00:20:47.023 "num_base_bdevs_operational": 3, 00:20:47.023 "base_bdevs_list": [ 00:20:47.023 { 00:20:47.023 "name": "pt1", 00:20:47.023 "uuid": "f895e3af-c559-5850-af10-4995b6fad751", 00:20:47.023 "is_configured": true, 00:20:47.023 "data_offset": 2048, 00:20:47.023 "data_size": 63488 00:20:47.023 }, 00:20:47.023 { 00:20:47.023 "name": null, 00:20:47.023 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:47.023 "is_configured": false, 00:20:47.023 "data_offset": 2048, 00:20:47.023 "data_size": 63488 00:20:47.023 }, 00:20:47.023 { 00:20:47.023 "name": null, 00:20:47.023 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:47.023 "is_configured": false, 00:20:47.023 
"data_offset": 2048, 00:20:47.023 "data_size": 63488 00:20:47.023 } 00:20:47.023 ] 00:20:47.023 }' 00:20:47.023 11:31:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:47.023 11:31:05 -- common/autotest_common.sh@10 -- # set +x 00:20:47.288 11:31:05 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:20:47.288 11:31:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:47.551 [2024-11-26 11:31:05.636977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:47.551 [2024-11-26 11:31:05.637086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.551 [2024-11-26 11:31:05.637116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:20:47.551 [2024-11-26 11:31:05.637133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.551 [2024-11-26 11:31:05.637557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.551 [2024-11-26 11:31:05.637586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:47.551 [2024-11-26 11:31:05.637658] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:47.551 [2024-11-26 11:31:05.637688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:47.551 pt2 00:20:47.551 11:31:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:47.810 [2024-11-26 11:31:05.837024] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.810 11:31:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.070 11:31:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.070 "name": "raid_bdev1", 00:20:48.070 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:48.070 "strip_size_kb": 64, 00:20:48.070 "state": "configuring", 00:20:48.070 "raid_level": "raid5f", 00:20:48.070 "superblock": true, 00:20:48.070 "num_base_bdevs": 3, 00:20:48.070 "num_base_bdevs_discovered": 1, 00:20:48.070 "num_base_bdevs_operational": 3, 00:20:48.070 "base_bdevs_list": [ 00:20:48.070 { 00:20:48.070 "name": "pt1", 00:20:48.070 "uuid": "f895e3af-c559-5850-af10-4995b6fad751", 00:20:48.070 "is_configured": true, 00:20:48.070 "data_offset": 2048, 00:20:48.070 "data_size": 63488 00:20:48.070 }, 00:20:48.070 { 00:20:48.070 "name": null, 00:20:48.070 "uuid": 
"add0de13-fcde-5684-a246-95a4affbdc33", 00:20:48.070 "is_configured": false, 00:20:48.070 "data_offset": 2048, 00:20:48.070 "data_size": 63488 00:20:48.070 }, 00:20:48.070 { 00:20:48.070 "name": null, 00:20:48.070 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:48.070 "is_configured": false, 00:20:48.070 "data_offset": 2048, 00:20:48.070 "data_size": 63488 00:20:48.070 } 00:20:48.070 ] 00:20:48.070 }' 00:20:48.070 11:31:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.070 11:31:06 -- common/autotest_common.sh@10 -- # set +x 00:20:48.331 11:31:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:48.331 11:31:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:48.331 11:31:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:48.610 [2024-11-26 11:31:06.673222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:48.610 [2024-11-26 11:31:06.673540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.610 [2024-11-26 11:31:06.673582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:48.610 [2024-11-26 11:31:06.673596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.611 [2024-11-26 11:31:06.674104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.611 [2024-11-26 11:31:06.674127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:48.611 [2024-11-26 11:31:06.674204] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:48.611 [2024-11-26 11:31:06.674229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:48.611 pt2 00:20:48.611 11:31:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:48.611 11:31:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:48.611 11:31:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:48.885 [2024-11-26 11:31:06.925326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:48.885 [2024-11-26 11:31:06.925582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.885 [2024-11-26 11:31:06.925657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:20:48.885 [2024-11-26 11:31:06.925850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.885 [2024-11-26 11:31:06.926388] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.885 [2024-11-26 11:31:06.926544] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:48.885 [2024-11-26 11:31:06.926745] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:48.885 [2024-11-26 11:31:06.926906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:48.885 [2024-11-26 11:31:06.927243] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:20:48.885 [2024-11-26 11:31:06.927377] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:48.885 [2024-11-26 11:31:06.927503] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:20:48.885 [2024-11-26 11:31:06.928286] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:20:48.885 [2024-11-26 11:31:06.928436] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:20:48.885 [2024-11-26 11:31:06.928688] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.885 pt3 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.885 11:31:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.143 11:31:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.143 "name": "raid_bdev1", 00:20:49.143 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:49.143 "strip_size_kb": 64, 00:20:49.143 "state": "online", 00:20:49.143 "raid_level": "raid5f", 00:20:49.143 "superblock": true, 00:20:49.143 "num_base_bdevs": 3, 00:20:49.143 "num_base_bdevs_discovered": 3, 00:20:49.143 "num_base_bdevs_operational": 3, 00:20:49.143 "base_bdevs_list": [ 00:20:49.143 { 00:20:49.143 "name": "pt1", 00:20:49.143 "uuid": "f895e3af-c559-5850-af10-4995b6fad751", 00:20:49.143 "is_configured": true, 00:20:49.143 "data_offset": 2048, 00:20:49.143 "data_size": 63488 00:20:49.143 }, 00:20:49.143 { 00:20:49.143 "name": "pt2", 00:20:49.143 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:49.143 "is_configured": true, 00:20:49.143 "data_offset": 2048, 00:20:49.143 "data_size": 63488 00:20:49.143 }, 00:20:49.143 { 00:20:49.143 "name": "pt3", 00:20:49.143 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:49.143 "is_configured": true, 00:20:49.143 "data_offset": 2048, 00:20:49.143 "data_size": 63488 00:20:49.143 } 00:20:49.143 ] 00:20:49.143 }' 00:20:49.143 11:31:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.143 11:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:49.402 11:31:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:49.402 11:31:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:49.661 [2024-11-26 11:31:07.706112] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.661 11:31:07 -- bdev/bdev_raid.sh@430 -- # '[' dabafc64-7a50-4af6-801e-5a7c2d591ee0 '!=' dabafc64-7a50-4af6-801e-5a7c2d591ee0 ']' 00:20:49.661 11:31:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:20:49.661 11:31:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:49.661 
11:31:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:49.661 11:31:07 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:49.921 [2024-11-26 11:31:07.906047] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.921 11:31:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.181 11:31:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.181 "name": "raid_bdev1", 00:20:50.181 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:50.181 "strip_size_kb": 64, 00:20:50.181 "state": "online", 00:20:50.181 "raid_level": "raid5f", 00:20:50.181 "superblock": true, 00:20:50.181 "num_base_bdevs": 3, 00:20:50.182 "num_base_bdevs_discovered": 2, 00:20:50.182 "num_base_bdevs_operational": 2, 00:20:50.182 "base_bdevs_list": [ 00:20:50.182 { 00:20:50.182 "name": null, 00:20:50.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.182 "is_configured": false, 00:20:50.182 "data_offset": 2048, 00:20:50.182 "data_size": 63488 00:20:50.182 }, 00:20:50.182 { 00:20:50.182 "name": "pt2", 00:20:50.182 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:50.182 "is_configured": true, 00:20:50.182 "data_offset": 2048, 00:20:50.182 "data_size": 63488 00:20:50.182 }, 00:20:50.182 { 00:20:50.182 "name": "pt3", 00:20:50.182 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:50.182 "is_configured": true, 00:20:50.182 "data_offset": 2048, 00:20:50.182 "data_size": 63488 00:20:50.182 } 00:20:50.182 ] 00:20:50.182 }' 00:20:50.182 11:31:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.182 11:31:08 -- common/autotest_common.sh@10 -- # set +x 00:20:50.441 11:31:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:50.700 [2024-11-26 11:31:08.710458] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.700 [2024-11-26 11:31:08.710511] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.700 [2024-11-26 11:31:08.710594] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.701 [2024-11-26 11:31:08.710686] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.701 [2024-11-26 11:31:08.710707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:20:50.701 11:31:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.701 11:31:08 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:50.960 11:31:08 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:50.960 11:31:08 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:50.960 11:31:08 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:50.960 11:31:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:50.960 11:31:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:50.960 11:31:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:50.960 11:31:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:50.960 11:31:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:51.217 11:31:09 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:51.217 11:31:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:51.217 11:31:09 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:51.217 11:31:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:51.217 11:31:09 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:51.475 [2024-11-26 11:31:09.554677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:51.475 [2024-11-26 11:31:09.554765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.475 [2024-11-26 11:31:09.554792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:20:51.475 [2024-11-26 11:31:09.554823] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.475 [2024-11-26 11:31:09.557568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.475 pt2 00:20:51.475 [2024-11-26 11:31:09.557806] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:51.475 [2024-11-26 11:31:09.557934] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:51.475 [2024-11-26 11:31:09.558010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.476 11:31:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.735 11:31:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.735 "name": "raid_bdev1", 00:20:51.735 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:51.735 "strip_size_kb": 64, 
00:20:51.735 "state": "configuring", 00:20:51.735 "raid_level": "raid5f", 00:20:51.735 "superblock": true, 00:20:51.735 "num_base_bdevs": 3, 00:20:51.735 "num_base_bdevs_discovered": 1, 00:20:51.735 "num_base_bdevs_operational": 2, 00:20:51.735 "base_bdevs_list": [ 00:20:51.735 { 00:20:51.735 "name": null, 00:20:51.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.735 "is_configured": false, 00:20:51.735 "data_offset": 2048, 00:20:51.735 "data_size": 63488 00:20:51.735 }, 00:20:51.735 { 00:20:51.735 "name": "pt2", 00:20:51.735 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:51.735 "is_configured": true, 00:20:51.735 "data_offset": 2048, 00:20:51.735 "data_size": 63488 00:20:51.735 }, 00:20:51.735 { 00:20:51.735 "name": null, 00:20:51.735 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:51.735 "is_configured": false, 00:20:51.735 "data_offset": 2048, 00:20:51.735 "data_size": 63488 00:20:51.735 } 00:20:51.735 ] 00:20:51.735 }' 00:20:51.735 11:31:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.735 11:31:09 -- common/autotest_common.sh@10 -- # set +x 00:20:51.994 11:31:10 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:51.995 11:31:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:51.995 11:31:10 -- bdev/bdev_raid.sh@462 -- # i=2 00:20:51.995 11:31:10 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:52.255 [2024-11-26 11:31:10.250852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:52.255 [2024-11-26 11:31:10.251203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.255 [2024-11-26 11:31:10.251245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:20:52.255 [2024-11-26 11:31:10.251262] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.255 [2024-11-26 11:31:10.251739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.255 [2024-11-26 11:31:10.251766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:52.255 [2024-11-26 11:31:10.251835] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:52.255 [2024-11-26 11:31:10.251866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:52.255 [2024-11-26 11:31:10.251992] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:20:52.255 [2024-11-26 11:31:10.252011] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:52.255 [2024-11-26 11:31:10.252075] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:52.255 [2024-11-26 11:31:10.252851] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:20:52.255 [2024-11-26 11:31:10.252876] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:20:52.255 [2024-11-26 11:31:10.253168] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.255 pt3 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:52.255 
11:31:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.255 11:31:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.513 11:31:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.513 "name": "raid_bdev1", 00:20:52.513 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:52.513 "strip_size_kb": 64, 00:20:52.513 "state": "online", 00:20:52.513 "raid_level": "raid5f", 00:20:52.513 "superblock": true, 00:20:52.513 "num_base_bdevs": 3, 00:20:52.513 "num_base_bdevs_discovered": 2, 00:20:52.513 "num_base_bdevs_operational": 2, 00:20:52.513 "base_bdevs_list": [ 00:20:52.513 { 00:20:52.513 "name": null, 00:20:52.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.513 "is_configured": false, 00:20:52.513 "data_offset": 2048, 00:20:52.513 "data_size": 63488 00:20:52.513 }, 00:20:52.513 { 00:20:52.513 "name": "pt2", 00:20:52.513 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:52.513 "is_configured": true, 00:20:52.513 "data_offset": 2048, 00:20:52.513 "data_size": 63488 00:20:52.513 }, 00:20:52.513 { 00:20:52.513 "name": "pt3", 00:20:52.513 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:52.513 "is_configured": true, 00:20:52.513 "data_offset": 2048, 00:20:52.513 "data_size": 63488 00:20:52.513 } 00:20:52.513 ] 00:20:52.513 }' 00:20:52.513 11:31:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.513 11:31:10 -- common/autotest_common.sh@10 -- # set +x 00:20:52.772 11:31:10 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:20:52.772 11:31:10 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:53.031 [2024-11-26 11:31:11.163322] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.031 [2024-11-26 11:31:11.163380] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.031 [2024-11-26 11:31:11.163488] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.031 [2024-11-26 11:31:11.163579] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.031 [2024-11-26 11:31:11.163595] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:20:53.031 11:31:11 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.031 11:31:11 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:53.290 11:31:11 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:53.290 11:31:11 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:53.290 11:31:11 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:53.549 [2024-11-26 11:31:11.603475] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:53.549 [2024-11-26 11:31:11.603554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.549 [2024-11-26 11:31:11.603588] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:20:53.549 [2024-11-26 11:31:11.603601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.549 [2024-11-26 11:31:11.606149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.549 pt1 00:20:53.549 [2024-11-26 11:31:11.606368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:53.549 [2024-11-26 11:31:11.606469] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:53.549 [2024-11-26 11:31:11.606517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.549 11:31:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.808 11:31:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:53.808 "name": "raid_bdev1", 00:20:53.808 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:53.808 "strip_size_kb": 64, 00:20:53.808 "state": "configuring", 00:20:53.808 "raid_level": "raid5f", 00:20:53.808 "superblock": true, 00:20:53.808 "num_base_bdevs": 3, 00:20:53.808 "num_base_bdevs_discovered": 1, 00:20:53.808 "num_base_bdevs_operational": 3, 00:20:53.808 "base_bdevs_list": [ 00:20:53.808 { 00:20:53.808 "name": "pt1", 00:20:53.808 "uuid": "f895e3af-c559-5850-af10-4995b6fad751", 00:20:53.808 "is_configured": true, 00:20:53.808 "data_offset": 2048, 00:20:53.808 "data_size": 63488 00:20:53.808 }, 00:20:53.808 { 00:20:53.808 "name": null, 00:20:53.808 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:53.808 "is_configured": false, 00:20:53.808 "data_offset": 2048, 00:20:53.808 "data_size": 63488 00:20:53.808 }, 00:20:53.808 { 00:20:53.808 "name": null, 00:20:53.808 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:53.808 "is_configured": false, 00:20:53.808 "data_offset": 2048, 00:20:53.808 "data_size": 63488 00:20:53.808 } 00:20:53.808 ] 00:20:53.808 }' 00:20:53.808 11:31:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:53.808 11:31:11 -- common/autotest_common.sh@10 -- # set +x 00:20:54.067 11:31:12 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:54.067 11:31:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:54.067 11:31:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:20:54.326 11:31:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:54.326 11:31:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:54.326 11:31:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@489 -- # i=2 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:54.586 [2024-11-26 11:31:12.775772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:54.586 [2024-11-26 11:31:12.775853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.586 [2024-11-26 11:31:12.775884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:20:54.586 [2024-11-26 11:31:12.775929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.586 [2024-11-26 11:31:12.776414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.586 [2024-11-26 11:31:12.776444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:54.586 [2024-11-26 11:31:12.776536] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:54.586 [2024-11-26 11:31:12.776566] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:54.586 [2024-11-26 11:31:12.776589] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:54.586 [2024-11-26 11:31:12.776613] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:20:54.586 [2024-11-26 11:31:12.776697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:54.586 pt3 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.586 11:31:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.846 11:31:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.846 "name": "raid_bdev1", 00:20:54.846 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:54.846 "strip_size_kb": 64, 00:20:54.846 "state": "configuring", 00:20:54.846 "raid_level": "raid5f", 00:20:54.846 "superblock": true, 00:20:54.846 "num_base_bdevs": 3, 00:20:54.846 
"num_base_bdevs_discovered": 1, 00:20:54.846 "num_base_bdevs_operational": 2, 00:20:54.846 "base_bdevs_list": [ 00:20:54.846 { 00:20:54.846 "name": null, 00:20:54.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.846 "is_configured": false, 00:20:54.846 "data_offset": 2048, 00:20:54.846 "data_size": 63488 00:20:54.846 }, 00:20:54.846 { 00:20:54.846 "name": null, 00:20:54.846 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:54.846 "is_configured": false, 00:20:54.846 "data_offset": 2048, 00:20:54.846 "data_size": 63488 00:20:54.846 }, 00:20:54.846 { 00:20:54.846 "name": "pt3", 00:20:54.846 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:54.846 "is_configured": true, 00:20:54.846 "data_offset": 2048, 00:20:54.846 "data_size": 63488 00:20:54.846 } 00:20:54.846 ] 00:20:54.846 }' 00:20:54.846 11:31:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.846 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:20:55.105 11:31:13 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:55.105 11:31:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:55.105 11:31:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:55.365 [2024-11-26 11:31:13.532052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:55.365 [2024-11-26 11:31:13.532164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.365 [2024-11-26 11:31:13.532196] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:20:55.365 [2024-11-26 11:31:13.532227] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.365 [2024-11-26 11:31:13.532776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.365 [2024-11-26 11:31:13.532810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:55.365 [2024-11-26 11:31:13.532919] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:55.365 [2024-11-26 11:31:13.532980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:55.365 [2024-11-26 11:31:13.533133] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:20:55.365 [2024-11-26 11:31:13.533169] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:55.365 [2024-11-26 11:31:13.533255] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:20:55.365 pt2 00:20:55.365 [2024-11-26 11:31:13.534157] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:20:55.365 [2024-11-26 11:31:13.534180] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:20:55.365 [2024-11-26 11:31:13.534405] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 
00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.365 11:31:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.625 11:31:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.625 "name": "raid_bdev1", 00:20:55.625 "uuid": "dabafc64-7a50-4af6-801e-5a7c2d591ee0", 00:20:55.625 "strip_size_kb": 64, 00:20:55.625 "state": "online", 00:20:55.625 "raid_level": "raid5f", 00:20:55.625 "superblock": true, 00:20:55.625 "num_base_bdevs": 3, 00:20:55.625 "num_base_bdevs_discovered": 2, 00:20:55.625 "num_base_bdevs_operational": 2, 00:20:55.625 "base_bdevs_list": [ 00:20:55.625 { 00:20:55.625 "name": null, 00:20:55.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.625 "is_configured": false, 00:20:55.625 "data_offset": 2048, 00:20:55.625 "data_size": 63488 00:20:55.625 }, 00:20:55.625 { 00:20:55.625 "name": "pt2", 00:20:55.625 "uuid": "add0de13-fcde-5684-a246-95a4affbdc33", 00:20:55.625 "is_configured": true, 00:20:55.625 "data_offset": 2048, 00:20:55.625 "data_size": 63488 00:20:55.625 }, 00:20:55.625 { 00:20:55.625 "name": "pt3", 00:20:55.625 "uuid": "058bb150-337f-5779-b023-fb5551e87e61", 00:20:55.625 "is_configured": true, 00:20:55.625 "data_offset": 2048, 00:20:55.625 "data_size": 63488 00:20:55.625 } 00:20:55.625 ] 00:20:55.625 }' 00:20:55.625 11:31:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.625 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:20:55.885 11:31:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:55.885 11:31:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:56.144 [2024-11-26 11:31:14.308489] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.144 11:31:14 -- bdev/bdev_raid.sh@506 -- # '[' dabafc64-7a50-4af6-801e-5a7c2d591ee0 '!=' dabafc64-7a50-4af6-801e-5a7c2d591ee0 ']' 00:20:56.144 11:31:14 -- bdev/bdev_raid.sh@511 -- # killprocess 92820 00:20:56.144 11:31:14 -- common/autotest_common.sh@936 -- # '[' -z 92820 ']' 00:20:56.144 11:31:14 -- common/autotest_common.sh@940 -- # kill -0 92820 00:20:56.144 11:31:14 -- common/autotest_common.sh@941 -- # uname 00:20:56.144 11:31:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.144 11:31:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92820 00:20:56.144 killing process with pid 92820 00:20:56.144 11:31:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:56.144 11:31:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:56.144 11:31:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92820' 00:20:56.144 11:31:14 -- common/autotest_common.sh@955 -- # kill 92820 00:20:56.144 [2024-11-26 11:31:14.352589] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.144 11:31:14 -- common/autotest_common.sh@960 -- # wait 92820 00:20:56.144 [2024-11-26 11:31:14.352713] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.144 [2024-11-26 11:31:14.352790] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.144 [2024-11-26 11:31:14.352803] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:20:56.144 [2024-11-26 11:31:14.374477] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.404 ************************************ 00:20:56.404 END TEST raid5f_superblock_test 00:20:56.404 ************************************ 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:56.404 00:20:56.404 real 0m14.686s 00:20:56.404 user 0m26.127s 00:20:56.404 sys 0m2.373s 00:20:56.404 11:31:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:56.404 11:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:20:56.404 11:31:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:56.404 11:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:56.404 11:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:56.404 ************************************ 00:20:56.404 START TEST raid5f_rebuild_test 00:20:56.404 ************************************ 00:20:56.404 11:31:14 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 false false 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:56.404 11:31:14 -- 
bdev/bdev_raid.sh@544 -- # raid_pid=93351 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 93351 /var/tmp/spdk-raid.sock 00:20:56.404 11:31:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:56.404 11:31:14 -- common/autotest_common.sh@829 -- # '[' -z 93351 ']' 00:20:56.404 11:31:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:56.404 11:31:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.404 11:31:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:56.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:56.404 11:31:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.404 11:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:56.663 [2024-11-26 11:31:14.671001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:56.663 [2024-11-26 11:31:14.671331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93351 ] 00:20:56.663 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:56.663 Zero copy mechanism will not be used. 00:20:56.663 [2024-11-26 11:31:14.826727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.663 [2024-11-26 11:31:14.868406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.922 [2024-11-26 11:31:14.906615] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.488 11:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.488 11:31:15 -- common/autotest_common.sh@862 -- # return 0 00:20:57.488 11:31:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.488 11:31:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:57.488 11:31:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:57.746 BaseBdev1 00:20:57.746 11:31:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.746 11:31:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:57.746 11:31:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:58.004 BaseBdev2 00:20:58.004 11:31:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:58.005 11:31:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:58.005 11:31:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:58.263 BaseBdev3 00:20:58.263 11:31:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:58.521 spare_malloc 00:20:58.522 11:31:16 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:58.780 spare_delay 00:20:58.780 11:31:16 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:58.780 [2024-11-26 11:31:16.950020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:58.780 [2024-11-26 11:31:16.950121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.780 [2024-11-26 11:31:16.950155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:20:58.780 [2024-11-26 11:31:16.950170] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.780 [2024-11-26 11:31:16.952555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.780 [2024-11-26 11:31:16.952645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:58.780 spare 00:20:58.780 11:31:16 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:20:59.040 [2024-11-26 11:31:17.154097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.040 [2024-11-26 11:31:17.156890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:59.040 [2024-11-26 11:31:17.157183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:59.040 [2024-11-26 11:31:17.157371] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:20:59.040 [2024-11-26 11:31:17.157434] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:59.040 [2024-11-26 11:31:17.157733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:20:59.040 [2024-11-26 11:31:17.158617] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:20:59.040 [2024-11-26 11:31:17.158781] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:20:59.040 [2024-11-26 11:31:17.159208] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.040 11:31:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.299 11:31:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.299 "name": "raid_bdev1", 00:20:59.299 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:20:59.299 "strip_size_kb": 64, 00:20:59.299 "state": "online", 00:20:59.299 "raid_level": "raid5f", 00:20:59.299 "superblock": false, 00:20:59.299 "num_base_bdevs": 3, 
00:20:59.299 "num_base_bdevs_discovered": 3, 00:20:59.299 "num_base_bdevs_operational": 3, 00:20:59.299 "base_bdevs_list": [ 00:20:59.299 { 00:20:59.299 "name": "BaseBdev1", 00:20:59.299 "uuid": "e0a6bea6-11df-4c18-8909-99d6e00bfe01", 00:20:59.299 "is_configured": true, 00:20:59.299 "data_offset": 0, 00:20:59.299 "data_size": 65536 00:20:59.299 }, 00:20:59.299 { 00:20:59.299 "name": "BaseBdev2", 00:20:59.299 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:20:59.299 "is_configured": true, 00:20:59.299 "data_offset": 0, 00:20:59.299 "data_size": 65536 00:20:59.299 }, 00:20:59.299 { 00:20:59.299 "name": "BaseBdev3", 00:20:59.299 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:20:59.299 "is_configured": true, 00:20:59.299 "data_offset": 0, 00:20:59.299 "data_size": 65536 00:20:59.299 } 00:20:59.299 ] 00:20:59.299 }' 00:20:59.299 11:31:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.299 11:31:17 -- common/autotest_common.sh@10 -- # set +x 00:20:59.558 11:31:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:59.558 11:31:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:59.816 [2024-11-26 11:31:17.959545] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.816 11:31:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:20:59.816 11:31:17 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.816 11:31:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:00.075 11:31:18 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:00.075 11:31:18 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:00.075 11:31:18 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:00.075 11:31:18 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@12 -- # local i 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:00.075 11:31:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:00.333 [2024-11-26 11:31:18.523793] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:00.333 /dev/nbd0 00:21:00.333 11:31:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:00.333 11:31:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:00.333 11:31:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:00.333 11:31:18 -- common/autotest_common.sh@867 -- # local i 00:21:00.333 11:31:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:00.333 11:31:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:00.333 11:31:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:00.333 11:31:18 -- common/autotest_common.sh@871 -- # break 00:21:00.333 11:31:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:00.333 11:31:18 -- common/autotest_common.sh@882 -- # (( 
i <= 20 )) 00:21:00.333 11:31:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.333 1+0 records in 00:21:00.333 1+0 records out 00:21:00.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443918 s, 9.2 MB/s 00:21:00.333 11:31:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.592 11:31:18 -- common/autotest_common.sh@884 -- # size=4096 00:21:00.592 11:31:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.592 11:31:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:00.592 11:31:18 -- common/autotest_common.sh@887 -- # return 0 00:21:00.592 11:31:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.592 11:31:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:00.592 11:31:18 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:21:00.592 11:31:18 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:21:00.592 11:31:18 -- bdev/bdev_raid.sh@582 -- # echo 128 00:21:00.592 11:31:18 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:21:00.850 512+0 records in 00:21:00.850 512+0 records out 00:21:00.850 67108864 bytes (67 MB, 64 MiB) copied, 0.389894 s, 172 MB/s 00:21:00.850 11:31:18 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:00.850 11:31:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:00.850 11:31:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:00.850 11:31:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.850 11:31:18 -- bdev/nbd_common.sh@51 -- # local i 00:21:00.850 11:31:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.850 11:31:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:01.108 [2024-11-26 11:31:19.214473] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@41 -- # break 00:21:01.108 11:31:19 -- bdev/nbd_common.sh@45 -- # return 0 00:21:01.108 11:31:19 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:01.367 [2024-11-26 11:31:19.479764] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:01.367 11:31:19 
-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.367 11:31:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.625 11:31:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.625 "name": "raid_bdev1", 00:21:01.625 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:01.625 "strip_size_kb": 64, 00:21:01.625 "state": "online", 00:21:01.625 "raid_level": "raid5f", 00:21:01.625 "superblock": false, 00:21:01.625 "num_base_bdevs": 3, 00:21:01.625 "num_base_bdevs_discovered": 2, 00:21:01.625 "num_base_bdevs_operational": 2, 00:21:01.625 "base_bdevs_list": [ 00:21:01.625 { 00:21:01.625 "name": null, 00:21:01.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.625 "is_configured": false, 00:21:01.625 "data_offset": 0, 00:21:01.625 "data_size": 65536 00:21:01.625 }, 00:21:01.625 { 00:21:01.625 "name": "BaseBdev2", 00:21:01.625 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:01.625 "is_configured": true, 00:21:01.625 "data_offset": 0, 00:21:01.625 "data_size": 65536 00:21:01.625 }, 00:21:01.625 { 00:21:01.625 "name": "BaseBdev3", 00:21:01.625 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:01.625 "is_configured": true, 00:21:01.625 "data_offset": 0, 00:21:01.625 "data_size": 65536 00:21:01.625 } 00:21:01.625 ] 00:21:01.625 }' 00:21:01.625 11:31:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.625 11:31:19 -- common/autotest_common.sh@10 -- # set +x 00:21:01.884 11:31:20 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:02.144 [2024-11-26 11:31:20.255984] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:02.144 [2024-11-26 11:31:20.256028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.144 [2024-11-26 11:31:20.258689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002af30 00:21:02.144 [2024-11-26 11:31:20.260943] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:02.144 11:31:20 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.080 11:31:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.338 11:31:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.338 "name": "raid_bdev1", 00:21:03.338 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:03.338 "strip_size_kb": 64, 00:21:03.339 "state": "online", 00:21:03.339 "raid_level": "raid5f", 00:21:03.339 "superblock": false, 00:21:03.339 "num_base_bdevs": 3, 00:21:03.339 "num_base_bdevs_discovered": 3, 00:21:03.339 "num_base_bdevs_operational": 3, 00:21:03.339 "process": { 00:21:03.339 "type": "rebuild", 00:21:03.339 "target": 
"spare", 00:21:03.339 "progress": { 00:21:03.339 "blocks": 24576, 00:21:03.339 "percent": 18 00:21:03.339 } 00:21:03.339 }, 00:21:03.339 "base_bdevs_list": [ 00:21:03.339 { 00:21:03.339 "name": "spare", 00:21:03.339 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:03.339 "is_configured": true, 00:21:03.339 "data_offset": 0, 00:21:03.339 "data_size": 65536 00:21:03.339 }, 00:21:03.339 { 00:21:03.339 "name": "BaseBdev2", 00:21:03.339 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:03.339 "is_configured": true, 00:21:03.339 "data_offset": 0, 00:21:03.339 "data_size": 65536 00:21:03.339 }, 00:21:03.339 { 00:21:03.339 "name": "BaseBdev3", 00:21:03.339 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:03.339 "is_configured": true, 00:21:03.339 "data_offset": 0, 00:21:03.339 "data_size": 65536 00:21:03.339 } 00:21:03.339 ] 00:21:03.339 }' 00:21:03.339 11:31:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.339 11:31:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.339 11:31:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.339 11:31:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.339 11:31:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:03.597 [2024-11-26 11:31:21.798734] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.856 [2024-11-26 11:31:21.875443] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:03.856 [2024-11-26 11:31:21.875541] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.856 11:31:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.114 11:31:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.114 "name": "raid_bdev1", 00:21:04.114 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:04.114 "strip_size_kb": 64, 00:21:04.114 "state": "online", 00:21:04.114 "raid_level": "raid5f", 00:21:04.114 "superblock": false, 00:21:04.114 "num_base_bdevs": 3, 00:21:04.114 "num_base_bdevs_discovered": 2, 00:21:04.114 "num_base_bdevs_operational": 2, 00:21:04.114 "base_bdevs_list": [ 00:21:04.114 { 00:21:04.114 "name": null, 00:21:04.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.114 "is_configured": false, 00:21:04.114 "data_offset": 0, 00:21:04.114 "data_size": 65536 00:21:04.114 }, 00:21:04.114 { 00:21:04.114 "name": "BaseBdev2", 00:21:04.114 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 
00:21:04.114 "is_configured": true, 00:21:04.114 "data_offset": 0, 00:21:04.114 "data_size": 65536 00:21:04.114 }, 00:21:04.114 { 00:21:04.114 "name": "BaseBdev3", 00:21:04.114 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:04.114 "is_configured": true, 00:21:04.115 "data_offset": 0, 00:21:04.115 "data_size": 65536 00:21:04.115 } 00:21:04.115 ] 00:21:04.115 }' 00:21:04.115 11:31:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.115 11:31:22 -- common/autotest_common.sh@10 -- # set +x 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.373 11:31:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.632 11:31:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.632 "name": "raid_bdev1", 00:21:04.632 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:04.632 "strip_size_kb": 64, 00:21:04.632 "state": "online", 00:21:04.632 "raid_level": "raid5f", 00:21:04.632 "superblock": false, 00:21:04.632 "num_base_bdevs": 3, 00:21:04.632 "num_base_bdevs_discovered": 2, 00:21:04.632 "num_base_bdevs_operational": 2, 00:21:04.632 "base_bdevs_list": [ 00:21:04.632 { 00:21:04.632 "name": null, 00:21:04.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.632 "is_configured": false, 00:21:04.632 "data_offset": 0, 00:21:04.632 "data_size": 65536 00:21:04.632 }, 00:21:04.632 { 00:21:04.632 "name": "BaseBdev2", 00:21:04.632 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:04.632 "is_configured": true, 00:21:04.632 "data_offset": 0, 00:21:04.632 "data_size": 65536 00:21:04.632 }, 00:21:04.632 { 00:21:04.632 "name": "BaseBdev3", 00:21:04.632 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:04.632 "is_configured": true, 00:21:04.632 "data_offset": 0, 00:21:04.632 "data_size": 65536 00:21:04.632 } 00:21:04.632 ] 00:21:04.632 }' 00:21:04.632 11:31:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.632 11:31:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:04.632 11:31:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.632 11:31:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:04.632 11:31:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.892 [2024-11-26 11:31:22.996373] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:04.892 [2024-11-26 11:31:22.996684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.892 [2024-11-26 11:31:22.999292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:21:04.892 [2024-11-26 11:31:23.001628] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.892 11:31:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
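The trace from here through the next several state dumps is the test's rebuild-monitoring loop: verify_raid_bdev_process re-reads the array state, and the surrounding '(( SECONDS < timeout ))' guard retries once per second until the "process" object disappears from the bdev_raid_get_bdevs output. A minimal standalone sketch of that polling pattern, assuming an SPDK app is already listening on the socket; the rpc.py path, socket, bdev name and timeout below are placeholders copied from the trace:

#!/usr/bin/env bash
# Sketch: poll raid_bdev1 once per second until its rebuild completes,
# mirroring the verify_raid_bdev_process / sleep 1 loop in the trace.
# RPC_PY, SOCK, BDEV and the timeout are placeholder values.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
BDEV=raid_bdev1
timeout=505   # same guard value the test sets at bdev_raid.sh@657

while (( SECONDS < timeout )); do
    info=$("$RPC_PY" -s "$SOCK" bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$BDEV\")")
    # The "process" object only exists while a rebuild is in flight.
    ptype=$(jq -r '.process.type // "none"' <<<"$info")
    if [[ "$ptype" != rebuild ]]; then
        echo "no rebuild running (process type: $ptype)"
        break
    fi
    target=$(jq -r '.process.target // "none"' <<<"$info")
    percent=$(jq -r '.process.progress.percent' <<<"$info")
    echo "rebuilding onto $target: $percent%"
    sleep 1
done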
00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.827 11:31:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.085 "name": "raid_bdev1", 00:21:06.085 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:06.085 "strip_size_kb": 64, 00:21:06.085 "state": "online", 00:21:06.085 "raid_level": "raid5f", 00:21:06.085 "superblock": false, 00:21:06.085 "num_base_bdevs": 3, 00:21:06.085 "num_base_bdevs_discovered": 3, 00:21:06.085 "num_base_bdevs_operational": 3, 00:21:06.085 "process": { 00:21:06.085 "type": "rebuild", 00:21:06.085 "target": "spare", 00:21:06.085 "progress": { 00:21:06.085 "blocks": 24576, 00:21:06.085 "percent": 18 00:21:06.085 } 00:21:06.085 }, 00:21:06.085 "base_bdevs_list": [ 00:21:06.085 { 00:21:06.085 "name": "spare", 00:21:06.085 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:06.085 "is_configured": true, 00:21:06.085 "data_offset": 0, 00:21:06.085 "data_size": 65536 00:21:06.085 }, 00:21:06.085 { 00:21:06.085 "name": "BaseBdev2", 00:21:06.085 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:06.085 "is_configured": true, 00:21:06.085 "data_offset": 0, 00:21:06.085 "data_size": 65536 00:21:06.085 }, 00:21:06.085 { 00:21:06.085 "name": "BaseBdev3", 00:21:06.085 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:06.085 "is_configured": true, 00:21:06.085 "data_offset": 0, 00:21:06.085 "data_size": 65536 00:21:06.085 } 00:21:06.085 ] 00:21:06.085 }' 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@657 -- # local timeout=505 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.085 11:31:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.343 11:31:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.343 "name": "raid_bdev1", 00:21:06.343 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:06.343 "strip_size_kb": 64, 00:21:06.343 "state": "online", 00:21:06.343 "raid_level": "raid5f", 00:21:06.343 "superblock": false, 00:21:06.343 "num_base_bdevs": 3, 00:21:06.343 
"num_base_bdevs_discovered": 3, 00:21:06.343 "num_base_bdevs_operational": 3, 00:21:06.343 "process": { 00:21:06.343 "type": "rebuild", 00:21:06.343 "target": "spare", 00:21:06.343 "progress": { 00:21:06.343 "blocks": 30720, 00:21:06.343 "percent": 23 00:21:06.343 } 00:21:06.343 }, 00:21:06.343 "base_bdevs_list": [ 00:21:06.343 { 00:21:06.343 "name": "spare", 00:21:06.343 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:06.343 "is_configured": true, 00:21:06.343 "data_offset": 0, 00:21:06.343 "data_size": 65536 00:21:06.343 }, 00:21:06.343 { 00:21:06.343 "name": "BaseBdev2", 00:21:06.343 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:06.343 "is_configured": true, 00:21:06.343 "data_offset": 0, 00:21:06.343 "data_size": 65536 00:21:06.343 }, 00:21:06.343 { 00:21:06.343 "name": "BaseBdev3", 00:21:06.343 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:06.343 "is_configured": true, 00:21:06.343 "data_offset": 0, 00:21:06.343 "data_size": 65536 00:21:06.343 } 00:21:06.343 ] 00:21:06.343 }' 00:21:06.343 11:31:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.343 11:31:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.343 11:31:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.343 11:31:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.343 11:31:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:07.719 11:31:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:07.719 11:31:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.719 11:31:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:07.719 11:31:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.720 "name": "raid_bdev1", 00:21:07.720 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:07.720 "strip_size_kb": 64, 00:21:07.720 "state": "online", 00:21:07.720 "raid_level": "raid5f", 00:21:07.720 "superblock": false, 00:21:07.720 "num_base_bdevs": 3, 00:21:07.720 "num_base_bdevs_discovered": 3, 00:21:07.720 "num_base_bdevs_operational": 3, 00:21:07.720 "process": { 00:21:07.720 "type": "rebuild", 00:21:07.720 "target": "spare", 00:21:07.720 "progress": { 00:21:07.720 "blocks": 55296, 00:21:07.720 "percent": 42 00:21:07.720 } 00:21:07.720 }, 00:21:07.720 "base_bdevs_list": [ 00:21:07.720 { 00:21:07.720 "name": "spare", 00:21:07.720 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:07.720 "is_configured": true, 00:21:07.720 "data_offset": 0, 00:21:07.720 "data_size": 65536 00:21:07.720 }, 00:21:07.720 { 00:21:07.720 "name": "BaseBdev2", 00:21:07.720 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:07.720 "is_configured": true, 00:21:07.720 "data_offset": 0, 00:21:07.720 "data_size": 65536 00:21:07.720 }, 00:21:07.720 { 00:21:07.720 "name": "BaseBdev3", 00:21:07.720 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:07.720 "is_configured": true, 00:21:07.720 "data_offset": 0, 00:21:07.720 "data_size": 65536 00:21:07.720 } 00:21:07.720 ] 00:21:07.720 }' 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.type // "none"' 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.720 11:31:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.656 11:31:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.914 11:31:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.914 "name": "raid_bdev1", 00:21:08.914 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:08.914 "strip_size_kb": 64, 00:21:08.914 "state": "online", 00:21:08.914 "raid_level": "raid5f", 00:21:08.914 "superblock": false, 00:21:08.914 "num_base_bdevs": 3, 00:21:08.914 "num_base_bdevs_discovered": 3, 00:21:08.914 "num_base_bdevs_operational": 3, 00:21:08.914 "process": { 00:21:08.914 "type": "rebuild", 00:21:08.914 "target": "spare", 00:21:08.914 "progress": { 00:21:08.914 "blocks": 81920, 00:21:08.914 "percent": 62 00:21:08.914 } 00:21:08.914 }, 00:21:08.914 "base_bdevs_list": [ 00:21:08.914 { 00:21:08.914 "name": "spare", 00:21:08.914 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:08.914 "is_configured": true, 00:21:08.914 "data_offset": 0, 00:21:08.914 "data_size": 65536 00:21:08.914 }, 00:21:08.914 { 00:21:08.914 "name": "BaseBdev2", 00:21:08.914 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:08.914 "is_configured": true, 00:21:08.914 "data_offset": 0, 00:21:08.914 "data_size": 65536 00:21:08.914 }, 00:21:08.914 { 00:21:08.914 "name": "BaseBdev3", 00:21:08.914 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:08.914 "is_configured": true, 00:21:08.914 "data_offset": 0, 00:21:08.914 "data_size": 65536 00:21:08.914 } 00:21:08.914 ] 00:21:08.914 }' 00:21:08.914 11:31:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.914 11:31:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.914 11:31:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.914 11:31:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.914 11:31:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:10.290 11:31:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.290 "name": "raid_bdev1", 00:21:10.290 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:10.290 "strip_size_kb": 64, 00:21:10.290 "state": "online", 00:21:10.290 "raid_level": "raid5f", 00:21:10.290 "superblock": false, 00:21:10.290 "num_base_bdevs": 3, 00:21:10.290 "num_base_bdevs_discovered": 3, 00:21:10.290 "num_base_bdevs_operational": 3, 00:21:10.290 "process": { 00:21:10.290 "type": "rebuild", 00:21:10.290 "target": "spare", 00:21:10.290 "progress": { 00:21:10.290 "blocks": 106496, 00:21:10.290 "percent": 81 00:21:10.290 } 00:21:10.290 }, 00:21:10.290 "base_bdevs_list": [ 00:21:10.290 { 00:21:10.290 "name": "spare", 00:21:10.291 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:10.291 "is_configured": true, 00:21:10.291 "data_offset": 0, 00:21:10.291 "data_size": 65536 00:21:10.291 }, 00:21:10.291 { 00:21:10.291 "name": "BaseBdev2", 00:21:10.291 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:10.291 "is_configured": true, 00:21:10.291 "data_offset": 0, 00:21:10.291 "data_size": 65536 00:21:10.291 }, 00:21:10.291 { 00:21:10.291 "name": "BaseBdev3", 00:21:10.291 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:10.291 "is_configured": true, 00:21:10.291 "data_offset": 0, 00:21:10.291 "data_size": 65536 00:21:10.291 } 00:21:10.291 ] 00:21:10.291 }' 00:21:10.291 11:31:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.291 11:31:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.291 11:31:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.291 11:31:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.291 11:31:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.225 11:31:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.225 [2024-11-26 11:31:29.453892] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:11.225 [2024-11-26 11:31:29.453986] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:11.225 [2024-11-26 11:31:29.454039] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.484 "name": "raid_bdev1", 00:21:11.484 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:11.484 "strip_size_kb": 64, 00:21:11.484 "state": "online", 00:21:11.484 "raid_level": "raid5f", 00:21:11.484 "superblock": false, 00:21:11.484 "num_base_bdevs": 3, 00:21:11.484 "num_base_bdevs_discovered": 3, 00:21:11.484 "num_base_bdevs_operational": 3, 00:21:11.484 "base_bdevs_list": [ 00:21:11.484 { 00:21:11.484 "name": "spare", 00:21:11.484 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:11.484 "is_configured": true, 00:21:11.484 "data_offset": 0, 00:21:11.484 "data_size": 65536 
00:21:11.484 }, 00:21:11.484 { 00:21:11.484 "name": "BaseBdev2", 00:21:11.484 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:11.484 "is_configured": true, 00:21:11.484 "data_offset": 0, 00:21:11.484 "data_size": 65536 00:21:11.484 }, 00:21:11.484 { 00:21:11.484 "name": "BaseBdev3", 00:21:11.484 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:11.484 "is_configured": true, 00:21:11.484 "data_offset": 0, 00:21:11.484 "data_size": 65536 00:21:11.484 } 00:21:11.484 ] 00:21:11.484 }' 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@660 -- # break 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.484 11:31:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.742 "name": "raid_bdev1", 00:21:11.742 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:11.742 "strip_size_kb": 64, 00:21:11.742 "state": "online", 00:21:11.742 "raid_level": "raid5f", 00:21:11.742 "superblock": false, 00:21:11.742 "num_base_bdevs": 3, 00:21:11.742 "num_base_bdevs_discovered": 3, 00:21:11.742 "num_base_bdevs_operational": 3, 00:21:11.742 "base_bdevs_list": [ 00:21:11.742 { 00:21:11.742 "name": "spare", 00:21:11.742 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:11.742 "is_configured": true, 00:21:11.742 "data_offset": 0, 00:21:11.742 "data_size": 65536 00:21:11.742 }, 00:21:11.742 { 00:21:11.742 "name": "BaseBdev2", 00:21:11.742 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:11.742 "is_configured": true, 00:21:11.742 "data_offset": 0, 00:21:11.742 "data_size": 65536 00:21:11.742 }, 00:21:11.742 { 00:21:11.742 "name": "BaseBdev3", 00:21:11.742 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:11.742 "is_configured": true, 00:21:11.742 "data_offset": 0, 00:21:11.742 "data_size": 65536 00:21:11.742 } 00:21:11.742 ] 00:21:11.742 }' 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:11.742 11:31:29 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.742 11:31:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.002 11:31:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:12.002 "name": "raid_bdev1", 00:21:12.002 "uuid": "2254adb7-c826-44f1-8b32-f09b8dbf4714", 00:21:12.002 "strip_size_kb": 64, 00:21:12.002 "state": "online", 00:21:12.002 "raid_level": "raid5f", 00:21:12.002 "superblock": false, 00:21:12.002 "num_base_bdevs": 3, 00:21:12.002 "num_base_bdevs_discovered": 3, 00:21:12.002 "num_base_bdevs_operational": 3, 00:21:12.002 "base_bdevs_list": [ 00:21:12.002 { 00:21:12.002 "name": "spare", 00:21:12.002 "uuid": "8c3d32c6-7d74-54a6-a098-f035466a5db6", 00:21:12.002 "is_configured": true, 00:21:12.002 "data_offset": 0, 00:21:12.002 "data_size": 65536 00:21:12.002 }, 00:21:12.002 { 00:21:12.002 "name": "BaseBdev2", 00:21:12.002 "uuid": "28771383-45d1-462b-8f50-38bb96e30378", 00:21:12.002 "is_configured": true, 00:21:12.002 "data_offset": 0, 00:21:12.002 "data_size": 65536 00:21:12.002 }, 00:21:12.002 { 00:21:12.002 "name": "BaseBdev3", 00:21:12.002 "uuid": "178df20b-80d9-469a-bad2-297bf61477dc", 00:21:12.002 "is_configured": true, 00:21:12.002 "data_offset": 0, 00:21:12.002 "data_size": 65536 00:21:12.002 } 00:21:12.002 ] 00:21:12.002 }' 00:21:12.002 11:31:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:12.002 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.261 11:31:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:12.519 [2024-11-26 11:31:30.582024] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.519 [2024-11-26 11:31:30.582056] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:12.519 [2024-11-26 11:31:30.582134] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.519 [2024-11-26 11:31:30.582204] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:12.519 [2024-11-26 11:31:30.582234] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:21:12.519 11:31:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.519 11:31:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:12.778 11:31:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:12.778 11:31:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:12.778 11:31:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@12 
-- # local i 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:12.778 11:31:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:13.037 /dev/nbd0 00:21:13.037 11:31:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:13.037 11:31:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:13.037 11:31:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:13.037 11:31:31 -- common/autotest_common.sh@867 -- # local i 00:21:13.037 11:31:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:13.037 11:31:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:13.037 11:31:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:13.037 11:31:31 -- common/autotest_common.sh@871 -- # break 00:21:13.037 11:31:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:13.037 11:31:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:13.037 11:31:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.037 1+0 records in 00:21:13.037 1+0 records out 00:21:13.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245615 s, 16.7 MB/s 00:21:13.037 11:31:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.037 11:31:31 -- common/autotest_common.sh@884 -- # size=4096 00:21:13.037 11:31:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.037 11:31:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:13.037 11:31:31 -- common/autotest_common.sh@887 -- # return 0 00:21:13.037 11:31:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:13.037 11:31:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:13.037 11:31:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:13.295 /dev/nbd1 00:21:13.295 11:31:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:13.295 11:31:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:13.295 11:31:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:13.295 11:31:31 -- common/autotest_common.sh@867 -- # local i 00:21:13.295 11:31:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:13.295 11:31:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:13.295 11:31:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:13.295 11:31:31 -- common/autotest_common.sh@871 -- # break 00:21:13.295 11:31:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:13.295 11:31:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:13.296 11:31:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.296 1+0 records in 00:21:13.296 1+0 records out 00:21:13.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357072 s, 11.5 MB/s 00:21:13.296 11:31:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.296 11:31:31 -- common/autotest_common.sh@884 -- # size=4096 00:21:13.296 11:31:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.296 11:31:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:13.296 11:31:31 -- 
common/autotest_common.sh@887 -- # return 0 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:13.296 11:31:31 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:13.296 11:31:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@51 -- # local i 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:13.296 11:31:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@41 -- # break 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@45 -- # return 0 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:13.554 11:31:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@41 -- # break 00:21:13.812 11:31:31 -- bdev/nbd_common.sh@45 -- # return 0 00:21:13.812 11:31:31 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:13.812 11:31:31 -- bdev/bdev_raid.sh@709 -- # killprocess 93351 00:21:13.812 11:31:31 -- common/autotest_common.sh@936 -- # '[' -z 93351 ']' 00:21:13.813 11:31:31 -- common/autotest_common.sh@940 -- # kill -0 93351 00:21:13.813 11:31:31 -- common/autotest_common.sh@941 -- # uname 00:21:13.813 11:31:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.813 11:31:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93351 00:21:13.813 killing process with pid 93351 00:21:13.813 Received shutdown signal, test time was about 60.000000 seconds 00:21:13.813 00:21:13.813 Latency(us) 00:21:13.813 [2024-11-26T11:31:32.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.813 [2024-11-26T11:31:32.043Z] =================================================================================================================== 00:21:13.813 [2024-11-26T11:31:32.043Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.813 11:31:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:13.813 11:31:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.813 11:31:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93351' 
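The block above is the data-integrity check that closes the test: after the array is deleted, the replaced base bdev and the rebuilt spare are exported as NBD block devices and cmp verifies them byte for byte. Because the earlier dd pass filled the whole array from /dev/urandom, a clean cmp exit means every stripe was reconstructed. A hedged sketch of that verification step using the same RPCs as the trace (device nodes, bdev names and paths are placeholders):

#!/usr/bin/env bash
# Sketch: byte-compare the replaced base bdev against the rebuilt spare
# over NBD, as the trace above does after deleting the array. Assumes the
# SPDK app is still running and /dev/nbd0-1 are free.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

"$RPC_PY" -s "$SOCK" nbd_start_disk BaseBdev1 /dev/nbd0
"$RPC_PY" -s "$SOCK" nbd_start_disk spare /dev/nbd1

# cmp exits non-zero at the first differing byte. "-i 0" starts at offset
# 0, which is valid here because this array was built without a superblock
# (data_offset is 0 in the state dumps above).
if cmp -i 0 /dev/nbd0 /dev/nbd1; then
    echo "rebuilt spare matches the base bdev it replaced"
else
    echo "rebuild data mismatch" >&2
fi

"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd0
"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd1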
00:21:13.813 11:31:31 -- common/autotest_common.sh@955 -- # kill 93351 00:21:13.813 [2024-11-26 11:31:31.872382] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:13.813 11:31:31 -- common/autotest_common.sh@960 -- # wait 93351 00:21:13.813 [2024-11-26 11:31:31.897713] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:14.071 ************************************ 00:21:14.071 END TEST raid5f_rebuild_test 00:21:14.071 ************************************ 00:21:14.071 00:21:14.071 real 0m17.462s 00:21:14.071 user 0m25.249s 00:21:14.071 sys 0m2.452s 00:21:14.071 11:31:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:14.071 11:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:21:14.071 11:31:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:14.071 11:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.071 11:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:14.071 ************************************ 00:21:14.071 START TEST raid5f_rebuild_test_sb 00:21:14.071 ************************************ 00:21:14.071 11:31:32 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@544 -- # raid_pid=93832 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@545 -- # waitforlisten 93832 
/var/tmp/spdk-raid.sock 00:21:14.071 11:31:32 -- common/autotest_common.sh@829 -- # '[' -z 93832 ']' 00:21:14.071 11:31:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:14.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:14.071 11:31:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.071 11:31:32 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:14.071 11:31:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:14.071 11:31:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.071 11:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:14.071 [2024-11-26 11:31:32.193705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:14.071 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:14.071 Zero copy mechanism will not be used. 00:21:14.071 [2024-11-26 11:31:32.194028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93832 ] 00:21:14.330 [2024-11-26 11:31:32.357577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.330 [2024-11-26 11:31:32.389330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.330 [2024-11-26 11:31:32.419670] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.896 11:31:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.896 11:31:33 -- common/autotest_common.sh@862 -- # return 0 00:21:14.896 11:31:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:14.896 11:31:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:14.896 11:31:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:15.155 BaseBdev1_malloc 00:21:15.155 11:31:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:15.413 [2024-11-26 11:31:33.501985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:15.413 [2024-11-26 11:31:33.502074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.413 [2024-11-26 11:31:33.502105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:21:15.413 [2024-11-26 11:31:33.502125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.413 [2024-11-26 11:31:33.505028] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.413 [2024-11-26 11:31:33.505072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:15.413 BaseBdev1 00:21:15.413 11:31:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:15.413 11:31:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:15.413 11:31:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:15.671 BaseBdev2_malloc 00:21:15.671 
11:31:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:15.671 [2024-11-26 11:31:33.880225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:15.671 [2024-11-26 11:31:33.880306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.672 [2024-11-26 11:31:33.880341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:21:15.672 [2024-11-26 11:31:33.880358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.672 [2024-11-26 11:31:33.882670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.672 [2024-11-26 11:31:33.882726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:15.672 BaseBdev2 00:21:15.672 11:31:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:15.672 11:31:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:15.672 11:31:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:15.930 BaseBdev3_malloc 00:21:15.930 11:31:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:16.189 [2024-11-26 11:31:34.317887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:16.189 [2024-11-26 11:31:34.317984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.189 [2024-11-26 11:31:34.318016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:21:16.189 [2024-11-26 11:31:34.318033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.189 [2024-11-26 11:31:34.320518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.189 [2024-11-26 11:31:34.320568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:16.189 BaseBdev3 00:21:16.189 11:31:34 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:16.448 spare_malloc 00:21:16.448 11:31:34 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:16.707 spare_delay 00:21:16.707 11:31:34 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:16.965 [2024-11-26 11:31:35.032139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:16.965 [2024-11-26 11:31:35.032225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.965 [2024-11-26 11:31:35.032254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:21:16.965 [2024-11-26 11:31:35.032271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.965 [2024-11-26 11:31:35.034740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.965 [2024-11-26 11:31:35.034792] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:16.965 spare 
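At this point the test has assembled its full bdev stack: each base bdev is a 32 MB, 512-byte-block malloc bdev wrapped in a passthru bdev, and the spare additionally routes through a delay bdev (-w 100000 -n 100000, i.e. 100 ms average and p99 write latency) so that rebuild writes are slow enough to observe. A condensed sketch of the equivalent RPC sequence, using the same socket and bdev names as this run (the loop is illustrative; the harness issues these calls one bdev at a time):

    # Base bdevs: malloc -> passthru (BaseBdev1..BaseBdev3)
    for i in 1 2 3; do
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # Spare: malloc -> delay (reads unthrottled, writes delayed 100 ms) -> passthru
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare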
00:21:16.965 11:31:35 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:21:17.223 [2024-11-26 11:31:35.232268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.223 [2024-11-26 11:31:35.234418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.223 [2024-11-26 11:31:35.234516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.223 [2024-11-26 11:31:35.234767] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:21:17.223 [2024-11-26 11:31:35.234785] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:17.223 [2024-11-26 11:31:35.234919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:21:17.223 [2024-11-26 11:31:35.235579] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:21:17.223 [2024-11-26 11:31:35.235608] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:21:17.223 [2024-11-26 11:31:35.235741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.223 11:31:35 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:17.223 11:31:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:17.223 11:31:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:17.223 11:31:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:17.223 11:31:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:17.223 11:31:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:17.224 11:31:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.224 11:31:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.224 11:31:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.224 11:31:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.224 11:31:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.224 11:31:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.483 11:31:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.483 "name": "raid_bdev1", 00:21:17.483 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:17.483 "strip_size_kb": 64, 00:21:17.483 "state": "online", 00:21:17.483 "raid_level": "raid5f", 00:21:17.483 "superblock": true, 00:21:17.483 "num_base_bdevs": 3, 00:21:17.483 "num_base_bdevs_discovered": 3, 00:21:17.483 "num_base_bdevs_operational": 3, 00:21:17.483 "base_bdevs_list": [ 00:21:17.483 { 00:21:17.483 "name": "BaseBdev1", 00:21:17.483 "uuid": "1eb3f51b-a823-5752-835a-3205fd934bf7", 00:21:17.483 "is_configured": true, 00:21:17.483 "data_offset": 2048, 00:21:17.483 "data_size": 63488 00:21:17.483 }, 00:21:17.483 { 00:21:17.483 "name": "BaseBdev2", 00:21:17.483 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:17.483 "is_configured": true, 00:21:17.483 "data_offset": 2048, 00:21:17.483 "data_size": 63488 00:21:17.483 }, 00:21:17.483 { 00:21:17.483 "name": "BaseBdev3", 00:21:17.483 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:17.483 "is_configured": true, 00:21:17.483 "data_offset": 2048, 00:21:17.483 "data_size": 63488 00:21:17.483 } 
00:21:17.483 ] 00:21:17.483 }' 00:21:17.483 11:31:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.483 11:31:35 -- common/autotest_common.sh@10 -- # set +x 00:21:17.741 11:31:35 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:17.741 11:31:35 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:17.741 [2024-11-26 11:31:35.977340] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.000 11:31:35 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:21:18.000 11:31:35 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:18.000 11:31:35 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@12 -- # local i 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:18.258 [2024-11-26 11:31:36.413288] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:18.258 /dev/nbd0 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:18.258 11:31:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:18.258 11:31:36 -- common/autotest_common.sh@867 -- # local i 00:21:18.258 11:31:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:18.258 11:31:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:18.258 11:31:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:18.258 11:31:36 -- common/autotest_common.sh@871 -- # break 00:21:18.258 11:31:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:18.258 11:31:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:18.258 11:31:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.258 1+0 records in 00:21:18.258 1+0 records out 00:21:18.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027826 s, 14.7 MB/s 00:21:18.258 11:31:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.258 11:31:36 -- common/autotest_common.sh@884 -- # size=4096 00:21:18.258 11:31:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.258 11:31:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:18.258 11:31:36 -- common/autotest_common.sh@887 -- # return 0 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@14 -- 
# (( i++ )) 00:21:18.258 11:31:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@582 -- # echo 128 00:21:18.258 11:31:36 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:21:18.824 496+0 records in 00:21:18.824 496+0 records out 00:21:18.824 65011712 bytes (65 MB, 62 MiB) copied, 0.370338 s, 176 MB/s 00:21:18.824 11:31:36 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:18.824 11:31:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:18.824 11:31:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:18.824 11:31:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:18.824 11:31:36 -- bdev/nbd_common.sh@51 -- # local i 00:21:18.824 11:31:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.824 11:31:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:18.824 [2024-11-26 11:31:37.041318] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@41 -- # break 00:21:18.824 11:31:37 -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.824 11:31:37 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:19.083 [2024-11-26 11:31:37.220034] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.083 11:31:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.342 11:31:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.342 "name": "raid_bdev1", 00:21:19.342 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:19.342 "strip_size_kb": 64, 00:21:19.342 "state": "online", 00:21:19.342 "raid_level": "raid5f", 00:21:19.342 "superblock": true, 00:21:19.342 "num_base_bdevs": 3, 00:21:19.342 "num_base_bdevs_discovered": 2, 00:21:19.342 "num_base_bdevs_operational": 2, 
00:21:19.342 "base_bdevs_list": [ 00:21:19.342 { 00:21:19.342 "name": null, 00:21:19.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.342 "is_configured": false, 00:21:19.342 "data_offset": 2048, 00:21:19.342 "data_size": 63488 00:21:19.342 }, 00:21:19.342 { 00:21:19.342 "name": "BaseBdev2", 00:21:19.342 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:19.342 "is_configured": true, 00:21:19.342 "data_offset": 2048, 00:21:19.342 "data_size": 63488 00:21:19.342 }, 00:21:19.342 { 00:21:19.342 "name": "BaseBdev3", 00:21:19.342 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:19.342 "is_configured": true, 00:21:19.342 "data_offset": 2048, 00:21:19.342 "data_size": 63488 00:21:19.342 } 00:21:19.342 ] 00:21:19.342 }' 00:21:19.342 11:31:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.342 11:31:37 -- common/autotest_common.sh@10 -- # set +x 00:21:19.601 11:31:37 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:19.860 [2024-11-26 11:31:37.988297] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:19.860 [2024-11-26 11:31:37.988383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:19.860 [2024-11-26 11:31:37.991206] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028830 00:21:19.860 [2024-11-26 11:31:37.993648] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.860 11:31:38 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.796 11:31:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.057 11:31:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:21.057 "name": "raid_bdev1", 00:21:21.057 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:21.057 "strip_size_kb": 64, 00:21:21.057 "state": "online", 00:21:21.057 "raid_level": "raid5f", 00:21:21.057 "superblock": true, 00:21:21.057 "num_base_bdevs": 3, 00:21:21.057 "num_base_bdevs_discovered": 3, 00:21:21.057 "num_base_bdevs_operational": 3, 00:21:21.057 "process": { 00:21:21.057 "type": "rebuild", 00:21:21.057 "target": "spare", 00:21:21.057 "progress": { 00:21:21.057 "blocks": 24576, 00:21:21.057 "percent": 19 00:21:21.057 } 00:21:21.057 }, 00:21:21.057 "base_bdevs_list": [ 00:21:21.057 { 00:21:21.057 "name": "spare", 00:21:21.057 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:21.057 "is_configured": true, 00:21:21.057 "data_offset": 2048, 00:21:21.057 "data_size": 63488 00:21:21.057 }, 00:21:21.057 { 00:21:21.057 "name": "BaseBdev2", 00:21:21.057 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:21.057 "is_configured": true, 00:21:21.057 "data_offset": 2048, 00:21:21.057 "data_size": 63488 00:21:21.057 }, 00:21:21.057 { 00:21:21.057 "name": "BaseBdev3", 00:21:21.057 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:21.057 "is_configured": true, 00:21:21.057 "data_offset": 
2048, 00:21:21.057 "data_size": 63488 00:21:21.057 } 00:21:21.057 ] 00:21:21.057 }' 00:21:21.057 11:31:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.057 11:31:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.057 11:31:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.057 11:31:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.057 11:31:39 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:21.316 [2024-11-26 11:31:39.495081] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.316 [2024-11-26 11:31:39.506481] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:21.316 [2024-11-26 11:31:39.506559] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.316 11:31:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.575 11:31:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.575 "name": "raid_bdev1", 00:21:21.575 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:21.575 "strip_size_kb": 64, 00:21:21.575 "state": "online", 00:21:21.575 "raid_level": "raid5f", 00:21:21.575 "superblock": true, 00:21:21.575 "num_base_bdevs": 3, 00:21:21.575 "num_base_bdevs_discovered": 2, 00:21:21.575 "num_base_bdevs_operational": 2, 00:21:21.575 "base_bdevs_list": [ 00:21:21.575 { 00:21:21.575 "name": null, 00:21:21.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.575 "is_configured": false, 00:21:21.575 "data_offset": 2048, 00:21:21.575 "data_size": 63488 00:21:21.575 }, 00:21:21.575 { 00:21:21.575 "name": "BaseBdev2", 00:21:21.575 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:21.575 "is_configured": true, 00:21:21.575 "data_offset": 2048, 00:21:21.575 "data_size": 63488 00:21:21.575 }, 00:21:21.575 { 00:21:21.575 "name": "BaseBdev3", 00:21:21.575 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:21.575 "is_configured": true, 00:21:21.575 "data_offset": 2048, 00:21:21.575 "data_size": 63488 00:21:21.575 } 00:21:21.575 ] 00:21:21.575 }' 00:21:21.575 11:31:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.575 11:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
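Throughout this test, array state is observed purely through the RPC layer: bdev_raid_get_bdevs returns a JSON array, jq selects the raid_bdev1 entry, and the '.process.type // "none"' and '.process.target // "none"' fallbacks distinguish an idle array from one with an active rebuild. A minimal polling loop in the same style (the while/sleep structure is illustrative; the harness instead re-checks inside a SECONDS-based timeout loop as seen in the surrounding records):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Wait until raid_bdev1 reports no active background process
    while :; do
        info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [ "$(jq -r '.process.type // "none"' <<< "$info")" = none ] && break
        sleep 1
    done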
00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.835 11:31:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.094 11:31:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.094 "name": "raid_bdev1", 00:21:22.094 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:22.094 "strip_size_kb": 64, 00:21:22.094 "state": "online", 00:21:22.094 "raid_level": "raid5f", 00:21:22.094 "superblock": true, 00:21:22.094 "num_base_bdevs": 3, 00:21:22.094 "num_base_bdevs_discovered": 2, 00:21:22.094 "num_base_bdevs_operational": 2, 00:21:22.094 "base_bdevs_list": [ 00:21:22.094 { 00:21:22.094 "name": null, 00:21:22.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.094 "is_configured": false, 00:21:22.094 "data_offset": 2048, 00:21:22.094 "data_size": 63488 00:21:22.094 }, 00:21:22.094 { 00:21:22.094 "name": "BaseBdev2", 00:21:22.094 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:22.094 "is_configured": true, 00:21:22.094 "data_offset": 2048, 00:21:22.094 "data_size": 63488 00:21:22.094 }, 00:21:22.094 { 00:21:22.094 "name": "BaseBdev3", 00:21:22.094 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:22.094 "is_configured": true, 00:21:22.094 "data_offset": 2048, 00:21:22.094 "data_size": 63488 00:21:22.094 } 00:21:22.094 ] 00:21:22.094 }' 00:21:22.094 11:31:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.094 11:31:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:22.094 11:31:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.094 11:31:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:22.094 11:31:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.370 [2024-11-26 11:31:40.438873] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:22.370 [2024-11-26 11:31:40.438952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.370 [2024-11-26 11:31:40.441555] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028900 00:21:22.370 [2024-11-26 11:31:40.443874] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.370 11:31:40 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.363 11:31:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.622 "name": "raid_bdev1", 00:21:23.622 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:23.622 "strip_size_kb": 64, 00:21:23.622 "state": "online", 00:21:23.622 "raid_level": "raid5f", 00:21:23.622 "superblock": 
true, 00:21:23.622 "num_base_bdevs": 3, 00:21:23.622 "num_base_bdevs_discovered": 3, 00:21:23.622 "num_base_bdevs_operational": 3, 00:21:23.622 "process": { 00:21:23.622 "type": "rebuild", 00:21:23.622 "target": "spare", 00:21:23.622 "progress": { 00:21:23.622 "blocks": 24576, 00:21:23.622 "percent": 19 00:21:23.622 } 00:21:23.622 }, 00:21:23.622 "base_bdevs_list": [ 00:21:23.622 { 00:21:23.622 "name": "spare", 00:21:23.622 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:23.622 "is_configured": true, 00:21:23.622 "data_offset": 2048, 00:21:23.622 "data_size": 63488 00:21:23.622 }, 00:21:23.622 { 00:21:23.622 "name": "BaseBdev2", 00:21:23.622 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:23.622 "is_configured": true, 00:21:23.622 "data_offset": 2048, 00:21:23.622 "data_size": 63488 00:21:23.622 }, 00:21:23.622 { 00:21:23.622 "name": "BaseBdev3", 00:21:23.622 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:23.622 "is_configured": true, 00:21:23.622 "data_offset": 2048, 00:21:23.622 "data_size": 63488 00:21:23.622 } 00:21:23.622 ] 00:21:23.622 }' 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:23.622 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@657 -- # local timeout=522 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.622 11:31:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.890 11:31:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.890 "name": "raid_bdev1", 00:21:23.890 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:23.890 "strip_size_kb": 64, 00:21:23.890 "state": "online", 00:21:23.890 "raid_level": "raid5f", 00:21:23.890 "superblock": true, 00:21:23.890 "num_base_bdevs": 3, 00:21:23.890 "num_base_bdevs_discovered": 3, 00:21:23.890 "num_base_bdevs_operational": 3, 00:21:23.890 "process": { 00:21:23.890 "type": "rebuild", 00:21:23.890 "target": "spare", 00:21:23.890 "progress": { 00:21:23.890 "blocks": 28672, 00:21:23.890 "percent": 22 00:21:23.890 } 00:21:23.890 }, 00:21:23.890 "base_bdevs_list": [ 00:21:23.890 { 00:21:23.890 "name": "spare", 00:21:23.890 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:23.890 "is_configured": true, 00:21:23.890 "data_offset": 2048, 00:21:23.890 "data_size": 63488 00:21:23.890 }, 00:21:23.890 { 00:21:23.890 "name": 
"BaseBdev2", 00:21:23.890 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:23.890 "is_configured": true, 00:21:23.890 "data_offset": 2048, 00:21:23.890 "data_size": 63488 00:21:23.890 }, 00:21:23.890 { 00:21:23.890 "name": "BaseBdev3", 00:21:23.890 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:23.890 "is_configured": true, 00:21:23.890 "data_offset": 2048, 00:21:23.890 "data_size": 63488 00:21:23.890 } 00:21:23.890 ] 00:21:23.890 }' 00:21:23.890 11:31:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.890 11:31:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.890 11:31:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:23.890 11:31:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.890 11:31:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.830 11:31:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.088 11:31:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.088 "name": "raid_bdev1", 00:21:25.088 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:25.088 "strip_size_kb": 64, 00:21:25.088 "state": "online", 00:21:25.088 "raid_level": "raid5f", 00:21:25.088 "superblock": true, 00:21:25.088 "num_base_bdevs": 3, 00:21:25.088 "num_base_bdevs_discovered": 3, 00:21:25.088 "num_base_bdevs_operational": 3, 00:21:25.088 "process": { 00:21:25.088 "type": "rebuild", 00:21:25.088 "target": "spare", 00:21:25.088 "progress": { 00:21:25.088 "blocks": 55296, 00:21:25.088 "percent": 43 00:21:25.088 } 00:21:25.088 }, 00:21:25.088 "base_bdevs_list": [ 00:21:25.088 { 00:21:25.088 "name": "spare", 00:21:25.088 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:25.088 "is_configured": true, 00:21:25.088 "data_offset": 2048, 00:21:25.088 "data_size": 63488 00:21:25.088 }, 00:21:25.088 { 00:21:25.088 "name": "BaseBdev2", 00:21:25.088 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:25.088 "is_configured": true, 00:21:25.088 "data_offset": 2048, 00:21:25.088 "data_size": 63488 00:21:25.088 }, 00:21:25.088 { 00:21:25.088 "name": "BaseBdev3", 00:21:25.088 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:25.088 "is_configured": true, 00:21:25.088 "data_offset": 2048, 00:21:25.088 "data_size": 63488 00:21:25.088 } 00:21:25.088 ] 00:21:25.088 }' 00:21:25.088 11:31:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.088 11:31:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.089 11:31:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.089 11:31:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.089 11:31:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:26.023 11:31:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:26.023 11:31:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.023 11:31:44 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.023 11:31:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.023 11:31:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.023 11:31:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.023 11:31:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.282 11:31:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.282 11:31:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.282 "name": "raid_bdev1", 00:21:26.282 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:26.282 "strip_size_kb": 64, 00:21:26.282 "state": "online", 00:21:26.282 "raid_level": "raid5f", 00:21:26.282 "superblock": true, 00:21:26.282 "num_base_bdevs": 3, 00:21:26.282 "num_base_bdevs_discovered": 3, 00:21:26.282 "num_base_bdevs_operational": 3, 00:21:26.282 "process": { 00:21:26.282 "type": "rebuild", 00:21:26.282 "target": "spare", 00:21:26.282 "progress": { 00:21:26.282 "blocks": 79872, 00:21:26.282 "percent": 62 00:21:26.282 } 00:21:26.282 }, 00:21:26.282 "base_bdevs_list": [ 00:21:26.282 { 00:21:26.282 "name": "spare", 00:21:26.282 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:26.282 "is_configured": true, 00:21:26.282 "data_offset": 2048, 00:21:26.282 "data_size": 63488 00:21:26.282 }, 00:21:26.283 { 00:21:26.283 "name": "BaseBdev2", 00:21:26.283 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:26.283 "is_configured": true, 00:21:26.283 "data_offset": 2048, 00:21:26.283 "data_size": 63488 00:21:26.283 }, 00:21:26.283 { 00:21:26.283 "name": "BaseBdev3", 00:21:26.283 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:26.283 "is_configured": true, 00:21:26.283 "data_offset": 2048, 00:21:26.283 "data_size": 63488 00:21:26.283 } 00:21:26.283 ] 00:21:26.283 }' 00:21:26.283 11:31:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.283 11:31:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.283 11:31:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.283 11:31:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.283 11:31:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.662 "name": "raid_bdev1", 00:21:27.662 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:27.662 "strip_size_kb": 64, 00:21:27.662 "state": "online", 00:21:27.662 "raid_level": "raid5f", 00:21:27.662 "superblock": true, 00:21:27.662 "num_base_bdevs": 3, 00:21:27.662 "num_base_bdevs_discovered": 3, 00:21:27.662 "num_base_bdevs_operational": 3, 00:21:27.662 "process": { 00:21:27.662 "type": "rebuild", 00:21:27.662 "target": "spare", 00:21:27.662 
"progress": { 00:21:27.662 "blocks": 106496, 00:21:27.662 "percent": 83 00:21:27.662 } 00:21:27.662 }, 00:21:27.662 "base_bdevs_list": [ 00:21:27.662 { 00:21:27.662 "name": "spare", 00:21:27.662 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:27.662 "is_configured": true, 00:21:27.662 "data_offset": 2048, 00:21:27.662 "data_size": 63488 00:21:27.662 }, 00:21:27.662 { 00:21:27.662 "name": "BaseBdev2", 00:21:27.662 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:27.662 "is_configured": true, 00:21:27.662 "data_offset": 2048, 00:21:27.662 "data_size": 63488 00:21:27.662 }, 00:21:27.662 { 00:21:27.662 "name": "BaseBdev3", 00:21:27.662 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:27.662 "is_configured": true, 00:21:27.662 "data_offset": 2048, 00:21:27.662 "data_size": 63488 00:21:27.662 } 00:21:27.662 ] 00:21:27.662 }' 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.662 11:31:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:28.600 [2024-11-26 11:31:46.693568] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:28.600 [2024-11-26 11:31:46.693655] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:28.600 [2024-11-26 11:31:46.693778] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.600 11:31:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:28.859 "name": "raid_bdev1", 00:21:28.859 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:28.859 "strip_size_kb": 64, 00:21:28.859 "state": "online", 00:21:28.859 "raid_level": "raid5f", 00:21:28.859 "superblock": true, 00:21:28.859 "num_base_bdevs": 3, 00:21:28.859 "num_base_bdevs_discovered": 3, 00:21:28.859 "num_base_bdevs_operational": 3, 00:21:28.859 "base_bdevs_list": [ 00:21:28.859 { 00:21:28.859 "name": "spare", 00:21:28.859 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:28.859 "is_configured": true, 00:21:28.859 "data_offset": 2048, 00:21:28.859 "data_size": 63488 00:21:28.859 }, 00:21:28.859 { 00:21:28.859 "name": "BaseBdev2", 00:21:28.859 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:28.859 "is_configured": true, 00:21:28.859 "data_offset": 2048, 00:21:28.859 "data_size": 63488 00:21:28.859 }, 00:21:28.859 { 00:21:28.859 "name": "BaseBdev3", 00:21:28.859 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:28.859 "is_configured": true, 00:21:28.859 "data_offset": 2048, 00:21:28.859 "data_size": 63488 00:21:28.859 } 00:21:28.859 ] 00:21:28.859 }' 00:21:28.859 11:31:47 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@660 -- # break 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.859 11:31:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.118 11:31:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.118 "name": "raid_bdev1", 00:21:29.118 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:29.118 "strip_size_kb": 64, 00:21:29.118 "state": "online", 00:21:29.118 "raid_level": "raid5f", 00:21:29.118 "superblock": true, 00:21:29.119 "num_base_bdevs": 3, 00:21:29.119 "num_base_bdevs_discovered": 3, 00:21:29.119 "num_base_bdevs_operational": 3, 00:21:29.119 "base_bdevs_list": [ 00:21:29.119 { 00:21:29.119 "name": "spare", 00:21:29.119 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:29.119 "is_configured": true, 00:21:29.119 "data_offset": 2048, 00:21:29.119 "data_size": 63488 00:21:29.119 }, 00:21:29.119 { 00:21:29.119 "name": "BaseBdev2", 00:21:29.119 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:29.119 "is_configured": true, 00:21:29.119 "data_offset": 2048, 00:21:29.119 "data_size": 63488 00:21:29.119 }, 00:21:29.119 { 00:21:29.119 "name": "BaseBdev3", 00:21:29.119 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:29.119 "is_configured": true, 00:21:29.119 "data_offset": 2048, 00:21:29.119 "data_size": 63488 00:21:29.119 } 00:21:29.119 ] 00:21:29.119 }' 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.119 11:31:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.377 
11:31:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.377 "name": "raid_bdev1", 00:21:29.377 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:29.377 "strip_size_kb": 64, 00:21:29.378 "state": "online", 00:21:29.378 "raid_level": "raid5f", 00:21:29.378 "superblock": true, 00:21:29.378 "num_base_bdevs": 3, 00:21:29.378 "num_base_bdevs_discovered": 3, 00:21:29.378 "num_base_bdevs_operational": 3, 00:21:29.378 "base_bdevs_list": [ 00:21:29.378 { 00:21:29.378 "name": "spare", 00:21:29.378 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:29.378 "is_configured": true, 00:21:29.378 "data_offset": 2048, 00:21:29.378 "data_size": 63488 00:21:29.378 }, 00:21:29.378 { 00:21:29.378 "name": "BaseBdev2", 00:21:29.378 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:29.378 "is_configured": true, 00:21:29.378 "data_offset": 2048, 00:21:29.378 "data_size": 63488 00:21:29.378 }, 00:21:29.378 { 00:21:29.378 "name": "BaseBdev3", 00:21:29.378 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:29.378 "is_configured": true, 00:21:29.378 "data_offset": 2048, 00:21:29.378 "data_size": 63488 00:21:29.378 } 00:21:29.378 ] 00:21:29.378 }' 00:21:29.378 11:31:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.378 11:31:47 -- common/autotest_common.sh@10 -- # set +x 00:21:29.637 11:31:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:29.896 [2024-11-26 11:31:48.013719] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.896 [2024-11-26 11:31:48.013770] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.896 [2024-11-26 11:31:48.013863] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.896 [2024-11-26 11:31:48.013978] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.896 [2024-11-26 11:31:48.013998] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:21:29.896 11:31:48 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.896 11:31:48 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:30.154 11:31:48 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:30.155 11:31:48 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:30.155 11:31:48 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@12 -- # local i 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:30.155 11:31:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:30.413 /dev/nbd0 00:21:30.414 11:31:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:30.414 11:31:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:30.414 11:31:48 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:30.414 11:31:48 -- common/autotest_common.sh@867 -- # local i 00:21:30.414 11:31:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:30.414 11:31:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:30.414 11:31:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:30.414 11:31:48 -- common/autotest_common.sh@871 -- # break 00:21:30.414 11:31:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:30.414 11:31:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:30.414 11:31:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:30.414 1+0 records in 00:21:30.414 1+0 records out 00:21:30.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279683 s, 14.6 MB/s 00:21:30.414 11:31:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:30.414 11:31:48 -- common/autotest_common.sh@884 -- # size=4096 00:21:30.414 11:31:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:30.414 11:31:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:30.414 11:31:48 -- common/autotest_common.sh@887 -- # return 0 00:21:30.414 11:31:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:30.414 11:31:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:30.414 11:31:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:30.673 /dev/nbd1 00:21:30.673 11:31:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:30.673 11:31:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:30.673 11:31:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:30.673 11:31:48 -- common/autotest_common.sh@867 -- # local i 00:21:30.673 11:31:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:30.673 11:31:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:30.673 11:31:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:30.673 11:31:48 -- common/autotest_common.sh@871 -- # break 00:21:30.674 11:31:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:30.674 11:31:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:30.674 11:31:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:30.674 1+0 records in 00:21:30.674 1+0 records out 00:21:30.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316111 s, 13.0 MB/s 00:21:30.674 11:31:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:30.674 11:31:48 -- common/autotest_common.sh@884 -- # size=4096 00:21:30.674 11:31:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:30.674 11:31:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:30.674 11:31:48 -- common/autotest_common.sh@887 -- # return 0 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:30.674 11:31:48 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:30.674 11:31:48 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@51 -- # local i 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.674 11:31:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@41 -- # break 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.932 11:31:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:31.189 11:31:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@41 -- # break 00:21:31.190 11:31:49 -- bdev/nbd_common.sh@45 -- # return 0 00:21:31.190 11:31:49 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:31.190 11:31:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:31.190 11:31:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:31.190 11:31:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:31.447 11:31:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:31.706 [2024-11-26 11:31:49.821068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:31.707 [2024-11-26 11:31:49.821161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.707 [2024-11-26 11:31:49.821191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:21:31.707 [2024-11-26 11:31:49.821206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.707 [2024-11-26 11:31:49.823630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.707 [2024-11-26 11:31:49.823688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:31.707 [2024-11-26 11:31:49.823777] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:31.707 [2024-11-26 11:31:49.823864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:31.707 BaseBdev1 00:21:31.707 11:31:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:31.707 11:31:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:31.707 11:31:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:31.966 11:31:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:31.966 [2024-11-26 11:31:50.185132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:31.966 [2024-11-26 11:31:50.185215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.966 [2024-11-26 11:31:50.185244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:21:31.966 [2024-11-26 11:31:50.185259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.966 [2024-11-26 11:31:50.185708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.966 [2024-11-26 11:31:50.185749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:31.966 [2024-11-26 11:31:50.185819] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:31.966 [2024-11-26 11:31:50.185839] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:31.966 [2024-11-26 11:31:50.185857] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:31.966 [2024-11-26 11:31:50.185906] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state configuring 00:21:31.966 [2024-11-26 11:31:50.185979] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:31.966 BaseBdev2 00:21:31.966 11:31:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:31.966 11:31:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:31.966 11:31:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:32.225 11:31:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:32.485 [2024-11-26 11:31:50.597196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:32.485 [2024-11-26 11:31:50.597279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.485 [2024-11-26 11:31:50.597317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:21:32.485 [2024-11-26 11:31:50.597330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.485 [2024-11-26 11:31:50.597743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.485 [2024-11-26 11:31:50.597776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:32.485 [2024-11-26 11:31:50.597862] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:32.485 [2024-11-26 11:31:50.597905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.485 BaseBdev3 00:21:32.485 11:31:50 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:32.743 11:31:50 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 
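This phase exercises superblock-driven reassembly: each passthru bdev is deleted and re-created over the same malloc backing, and the examine path re-reads the on-disk raid5f superblock and re-claims the bdev, comparing superblock sequence numbers to discard a stale, half-assembled array (the "seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)" record above). The per-bdev cycle, sketched with the names from this run; the expected-log comment paraphrases the DEBUG records this log emits rather than quoting a guaranteed message:

    # Drop and re-register a base bdev; examine re-reads its raid superblock
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # expect: "raid superblock found on bdev BaseBdev1", then "bdev BaseBdev1 is claimed"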
00:21:32.743 [2024-11-26 11:31:50.953319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:32.743 [2024-11-26 11:31:50.953402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.743 [2024-11-26 11:31:50.953434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:21:32.743 [2024-11-26 11:31:50.953447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.743 [2024-11-26 11:31:50.953904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.743 [2024-11-26 11:31:50.953950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:32.743 [2024-11-26 11:31:50.954046] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:32.743 [2024-11-26 11:31:50.954095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.743 spare 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.003 11:31:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.003 [2024-11-26 11:31:51.054241] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000b480 00:21:33.003 [2024-11-26 11:31:51.054276] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:33.003 [2024-11-26 11:31:51.054434] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000046fb0 00:21:33.003 [2024-11-26 11:31:51.055225] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000b480 00:21:33.003 [2024-11-26 11:31:51.055272] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000b480 00:21:33.003 [2024-11-26 11:31:51.055443] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.003 11:31:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:33.003 "name": "raid_bdev1", 00:21:33.003 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:33.003 "strip_size_kb": 64, 00:21:33.003 "state": "online", 00:21:33.003 "raid_level": "raid5f", 00:21:33.003 "superblock": true, 00:21:33.003 "num_base_bdevs": 3, 00:21:33.003 "num_base_bdevs_discovered": 3, 00:21:33.003 "num_base_bdevs_operational": 3, 00:21:33.003 "base_bdevs_list": [ 00:21:33.003 { 00:21:33.003 "name": "spare", 00:21:33.003 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:33.003 "is_configured": true, 00:21:33.003 "data_offset": 2048, 00:21:33.003 "data_size": 63488 00:21:33.003 }, 00:21:33.003 { 00:21:33.003 "name": "BaseBdev2", 
00:21:33.003 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:33.003 "is_configured": true, 00:21:33.003 "data_offset": 2048, 00:21:33.003 "data_size": 63488 00:21:33.003 }, 00:21:33.003 { 00:21:33.003 "name": "BaseBdev3", 00:21:33.003 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:33.003 "is_configured": true, 00:21:33.003 "data_offset": 2048, 00:21:33.003 "data_size": 63488 00:21:33.003 } 00:21:33.003 ] 00:21:33.003 }' 00:21:33.003 11:31:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:33.003 11:31:51 -- common/autotest_common.sh@10 -- # set +x 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.262 11:31:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.521 "name": "raid_bdev1", 00:21:33.521 "uuid": "b8175715-fe2c-4dec-9ec3-6406b65854d9", 00:21:33.521 "strip_size_kb": 64, 00:21:33.521 "state": "online", 00:21:33.521 "raid_level": "raid5f", 00:21:33.521 "superblock": true, 00:21:33.521 "num_base_bdevs": 3, 00:21:33.521 "num_base_bdevs_discovered": 3, 00:21:33.521 "num_base_bdevs_operational": 3, 00:21:33.521 "base_bdevs_list": [ 00:21:33.521 { 00:21:33.521 "name": "spare", 00:21:33.521 "uuid": "aef7b87e-26a4-57ab-81a1-c1c57c2c14da", 00:21:33.521 "is_configured": true, 00:21:33.521 "data_offset": 2048, 00:21:33.521 "data_size": 63488 00:21:33.521 }, 00:21:33.521 { 00:21:33.521 "name": "BaseBdev2", 00:21:33.521 "uuid": "a342da4c-8c09-525c-a65c-f8777cf25c7d", 00:21:33.521 "is_configured": true, 00:21:33.521 "data_offset": 2048, 00:21:33.521 "data_size": 63488 00:21:33.521 }, 00:21:33.521 { 00:21:33.521 "name": "BaseBdev3", 00:21:33.521 "uuid": "b290f208-52b0-5b0d-ae55-1be4d093c524", 00:21:33.521 "is_configured": true, 00:21:33.521 "data_offset": 2048, 00:21:33.521 "data_size": 63488 00:21:33.521 } 00:21:33.521 ] 00:21:33.521 }' 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.521 11:31:51 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:33.781 11:31:51 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.781 11:31:51 -- bdev/bdev_raid.sh@709 -- # killprocess 93832 00:21:33.781 11:31:51 -- common/autotest_common.sh@936 -- # '[' -z 93832 ']' 00:21:33.781 11:31:51 -- common/autotest_common.sh@940 -- # kill -0 93832 00:21:33.781 11:31:51 -- common/autotest_common.sh@941 -- # uname 00:21:33.781 11:31:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.781 11:31:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93832 00:21:33.781 11:31:51 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:21:33.781 11:31:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:33.781 killing process with pid 93832 00:21:33.781 11:31:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93832' 00:21:33.781 Received shutdown signal, test time was about 60.000000 seconds 00:21:33.781 00:21:33.781 Latency(us) 00:21:33.781 [2024-11-26T11:31:52.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.781 [2024-11-26T11:31:52.011Z] =================================================================================================================== 00:21:33.781 [2024-11-26T11:31:52.011Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:33.781 11:31:51 -- common/autotest_common.sh@955 -- # kill 93832 00:21:33.781 [2024-11-26 11:31:51.901325] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.781 [2024-11-26 11:31:51.901421] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.781 [2024-11-26 11:31:51.901518] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.781 [2024-11-26 11:31:51.901541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b480 name raid_bdev1, state offline 00:21:33.781 11:31:51 -- common/autotest_common.sh@960 -- # wait 93832 00:21:33.781 [2024-11-26 11:31:51.924612] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.040 11:31:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:34.040 00:21:34.040 real 0m19.957s 00:21:34.041 user 0m29.913s 00:21:34.041 sys 0m2.881s 00:21:34.041 11:31:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:34.041 11:31:52 -- common/autotest_common.sh@10 -- # set +x 00:21:34.041 ************************************ 00:21:34.041 END TEST raid5f_rebuild_test_sb 00:21:34.041 ************************************ 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:21:34.041 11:31:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:34.041 11:31:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:34.041 11:31:52 -- common/autotest_common.sh@10 -- # set +x 00:21:34.041 ************************************ 00:21:34.041 START TEST raid5f_state_function_test 00:21:34.041 ************************************ 00:21:34.041 11:31:52 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:21:34.041 11:31:52 
-- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=94389 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 94389' 00:21:34.041 Process raid pid: 94389 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:34.041 11:31:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 94389 /var/tmp/spdk-raid.sock 00:21:34.041 11:31:52 -- common/autotest_common.sh@829 -- # '[' -z 94389 ']' 00:21:34.041 11:31:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:34.041 11:31:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:34.041 11:31:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:34.041 11:31:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.041 11:31:52 -- common/autotest_common.sh@10 -- # set +x 00:21:34.041 [2024-11-26 11:31:52.208950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:34.041 [2024-11-26 11:31:52.209114] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.301 [2024-11-26 11:31:52.374720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.301 [2024-11-26 11:31:52.409520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.301 [2024-11-26 11:31:52.440994] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.239 11:31:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.239 11:31:53 -- common/autotest_common.sh@862 -- # return 0 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:35.239 [2024-11-26 11:31:53.323686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:35.239 [2024-11-26 11:31:53.323754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:35.239 [2024-11-26 11:31:53.323777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:35.239 [2024-11-26 11:31:53.323790] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:35.239 [2024-11-26 11:31:53.323800] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:35.239 [2024-11-26 11:31:53.323811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:35.239 [2024-11-26 11:31:53.323823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:35.239 [2024-11-26 11:31:53.323832] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.239 11:31:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.498 11:31:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.498 "name": "Existed_Raid", 00:21:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.498 "strip_size_kb": 64, 00:21:35.498 "state": "configuring", 00:21:35.498 "raid_level": "raid5f", 00:21:35.498 "superblock": false, 00:21:35.498 "num_base_bdevs": 4, 00:21:35.498 "num_base_bdevs_discovered": 0, 00:21:35.498 "num_base_bdevs_operational": 4, 00:21:35.498 "base_bdevs_list": [ 00:21:35.498 { 00:21:35.498 
"name": "BaseBdev1", 00:21:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.498 "is_configured": false, 00:21:35.498 "data_offset": 0, 00:21:35.498 "data_size": 0 00:21:35.498 }, 00:21:35.498 { 00:21:35.498 "name": "BaseBdev2", 00:21:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.498 "is_configured": false, 00:21:35.498 "data_offset": 0, 00:21:35.498 "data_size": 0 00:21:35.498 }, 00:21:35.498 { 00:21:35.498 "name": "BaseBdev3", 00:21:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.498 "is_configured": false, 00:21:35.498 "data_offset": 0, 00:21:35.498 "data_size": 0 00:21:35.498 }, 00:21:35.498 { 00:21:35.498 "name": "BaseBdev4", 00:21:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.498 "is_configured": false, 00:21:35.498 "data_offset": 0, 00:21:35.498 "data_size": 0 00:21:35.498 } 00:21:35.498 ] 00:21:35.498 }' 00:21:35.498 11:31:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.498 11:31:53 -- common/autotest_common.sh@10 -- # set +x 00:21:35.757 11:31:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:36.017 [2024-11-26 11:31:54.055721] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:36.017 [2024-11-26 11:31:54.055767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:21:36.017 11:31:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:36.017 [2024-11-26 11:31:54.231785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:36.017 [2024-11-26 11:31:54.231863] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:36.017 [2024-11-26 11:31:54.231895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.017 [2024-11-26 11:31:54.231939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.017 [2024-11-26 11:31:54.231953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:36.017 [2024-11-26 11:31:54.231966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.017 [2024-11-26 11:31:54.231976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:36.017 [2024-11-26 11:31:54.231985] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.017 11:31:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:36.276 [2024-11-26 11:31:54.477679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.276 BaseBdev1 00:21:36.276 11:31:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:36.276 11:31:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:36.276 11:31:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:36.276 11:31:54 -- common/autotest_common.sh@899 -- # local i 00:21:36.276 11:31:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:36.276 11:31:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:36.276 11:31:54 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.535 11:31:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:36.795 [ 00:21:36.795 { 00:21:36.795 "name": "BaseBdev1", 00:21:36.795 "aliases": [ 00:21:36.795 "4d525a95-64a6-4cac-bc43-d645aba34076" 00:21:36.795 ], 00:21:36.795 "product_name": "Malloc disk", 00:21:36.795 "block_size": 512, 00:21:36.795 "num_blocks": 65536, 00:21:36.795 "uuid": "4d525a95-64a6-4cac-bc43-d645aba34076", 00:21:36.795 "assigned_rate_limits": { 00:21:36.795 "rw_ios_per_sec": 0, 00:21:36.795 "rw_mbytes_per_sec": 0, 00:21:36.795 "r_mbytes_per_sec": 0, 00:21:36.795 "w_mbytes_per_sec": 0 00:21:36.795 }, 00:21:36.795 "claimed": true, 00:21:36.795 "claim_type": "exclusive_write", 00:21:36.795 "zoned": false, 00:21:36.795 "supported_io_types": { 00:21:36.795 "read": true, 00:21:36.795 "write": true, 00:21:36.795 "unmap": true, 00:21:36.795 "write_zeroes": true, 00:21:36.795 "flush": true, 00:21:36.795 "reset": true, 00:21:36.795 "compare": false, 00:21:36.795 "compare_and_write": false, 00:21:36.795 "abort": true, 00:21:36.795 "nvme_admin": false, 00:21:36.795 "nvme_io": false 00:21:36.795 }, 00:21:36.795 "memory_domains": [ 00:21:36.795 { 00:21:36.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.795 "dma_device_type": 2 00:21:36.795 } 00:21:36.795 ], 00:21:36.795 "driver_specific": {} 00:21:36.795 } 00:21:36.795 ] 00:21:36.795 11:31:54 -- common/autotest_common.sh@905 -- # return 0 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.795 11:31:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.054 11:31:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:37.054 "name": "Existed_Raid", 00:21:37.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.054 "strip_size_kb": 64, 00:21:37.054 "state": "configuring", 00:21:37.054 "raid_level": "raid5f", 00:21:37.054 "superblock": false, 00:21:37.054 "num_base_bdevs": 4, 00:21:37.054 "num_base_bdevs_discovered": 1, 00:21:37.054 "num_base_bdevs_operational": 4, 00:21:37.054 "base_bdevs_list": [ 00:21:37.054 { 00:21:37.054 "name": "BaseBdev1", 00:21:37.054 "uuid": "4d525a95-64a6-4cac-bc43-d645aba34076", 00:21:37.054 "is_configured": true, 00:21:37.054 "data_offset": 0, 00:21:37.054 "data_size": 65536 00:21:37.054 }, 00:21:37.054 { 00:21:37.054 "name": "BaseBdev2", 00:21:37.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.054 "is_configured": false, 00:21:37.054 "data_offset": 0, 00:21:37.054 "data_size": 0 00:21:37.054 }, 
00:21:37.054 { 00:21:37.054 "name": "BaseBdev3", 00:21:37.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.054 "is_configured": false, 00:21:37.054 "data_offset": 0, 00:21:37.054 "data_size": 0 00:21:37.054 }, 00:21:37.054 { 00:21:37.054 "name": "BaseBdev4", 00:21:37.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.054 "is_configured": false, 00:21:37.054 "data_offset": 0, 00:21:37.054 "data_size": 0 00:21:37.054 } 00:21:37.054 ] 00:21:37.054 }' 00:21:37.054 11:31:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:37.054 11:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:37.313 11:31:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:37.313 [2024-11-26 11:31:55.513992] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.313 [2024-11-26 11:31:55.514054] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:37.313 11:31:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:37.313 11:31:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:37.571 [2024-11-26 11:31:55.762086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.571 [2024-11-26 11:31:55.764641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.571 [2024-11-26 11:31:55.764707] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.571 [2024-11-26 11:31:55.764757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.572 [2024-11-26 11:31:55.764770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.572 [2024-11-26 11:31:55.764794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.572 [2024-11-26 11:31:55.764804] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.572 11:31:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.831 11:31:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:37.831 "name": "Existed_Raid", 00:21:37.831 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.831 "strip_size_kb": 64, 00:21:37.831 "state": "configuring", 00:21:37.831 "raid_level": "raid5f", 00:21:37.831 "superblock": false, 00:21:37.831 "num_base_bdevs": 4, 00:21:37.831 "num_base_bdevs_discovered": 1, 00:21:37.831 "num_base_bdevs_operational": 4, 00:21:37.831 "base_bdevs_list": [ 00:21:37.831 { 00:21:37.831 "name": "BaseBdev1", 00:21:37.831 "uuid": "4d525a95-64a6-4cac-bc43-d645aba34076", 00:21:37.831 "is_configured": true, 00:21:37.831 "data_offset": 0, 00:21:37.831 "data_size": 65536 00:21:37.831 }, 00:21:37.831 { 00:21:37.831 "name": "BaseBdev2", 00:21:37.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.831 "is_configured": false, 00:21:37.831 "data_offset": 0, 00:21:37.831 "data_size": 0 00:21:37.831 }, 00:21:37.831 { 00:21:37.831 "name": "BaseBdev3", 00:21:37.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.831 "is_configured": false, 00:21:37.831 "data_offset": 0, 00:21:37.831 "data_size": 0 00:21:37.831 }, 00:21:37.831 { 00:21:37.831 "name": "BaseBdev4", 00:21:37.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.831 "is_configured": false, 00:21:37.831 "data_offset": 0, 00:21:37.831 "data_size": 0 00:21:37.831 } 00:21:37.831 ] 00:21:37.831 }' 00:21:37.831 11:31:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:37.831 11:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:38.091 11:31:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:38.350 BaseBdev2 00:21:38.350 [2024-11-26 11:31:56.535221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.350 11:31:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:38.350 11:31:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:38.350 11:31:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:38.350 11:31:56 -- common/autotest_common.sh@899 -- # local i 00:21:38.350 11:31:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:38.350 11:31:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:38.350 11:31:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.609 11:31:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.868 [ 00:21:38.868 { 00:21:38.868 "name": "BaseBdev2", 00:21:38.868 "aliases": [ 00:21:38.868 "b493e320-b0b5-421e-abfa-77f41e630e9a" 00:21:38.868 ], 00:21:38.868 "product_name": "Malloc disk", 00:21:38.868 "block_size": 512, 00:21:38.868 "num_blocks": 65536, 00:21:38.868 "uuid": "b493e320-b0b5-421e-abfa-77f41e630e9a", 00:21:38.868 "assigned_rate_limits": { 00:21:38.868 "rw_ios_per_sec": 0, 00:21:38.868 "rw_mbytes_per_sec": 0, 00:21:38.868 "r_mbytes_per_sec": 0, 00:21:38.868 "w_mbytes_per_sec": 0 00:21:38.868 }, 00:21:38.868 "claimed": true, 00:21:38.868 "claim_type": "exclusive_write", 00:21:38.868 "zoned": false, 00:21:38.868 "supported_io_types": { 00:21:38.868 "read": true, 00:21:38.868 "write": true, 00:21:38.868 "unmap": true, 00:21:38.868 "write_zeroes": true, 00:21:38.868 "flush": true, 00:21:38.868 "reset": true, 00:21:38.868 "compare": false, 00:21:38.868 "compare_and_write": false, 00:21:38.868 "abort": true, 00:21:38.868 "nvme_admin": false, 00:21:38.868 "nvme_io": false 00:21:38.868 }, 00:21:38.868 "memory_domains": [ 
00:21:38.868 { 00:21:38.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.868 "dma_device_type": 2 00:21:38.868 } 00:21:38.868 ], 00:21:38.868 "driver_specific": {} 00:21:38.868 } 00:21:38.868 ] 00:21:38.868 11:31:56 -- common/autotest_common.sh@905 -- # return 0 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.868 11:31:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.868 11:31:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.868 "name": "Existed_Raid", 00:21:38.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.868 "strip_size_kb": 64, 00:21:38.868 "state": "configuring", 00:21:38.868 "raid_level": "raid5f", 00:21:38.868 "superblock": false, 00:21:38.868 "num_base_bdevs": 4, 00:21:38.868 "num_base_bdevs_discovered": 2, 00:21:38.868 "num_base_bdevs_operational": 4, 00:21:38.868 "base_bdevs_list": [ 00:21:38.868 { 00:21:38.868 "name": "BaseBdev1", 00:21:38.868 "uuid": "4d525a95-64a6-4cac-bc43-d645aba34076", 00:21:38.868 "is_configured": true, 00:21:38.868 "data_offset": 0, 00:21:38.868 "data_size": 65536 00:21:38.868 }, 00:21:38.868 { 00:21:38.868 "name": "BaseBdev2", 00:21:38.868 "uuid": "b493e320-b0b5-421e-abfa-77f41e630e9a", 00:21:38.868 "is_configured": true, 00:21:38.868 "data_offset": 0, 00:21:38.868 "data_size": 65536 00:21:38.868 }, 00:21:38.868 { 00:21:38.868 "name": "BaseBdev3", 00:21:38.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.868 "is_configured": false, 00:21:38.868 "data_offset": 0, 00:21:38.868 "data_size": 0 00:21:38.868 }, 00:21:38.868 { 00:21:38.868 "name": "BaseBdev4", 00:21:38.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.868 "is_configured": false, 00:21:38.868 "data_offset": 0, 00:21:38.868 "data_size": 0 00:21:38.868 } 00:21:38.868 ] 00:21:38.868 }' 00:21:38.868 11:31:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.868 11:31:57 -- common/autotest_common.sh@10 -- # set +x 00:21:39.436 11:31:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:39.436 [2024-11-26 11:31:57.583607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.436 BaseBdev3 00:21:39.436 11:31:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:39.436 11:31:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:39.436 11:31:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:39.436 
11:31:57 -- common/autotest_common.sh@899 -- # local i 00:21:39.436 11:31:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:39.436 11:31:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:39.436 11:31:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.695 11:31:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.954 [ 00:21:39.954 { 00:21:39.954 "name": "BaseBdev3", 00:21:39.954 "aliases": [ 00:21:39.954 "e4bf6c1a-a08c-4b35-9d19-d4b823a18b86" 00:21:39.954 ], 00:21:39.954 "product_name": "Malloc disk", 00:21:39.954 "block_size": 512, 00:21:39.954 "num_blocks": 65536, 00:21:39.954 "uuid": "e4bf6c1a-a08c-4b35-9d19-d4b823a18b86", 00:21:39.954 "assigned_rate_limits": { 00:21:39.954 "rw_ios_per_sec": 0, 00:21:39.954 "rw_mbytes_per_sec": 0, 00:21:39.954 "r_mbytes_per_sec": 0, 00:21:39.954 "w_mbytes_per_sec": 0 00:21:39.954 }, 00:21:39.954 "claimed": true, 00:21:39.954 "claim_type": "exclusive_write", 00:21:39.954 "zoned": false, 00:21:39.954 "supported_io_types": { 00:21:39.954 "read": true, 00:21:39.954 "write": true, 00:21:39.954 "unmap": true, 00:21:39.954 "write_zeroes": true, 00:21:39.954 "flush": true, 00:21:39.954 "reset": true, 00:21:39.954 "compare": false, 00:21:39.954 "compare_and_write": false, 00:21:39.954 "abort": true, 00:21:39.954 "nvme_admin": false, 00:21:39.954 "nvme_io": false 00:21:39.954 }, 00:21:39.954 "memory_domains": [ 00:21:39.954 { 00:21:39.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.954 "dma_device_type": 2 00:21:39.954 } 00:21:39.954 ], 00:21:39.954 "driver_specific": {} 00:21:39.954 } 00:21:39.954 ] 00:21:39.954 11:31:57 -- common/autotest_common.sh@905 -- # return 0 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.954 11:31:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.213 11:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.213 "name": "Existed_Raid", 00:21:40.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.213 "strip_size_kb": 64, 00:21:40.213 "state": "configuring", 00:21:40.213 "raid_level": "raid5f", 00:21:40.213 "superblock": false, 00:21:40.213 "num_base_bdevs": 4, 00:21:40.213 "num_base_bdevs_discovered": 3, 00:21:40.213 "num_base_bdevs_operational": 4, 00:21:40.213 "base_bdevs_list": [ 00:21:40.213 { 00:21:40.213 "name": 
"BaseBdev1", 00:21:40.213 "uuid": "4d525a95-64a6-4cac-bc43-d645aba34076", 00:21:40.213 "is_configured": true, 00:21:40.213 "data_offset": 0, 00:21:40.213 "data_size": 65536 00:21:40.213 }, 00:21:40.213 { 00:21:40.213 "name": "BaseBdev2", 00:21:40.213 "uuid": "b493e320-b0b5-421e-abfa-77f41e630e9a", 00:21:40.213 "is_configured": true, 00:21:40.213 "data_offset": 0, 00:21:40.213 "data_size": 65536 00:21:40.213 }, 00:21:40.213 { 00:21:40.213 "name": "BaseBdev3", 00:21:40.213 "uuid": "e4bf6c1a-a08c-4b35-9d19-d4b823a18b86", 00:21:40.213 "is_configured": true, 00:21:40.213 "data_offset": 0, 00:21:40.213 "data_size": 65536 00:21:40.213 }, 00:21:40.213 { 00:21:40.213 "name": "BaseBdev4", 00:21:40.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.213 "is_configured": false, 00:21:40.213 "data_offset": 0, 00:21:40.213 "data_size": 0 00:21:40.213 } 00:21:40.213 ] 00:21:40.213 }' 00:21:40.213 11:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.213 11:31:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.472 11:31:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:40.472 [2024-11-26 11:31:58.708190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:40.472 [2024-11-26 11:31:58.708476] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:21:40.472 [2024-11-26 11:31:58.708540] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:40.472 [2024-11-26 11:31:58.708797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:21:40.472 [2024-11-26 11:31:58.709791] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:21:40.472 [2024-11-26 11:31:58.709971] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:21:40.472 [2024-11-26 11:31:58.710367] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.472 BaseBdev4 00:21:40.731 11:31:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:40.731 11:31:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:40.731 11:31:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:40.731 11:31:58 -- common/autotest_common.sh@899 -- # local i 00:21:40.731 11:31:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:40.731 11:31:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:40.731 11:31:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.991 11:31:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:40.991 [ 00:21:40.991 { 00:21:40.991 "name": "BaseBdev4", 00:21:40.991 "aliases": [ 00:21:40.991 "3d643cfa-162f-433b-830b-559843d8b053" 00:21:40.991 ], 00:21:40.991 "product_name": "Malloc disk", 00:21:40.991 "block_size": 512, 00:21:40.991 "num_blocks": 65536, 00:21:40.991 "uuid": "3d643cfa-162f-433b-830b-559843d8b053", 00:21:40.991 "assigned_rate_limits": { 00:21:40.991 "rw_ios_per_sec": 0, 00:21:40.991 "rw_mbytes_per_sec": 0, 00:21:40.991 "r_mbytes_per_sec": 0, 00:21:40.991 "w_mbytes_per_sec": 0 00:21:40.991 }, 00:21:40.991 "claimed": true, 00:21:40.991 "claim_type": "exclusive_write", 00:21:40.991 "zoned": false, 00:21:40.991 
"supported_io_types": { 00:21:40.991 "read": true, 00:21:40.991 "write": true, 00:21:40.991 "unmap": true, 00:21:40.991 "write_zeroes": true, 00:21:40.991 "flush": true, 00:21:40.991 "reset": true, 00:21:40.991 "compare": false, 00:21:40.991 "compare_and_write": false, 00:21:40.991 "abort": true, 00:21:40.991 "nvme_admin": false, 00:21:40.991 "nvme_io": false 00:21:40.991 }, 00:21:40.991 "memory_domains": [ 00:21:40.991 { 00:21:40.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.991 "dma_device_type": 2 00:21:40.991 } 00:21:40.991 ], 00:21:40.991 "driver_specific": {} 00:21:40.991 } 00:21:40.991 ] 00:21:40.991 11:31:59 -- common/autotest_common.sh@905 -- # return 0 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.991 11:31:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.250 11:31:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.250 "name": "Existed_Raid", 00:21:41.250 "uuid": "ed803dfc-85a4-49c5-80f4-1eed1bc8db0c", 00:21:41.250 "strip_size_kb": 64, 00:21:41.250 "state": "online", 00:21:41.250 "raid_level": "raid5f", 00:21:41.250 "superblock": false, 00:21:41.250 "num_base_bdevs": 4, 00:21:41.250 "num_base_bdevs_discovered": 4, 00:21:41.250 "num_base_bdevs_operational": 4, 00:21:41.250 "base_bdevs_list": [ 00:21:41.250 { 00:21:41.250 "name": "BaseBdev1", 00:21:41.250 "uuid": "4d525a95-64a6-4cac-bc43-d645aba34076", 00:21:41.250 "is_configured": true, 00:21:41.250 "data_offset": 0, 00:21:41.250 "data_size": 65536 00:21:41.250 }, 00:21:41.250 { 00:21:41.250 "name": "BaseBdev2", 00:21:41.250 "uuid": "b493e320-b0b5-421e-abfa-77f41e630e9a", 00:21:41.250 "is_configured": true, 00:21:41.250 "data_offset": 0, 00:21:41.250 "data_size": 65536 00:21:41.250 }, 00:21:41.250 { 00:21:41.250 "name": "BaseBdev3", 00:21:41.250 "uuid": "e4bf6c1a-a08c-4b35-9d19-d4b823a18b86", 00:21:41.250 "is_configured": true, 00:21:41.250 "data_offset": 0, 00:21:41.250 "data_size": 65536 00:21:41.250 }, 00:21:41.250 { 00:21:41.250 "name": "BaseBdev4", 00:21:41.250 "uuid": "3d643cfa-162f-433b-830b-559843d8b053", 00:21:41.250 "is_configured": true, 00:21:41.250 "data_offset": 0, 00:21:41.250 "data_size": 65536 00:21:41.250 } 00:21:41.250 ] 00:21:41.250 }' 00:21:41.250 11:31:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.250 11:31:59 -- common/autotest_common.sh@10 -- # set +x 00:21:41.508 11:31:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:21:41.767 [2024-11-26 11:31:59.900647] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.767 11:31:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.027 11:32:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.027 "name": "Existed_Raid", 00:21:42.027 "uuid": "ed803dfc-85a4-49c5-80f4-1eed1bc8db0c", 00:21:42.027 "strip_size_kb": 64, 00:21:42.027 "state": "online", 00:21:42.027 "raid_level": "raid5f", 00:21:42.027 "superblock": false, 00:21:42.027 "num_base_bdevs": 4, 00:21:42.027 "num_base_bdevs_discovered": 3, 00:21:42.027 "num_base_bdevs_operational": 3, 00:21:42.027 "base_bdevs_list": [ 00:21:42.027 { 00:21:42.027 "name": null, 00:21:42.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.027 "is_configured": false, 00:21:42.027 "data_offset": 0, 00:21:42.027 "data_size": 65536 00:21:42.027 }, 00:21:42.027 { 00:21:42.027 "name": "BaseBdev2", 00:21:42.027 "uuid": "b493e320-b0b5-421e-abfa-77f41e630e9a", 00:21:42.027 "is_configured": true, 00:21:42.027 "data_offset": 0, 00:21:42.027 "data_size": 65536 00:21:42.027 }, 00:21:42.027 { 00:21:42.027 "name": "BaseBdev3", 00:21:42.027 "uuid": "e4bf6c1a-a08c-4b35-9d19-d4b823a18b86", 00:21:42.027 "is_configured": true, 00:21:42.027 "data_offset": 0, 00:21:42.027 "data_size": 65536 00:21:42.027 }, 00:21:42.027 { 00:21:42.027 "name": "BaseBdev4", 00:21:42.027 "uuid": "3d643cfa-162f-433b-830b-559843d8b053", 00:21:42.027 "is_configured": true, 00:21:42.027 "data_offset": 0, 00:21:42.027 "data_size": 65536 00:21:42.027 } 00:21:42.027 ] 00:21:42.027 }' 00:21:42.027 11:32:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.027 11:32:00 -- common/autotest_common.sh@10 -- # set +x 00:21:42.287 11:32:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:42.287 11:32:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:42.287 11:32:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.287 11:32:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:42.546 11:32:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:42.546 11:32:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:21:42.546 11:32:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:42.805 [2024-11-26 11:32:00.819033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:42.805 [2024-11-26 11:32:00.819062] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.805 [2024-11-26 11:32:00.819134] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.805 11:32:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:42.805 11:32:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:42.805 11:32:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.805 11:32:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:43.065 11:32:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:43.065 11:32:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:43.065 11:32:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:43.324 [2024-11-26 11:32:01.313530] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:43.324 11:32:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:43.324 11:32:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:43.324 11:32:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:43.324 11:32:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.582 11:32:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:43.582 11:32:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:43.582 11:32:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:43.582 [2024-11-26 11:32:01.735804] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:43.582 [2024-11-26 11:32:01.736061] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:21:43.582 11:32:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:43.583 11:32:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:43.583 11:32:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.583 11:32:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:43.842 11:32:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:43.842 11:32:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:43.842 11:32:02 -- bdev/bdev_raid.sh@287 -- # killprocess 94389 00:21:43.842 11:32:02 -- common/autotest_common.sh@936 -- # '[' -z 94389 ']' 00:21:43.842 11:32:02 -- common/autotest_common.sh@940 -- # kill -0 94389 00:21:43.842 11:32:02 -- common/autotest_common.sh@941 -- # uname 00:21:43.842 11:32:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.842 11:32:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94389 00:21:43.842 11:32:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:43.842 11:32:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:43.842 11:32:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94389' 00:21:43.842 killing process with pid 94389 00:21:43.842 11:32:02 -- common/autotest_common.sh@955 
-- # kill 94389 00:21:43.842 [2024-11-26 11:32:02.037652] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:43.842 [2024-11-26 11:32:02.037722] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:43.842 11:32:02 -- common/autotest_common.sh@960 -- # wait 94389 00:21:44.102 ************************************ 00:21:44.102 END TEST raid5f_state_function_test 00:21:44.102 ************************************ 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:44.102 00:21:44.102 real 0m10.058s 00:21:44.102 user 0m17.661s 00:21:44.102 sys 0m1.634s 00:21:44.102 11:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:44.102 11:32:02 -- common/autotest_common.sh@10 -- # set +x 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:21:44.102 11:32:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:44.102 11:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.102 11:32:02 -- common/autotest_common.sh@10 -- # set +x 00:21:44.102 ************************************ 00:21:44.102 START TEST raid5f_state_function_test_sb 00:21:44.102 ************************************ 00:21:44.102 11:32:02 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:44.102 Process raid pid: 94767 00:21:44.102 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=94767 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 94767' 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 94767 /var/tmp/spdk-raid.sock 00:21:44.102 11:32:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:44.102 11:32:02 -- common/autotest_common.sh@829 -- # '[' -z 94767 ']' 00:21:44.102 11:32:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:44.102 11:32:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.102 11:32:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:44.102 11:32:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.102 11:32:02 -- common/autotest_common.sh@10 -- # set +x 00:21:44.102 [2024-11-26 11:32:02.325517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:44.102 [2024-11-26 11:32:02.325869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.361 [2024-11-26 11:32:02.482949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.361 [2024-11-26 11:32:02.513483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.361 [2024-11-26 11:32:02.542796] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.299 11:32:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.299 11:32:03 -- common/autotest_common.sh@862 -- # return 0 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:45.299 [2024-11-26 11:32:03.413572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.299 [2024-11-26 11:32:03.413624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.299 [2024-11-26 11:32:03.413648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.299 [2024-11-26 11:32:03.413660] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.299 [2024-11-26 11:32:03.413669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.299 [2024-11-26 11:32:03.413680] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.299 [2024-11-26 11:32:03.413691] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:45.299 [2024-11-26 11:32:03.413701] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:45.299 11:32:03 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.299 11:32:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.559 11:32:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.559 "name": "Existed_Raid", 00:21:45.559 "uuid": "7359712d-017c-417f-b1ed-21e0500765b0", 00:21:45.559 "strip_size_kb": 64, 00:21:45.559 "state": "configuring", 00:21:45.559 "raid_level": "raid5f", 00:21:45.559 "superblock": true, 00:21:45.559 "num_base_bdevs": 4, 00:21:45.559 "num_base_bdevs_discovered": 0, 00:21:45.559 "num_base_bdevs_operational": 4, 00:21:45.559 "base_bdevs_list": [ 00:21:45.559 { 00:21:45.559 "name": "BaseBdev1", 00:21:45.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.559 "is_configured": false, 00:21:45.559 "data_offset": 0, 00:21:45.559 "data_size": 0 00:21:45.559 }, 00:21:45.559 { 00:21:45.559 "name": "BaseBdev2", 00:21:45.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.559 "is_configured": false, 00:21:45.559 "data_offset": 0, 00:21:45.559 "data_size": 0 00:21:45.559 }, 00:21:45.559 { 00:21:45.559 "name": "BaseBdev3", 00:21:45.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.559 "is_configured": false, 00:21:45.559 "data_offset": 0, 00:21:45.559 "data_size": 0 00:21:45.559 }, 00:21:45.559 { 00:21:45.559 "name": "BaseBdev4", 00:21:45.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.559 "is_configured": false, 00:21:45.559 "data_offset": 0, 00:21:45.559 "data_size": 0 00:21:45.559 } 00:21:45.559 ] 00:21:45.559 }' 00:21:45.559 11:32:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.559 11:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:45.818 11:32:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:46.077 [2024-11-26 11:32:04.149627] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.077 [2024-11-26 11:32:04.149668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:21:46.077 11:32:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:46.336 [2024-11-26 11:32:04.397729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:46.336 [2024-11-26 11:32:04.397775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:46.336 [2024-11-26 11:32:04.397790] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:46.336 [2024-11-26 11:32:04.397800] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.336 [2024-11-26 11:32:04.397810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:46.336 [2024-11-26 11:32:04.397820] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:46.336 [2024-11-26 11:32:04.397829] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:46.336 [2024-11-26 11:32:04.397837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:46.336 11:32:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:46.595 [2024-11-26 11:32:04.587532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.595 BaseBdev1 00:21:46.595 11:32:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:46.595 11:32:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:46.595 11:32:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:46.595 11:32:04 -- common/autotest_common.sh@899 -- # local i 00:21:46.595 11:32:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:46.595 11:32:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:46.595 11:32:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:46.595 11:32:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:46.854 [ 00:21:46.854 { 00:21:46.854 "name": "BaseBdev1", 00:21:46.854 "aliases": [ 00:21:46.854 "dc706425-822a-4911-bcba-0dbab54f4055" 00:21:46.854 ], 00:21:46.854 "product_name": "Malloc disk", 00:21:46.854 "block_size": 512, 00:21:46.854 "num_blocks": 65536, 00:21:46.854 "uuid": "dc706425-822a-4911-bcba-0dbab54f4055", 00:21:46.854 "assigned_rate_limits": { 00:21:46.854 "rw_ios_per_sec": 0, 00:21:46.854 "rw_mbytes_per_sec": 0, 00:21:46.854 "r_mbytes_per_sec": 0, 00:21:46.854 "w_mbytes_per_sec": 0 00:21:46.854 }, 00:21:46.854 "claimed": true, 00:21:46.854 "claim_type": "exclusive_write", 00:21:46.854 "zoned": false, 00:21:46.854 "supported_io_types": { 00:21:46.854 "read": true, 00:21:46.854 "write": true, 00:21:46.854 "unmap": true, 00:21:46.854 "write_zeroes": true, 00:21:46.854 "flush": true, 00:21:46.854 "reset": true, 00:21:46.854 "compare": false, 00:21:46.854 "compare_and_write": false, 00:21:46.854 "abort": true, 00:21:46.854 "nvme_admin": false, 00:21:46.854 "nvme_io": false 00:21:46.854 }, 00:21:46.854 "memory_domains": [ 00:21:46.854 { 00:21:46.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.854 "dma_device_type": 2 00:21:46.854 } 00:21:46.854 ], 00:21:46.854 "driver_specific": {} 00:21:46.854 } 00:21:46.854 ] 00:21:46.854 11:32:05 -- common/autotest_common.sh@905 -- # return 0 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.854 11:32:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.114 11:32:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.114 "name": "Existed_Raid", 00:21:47.114 "uuid": "031f1d44-ca5b-4473-b66c-ea320b48b5b5", 00:21:47.114 "strip_size_kb": 64, 00:21:47.114 "state": "configuring", 00:21:47.114 "raid_level": "raid5f", 00:21:47.114 "superblock": true, 00:21:47.114 "num_base_bdevs": 4, 00:21:47.114 "num_base_bdevs_discovered": 1, 00:21:47.114 "num_base_bdevs_operational": 4, 00:21:47.114 "base_bdevs_list": [ 00:21:47.114 { 00:21:47.114 "name": "BaseBdev1", 00:21:47.114 "uuid": "dc706425-822a-4911-bcba-0dbab54f4055", 00:21:47.114 "is_configured": true, 00:21:47.114 "data_offset": 2048, 00:21:47.114 "data_size": 63488 00:21:47.114 }, 00:21:47.114 { 00:21:47.114 "name": "BaseBdev2", 00:21:47.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.114 "is_configured": false, 00:21:47.114 "data_offset": 0, 00:21:47.114 "data_size": 0 00:21:47.114 }, 00:21:47.114 { 00:21:47.114 "name": "BaseBdev3", 00:21:47.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.114 "is_configured": false, 00:21:47.114 "data_offset": 0, 00:21:47.114 "data_size": 0 00:21:47.114 }, 00:21:47.114 { 00:21:47.114 "name": "BaseBdev4", 00:21:47.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.114 "is_configured": false, 00:21:47.114 "data_offset": 0, 00:21:47.114 "data_size": 0 00:21:47.114 } 00:21:47.114 ] 00:21:47.114 }' 00:21:47.114 11:32:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.114 11:32:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.376 11:32:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:47.634 [2024-11-26 11:32:05.775810] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:47.634 [2024-11-26 11:32:05.775865] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:47.634 11:32:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:47.635 11:32:05 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:47.894 11:32:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:48.154 BaseBdev1 00:21:48.154 11:32:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:48.154 11:32:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:48.154 11:32:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:48.154 11:32:06 -- common/autotest_common.sh@899 -- # local i 00:21:48.154 11:32:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:48.154 11:32:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:48.154 11:32:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.411 11:32:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:48.411 [ 00:21:48.411 { 00:21:48.411 "name": "BaseBdev1", 00:21:48.411 "aliases": [ 00:21:48.411 "0845672b-7481-4ce4-8ee7-9b64411c9063" 00:21:48.411 ], 
00:21:48.411 "product_name": "Malloc disk", 00:21:48.411 "block_size": 512, 00:21:48.411 "num_blocks": 65536, 00:21:48.411 "uuid": "0845672b-7481-4ce4-8ee7-9b64411c9063", 00:21:48.411 "assigned_rate_limits": { 00:21:48.411 "rw_ios_per_sec": 0, 00:21:48.411 "rw_mbytes_per_sec": 0, 00:21:48.411 "r_mbytes_per_sec": 0, 00:21:48.411 "w_mbytes_per_sec": 0 00:21:48.411 }, 00:21:48.411 "claimed": false, 00:21:48.411 "zoned": false, 00:21:48.411 "supported_io_types": { 00:21:48.411 "read": true, 00:21:48.411 "write": true, 00:21:48.411 "unmap": true, 00:21:48.411 "write_zeroes": true, 00:21:48.411 "flush": true, 00:21:48.411 "reset": true, 00:21:48.411 "compare": false, 00:21:48.411 "compare_and_write": false, 00:21:48.411 "abort": true, 00:21:48.411 "nvme_admin": false, 00:21:48.411 "nvme_io": false 00:21:48.411 }, 00:21:48.411 "memory_domains": [ 00:21:48.411 { 00:21:48.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.411 "dma_device_type": 2 00:21:48.411 } 00:21:48.411 ], 00:21:48.411 "driver_specific": {} 00:21:48.411 } 00:21:48.411 ] 00:21:48.411 11:32:06 -- common/autotest_common.sh@905 -- # return 0 00:21:48.411 11:32:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:48.670 [2024-11-26 11:32:06.795575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.670 [2024-11-26 11:32:06.797521] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.670 [2024-11-26 11:32:06.797564] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.670 [2024-11-26 11:32:06.797580] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:48.670 [2024-11-26 11:32:06.797590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:48.670 [2024-11-26 11:32:06.797599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:48.670 [2024-11-26 11:32:06.797608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.670 11:32:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.929 11:32:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.929 "name": "Existed_Raid", 
00:21:48.929 "uuid": "857684d5-ba5a-431d-9916-cba15461f4e9", 00:21:48.929 "strip_size_kb": 64, 00:21:48.929 "state": "configuring", 00:21:48.929 "raid_level": "raid5f", 00:21:48.929 "superblock": true, 00:21:48.929 "num_base_bdevs": 4, 00:21:48.929 "num_base_bdevs_discovered": 1, 00:21:48.929 "num_base_bdevs_operational": 4, 00:21:48.929 "base_bdevs_list": [ 00:21:48.929 { 00:21:48.929 "name": "BaseBdev1", 00:21:48.929 "uuid": "0845672b-7481-4ce4-8ee7-9b64411c9063", 00:21:48.929 "is_configured": true, 00:21:48.929 "data_offset": 2048, 00:21:48.929 "data_size": 63488 00:21:48.929 }, 00:21:48.929 { 00:21:48.929 "name": "BaseBdev2", 00:21:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.929 "is_configured": false, 00:21:48.929 "data_offset": 0, 00:21:48.929 "data_size": 0 00:21:48.929 }, 00:21:48.929 { 00:21:48.929 "name": "BaseBdev3", 00:21:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.929 "is_configured": false, 00:21:48.929 "data_offset": 0, 00:21:48.929 "data_size": 0 00:21:48.929 }, 00:21:48.929 { 00:21:48.929 "name": "BaseBdev4", 00:21:48.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.929 "is_configured": false, 00:21:48.929 "data_offset": 0, 00:21:48.929 "data_size": 0 00:21:48.929 } 00:21:48.929 ] 00:21:48.929 }' 00:21:48.929 11:32:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.929 11:32:07 -- common/autotest_common.sh@10 -- # set +x 00:21:49.206 11:32:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:49.485 [2024-11-26 11:32:07.558595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.485 BaseBdev2 00:21:49.485 11:32:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:49.485 11:32:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:49.485 11:32:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:49.485 11:32:07 -- common/autotest_common.sh@899 -- # local i 00:21:49.485 11:32:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:49.485 11:32:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:49.485 11:32:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.743 11:32:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:50.001 [ 00:21:50.001 { 00:21:50.002 "name": "BaseBdev2", 00:21:50.002 "aliases": [ 00:21:50.002 "7e0a8eb5-121d-43c1-b5a1-4ffb55cf74b2" 00:21:50.002 ], 00:21:50.002 "product_name": "Malloc disk", 00:21:50.002 "block_size": 512, 00:21:50.002 "num_blocks": 65536, 00:21:50.002 "uuid": "7e0a8eb5-121d-43c1-b5a1-4ffb55cf74b2", 00:21:50.002 "assigned_rate_limits": { 00:21:50.002 "rw_ios_per_sec": 0, 00:21:50.002 "rw_mbytes_per_sec": 0, 00:21:50.002 "r_mbytes_per_sec": 0, 00:21:50.002 "w_mbytes_per_sec": 0 00:21:50.002 }, 00:21:50.002 "claimed": true, 00:21:50.002 "claim_type": "exclusive_write", 00:21:50.002 "zoned": false, 00:21:50.002 "supported_io_types": { 00:21:50.002 "read": true, 00:21:50.002 "write": true, 00:21:50.002 "unmap": true, 00:21:50.002 "write_zeroes": true, 00:21:50.002 "flush": true, 00:21:50.002 "reset": true, 00:21:50.002 "compare": false, 00:21:50.002 "compare_and_write": false, 00:21:50.002 "abort": true, 00:21:50.002 "nvme_admin": false, 00:21:50.002 "nvme_io": false 00:21:50.002 }, 00:21:50.002 
"memory_domains": [ 00:21:50.002 { 00:21:50.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.002 "dma_device_type": 2 00:21:50.002 } 00:21:50.002 ], 00:21:50.002 "driver_specific": {} 00:21:50.002 } 00:21:50.002 ] 00:21:50.002 11:32:08 -- common/autotest_common.sh@905 -- # return 0 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.002 11:32:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.260 11:32:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.260 "name": "Existed_Raid", 00:21:50.260 "uuid": "857684d5-ba5a-431d-9916-cba15461f4e9", 00:21:50.260 "strip_size_kb": 64, 00:21:50.260 "state": "configuring", 00:21:50.260 "raid_level": "raid5f", 00:21:50.260 "superblock": true, 00:21:50.260 "num_base_bdevs": 4, 00:21:50.260 "num_base_bdevs_discovered": 2, 00:21:50.260 "num_base_bdevs_operational": 4, 00:21:50.260 "base_bdevs_list": [ 00:21:50.260 { 00:21:50.260 "name": "BaseBdev1", 00:21:50.260 "uuid": "0845672b-7481-4ce4-8ee7-9b64411c9063", 00:21:50.260 "is_configured": true, 00:21:50.260 "data_offset": 2048, 00:21:50.260 "data_size": 63488 00:21:50.260 }, 00:21:50.260 { 00:21:50.260 "name": "BaseBdev2", 00:21:50.260 "uuid": "7e0a8eb5-121d-43c1-b5a1-4ffb55cf74b2", 00:21:50.260 "is_configured": true, 00:21:50.260 "data_offset": 2048, 00:21:50.260 "data_size": 63488 00:21:50.260 }, 00:21:50.260 { 00:21:50.260 "name": "BaseBdev3", 00:21:50.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.260 "is_configured": false, 00:21:50.260 "data_offset": 0, 00:21:50.260 "data_size": 0 00:21:50.260 }, 00:21:50.260 { 00:21:50.260 "name": "BaseBdev4", 00:21:50.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.260 "is_configured": false, 00:21:50.260 "data_offset": 0, 00:21:50.260 "data_size": 0 00:21:50.260 } 00:21:50.260 ] 00:21:50.260 }' 00:21:50.260 11:32:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.260 11:32:08 -- common/autotest_common.sh@10 -- # set +x 00:21:50.519 11:32:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:50.778 [2024-11-26 11:32:08.803128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:50.778 BaseBdev3 00:21:50.778 11:32:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:50.778 11:32:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:50.778 11:32:08 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:21:50.778 11:32:08 -- common/autotest_common.sh@899 -- # local i 00:21:50.778 11:32:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:50.778 11:32:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:50.778 11:32:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.778 11:32:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:51.037 [ 00:21:51.037 { 00:21:51.037 "name": "BaseBdev3", 00:21:51.037 "aliases": [ 00:21:51.037 "4e22d5ca-952a-4899-a49e-1cb65a3c89a9" 00:21:51.037 ], 00:21:51.037 "product_name": "Malloc disk", 00:21:51.037 "block_size": 512, 00:21:51.037 "num_blocks": 65536, 00:21:51.037 "uuid": "4e22d5ca-952a-4899-a49e-1cb65a3c89a9", 00:21:51.037 "assigned_rate_limits": { 00:21:51.037 "rw_ios_per_sec": 0, 00:21:51.037 "rw_mbytes_per_sec": 0, 00:21:51.037 "r_mbytes_per_sec": 0, 00:21:51.037 "w_mbytes_per_sec": 0 00:21:51.037 }, 00:21:51.037 "claimed": true, 00:21:51.037 "claim_type": "exclusive_write", 00:21:51.037 "zoned": false, 00:21:51.037 "supported_io_types": { 00:21:51.037 "read": true, 00:21:51.037 "write": true, 00:21:51.037 "unmap": true, 00:21:51.037 "write_zeroes": true, 00:21:51.037 "flush": true, 00:21:51.037 "reset": true, 00:21:51.037 "compare": false, 00:21:51.037 "compare_and_write": false, 00:21:51.037 "abort": true, 00:21:51.037 "nvme_admin": false, 00:21:51.037 "nvme_io": false 00:21:51.037 }, 00:21:51.037 "memory_domains": [ 00:21:51.037 { 00:21:51.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.037 "dma_device_type": 2 00:21:51.037 } 00:21:51.037 ], 00:21:51.037 "driver_specific": {} 00:21:51.037 } 00:21:51.037 ] 00:21:51.037 11:32:09 -- common/autotest_common.sh@905 -- # return 0 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.037 11:32:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.296 11:32:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.296 "name": "Existed_Raid", 00:21:51.296 "uuid": "857684d5-ba5a-431d-9916-cba15461f4e9", 00:21:51.296 "strip_size_kb": 64, 00:21:51.296 "state": "configuring", 00:21:51.296 "raid_level": "raid5f", 00:21:51.296 "superblock": true, 00:21:51.296 "num_base_bdevs": 4, 00:21:51.296 "num_base_bdevs_discovered": 3, 00:21:51.296 "num_base_bdevs_operational": 4, 00:21:51.296 "base_bdevs_list": [ 00:21:51.296 { 
00:21:51.296 "name": "BaseBdev1", 00:21:51.296 "uuid": "0845672b-7481-4ce4-8ee7-9b64411c9063", 00:21:51.296 "is_configured": true, 00:21:51.296 "data_offset": 2048, 00:21:51.296 "data_size": 63488 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "name": "BaseBdev2", 00:21:51.296 "uuid": "7e0a8eb5-121d-43c1-b5a1-4ffb55cf74b2", 00:21:51.296 "is_configured": true, 00:21:51.296 "data_offset": 2048, 00:21:51.296 "data_size": 63488 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "name": "BaseBdev3", 00:21:51.296 "uuid": "4e22d5ca-952a-4899-a49e-1cb65a3c89a9", 00:21:51.296 "is_configured": true, 00:21:51.296 "data_offset": 2048, 00:21:51.296 "data_size": 63488 00:21:51.296 }, 00:21:51.296 { 00:21:51.296 "name": "BaseBdev4", 00:21:51.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.296 "is_configured": false, 00:21:51.296 "data_offset": 0, 00:21:51.296 "data_size": 0 00:21:51.296 } 00:21:51.296 ] 00:21:51.296 }' 00:21:51.296 11:32:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.296 11:32:09 -- common/autotest_common.sh@10 -- # set +x 00:21:51.554 11:32:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:51.813 [2024-11-26 11:32:09.847602] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:51.813 [2024-11-26 11:32:09.848055] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:21:51.813 [2024-11-26 11:32:09.848204] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:51.813 [2024-11-26 11:32:09.848392] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:51.813 BaseBdev4 00:21:51.813 [2024-11-26 11:32:09.849188] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:21:51.813 [2024-11-26 11:32:09.849329] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:21:51.813 [2024-11-26 11:32:09.849612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.813 11:32:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:51.813 11:32:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:51.813 11:32:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:51.813 11:32:09 -- common/autotest_common.sh@899 -- # local i 00:21:51.813 11:32:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:51.813 11:32:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:51.813 11:32:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.072 11:32:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:52.072 [ 00:21:52.072 { 00:21:52.072 "name": "BaseBdev4", 00:21:52.072 "aliases": [ 00:21:52.072 "c2591d47-9818-4232-a83c-191a9a00287e" 00:21:52.072 ], 00:21:52.072 "product_name": "Malloc disk", 00:21:52.072 "block_size": 512, 00:21:52.072 "num_blocks": 65536, 00:21:52.072 "uuid": "c2591d47-9818-4232-a83c-191a9a00287e", 00:21:52.072 "assigned_rate_limits": { 00:21:52.072 "rw_ios_per_sec": 0, 00:21:52.072 "rw_mbytes_per_sec": 0, 00:21:52.072 "r_mbytes_per_sec": 0, 00:21:52.072 "w_mbytes_per_sec": 0 00:21:52.072 }, 00:21:52.072 "claimed": true, 00:21:52.072 "claim_type": "exclusive_write", 00:21:52.072 "zoned": false, 
00:21:52.072 "supported_io_types": { 00:21:52.072 "read": true, 00:21:52.072 "write": true, 00:21:52.072 "unmap": true, 00:21:52.072 "write_zeroes": true, 00:21:52.072 "flush": true, 00:21:52.072 "reset": true, 00:21:52.072 "compare": false, 00:21:52.072 "compare_and_write": false, 00:21:52.072 "abort": true, 00:21:52.072 "nvme_admin": false, 00:21:52.072 "nvme_io": false 00:21:52.072 }, 00:21:52.072 "memory_domains": [ 00:21:52.072 { 00:21:52.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.072 "dma_device_type": 2 00:21:52.072 } 00:21:52.072 ], 00:21:52.072 "driver_specific": {} 00:21:52.072 } 00:21:52.072 ] 00:21:52.072 11:32:10 -- common/autotest_common.sh@905 -- # return 0 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.072 11:32:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.331 11:32:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:52.331 "name": "Existed_Raid", 00:21:52.331 "uuid": "857684d5-ba5a-431d-9916-cba15461f4e9", 00:21:52.331 "strip_size_kb": 64, 00:21:52.331 "state": "online", 00:21:52.331 "raid_level": "raid5f", 00:21:52.331 "superblock": true, 00:21:52.331 "num_base_bdevs": 4, 00:21:52.331 "num_base_bdevs_discovered": 4, 00:21:52.331 "num_base_bdevs_operational": 4, 00:21:52.331 "base_bdevs_list": [ 00:21:52.331 { 00:21:52.331 "name": "BaseBdev1", 00:21:52.331 "uuid": "0845672b-7481-4ce4-8ee7-9b64411c9063", 00:21:52.331 "is_configured": true, 00:21:52.331 "data_offset": 2048, 00:21:52.331 "data_size": 63488 00:21:52.331 }, 00:21:52.331 { 00:21:52.331 "name": "BaseBdev2", 00:21:52.331 "uuid": "7e0a8eb5-121d-43c1-b5a1-4ffb55cf74b2", 00:21:52.331 "is_configured": true, 00:21:52.331 "data_offset": 2048, 00:21:52.331 "data_size": 63488 00:21:52.331 }, 00:21:52.331 { 00:21:52.331 "name": "BaseBdev3", 00:21:52.331 "uuid": "4e22d5ca-952a-4899-a49e-1cb65a3c89a9", 00:21:52.331 "is_configured": true, 00:21:52.331 "data_offset": 2048, 00:21:52.331 "data_size": 63488 00:21:52.331 }, 00:21:52.331 { 00:21:52.331 "name": "BaseBdev4", 00:21:52.331 "uuid": "c2591d47-9818-4232-a83c-191a9a00287e", 00:21:52.331 "is_configured": true, 00:21:52.331 "data_offset": 2048, 00:21:52.331 "data_size": 63488 00:21:52.331 } 00:21:52.331 ] 00:21:52.331 }' 00:21:52.331 11:32:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:52.331 11:32:10 -- common/autotest_common.sh@10 -- # set +x 00:21:52.591 11:32:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:21:52.850 [2024-11-26 11:32:10.947947] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.850 11:32:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.108 11:32:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.108 "name": "Existed_Raid", 00:21:53.108 "uuid": "857684d5-ba5a-431d-9916-cba15461f4e9", 00:21:53.108 "strip_size_kb": 64, 00:21:53.108 "state": "online", 00:21:53.108 "raid_level": "raid5f", 00:21:53.108 "superblock": true, 00:21:53.108 "num_base_bdevs": 4, 00:21:53.108 "num_base_bdevs_discovered": 3, 00:21:53.108 "num_base_bdevs_operational": 3, 00:21:53.108 "base_bdevs_list": [ 00:21:53.108 { 00:21:53.108 "name": null, 00:21:53.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.108 "is_configured": false, 00:21:53.108 "data_offset": 2048, 00:21:53.108 "data_size": 63488 00:21:53.108 }, 00:21:53.108 { 00:21:53.108 "name": "BaseBdev2", 00:21:53.108 "uuid": "7e0a8eb5-121d-43c1-b5a1-4ffb55cf74b2", 00:21:53.108 "is_configured": true, 00:21:53.109 "data_offset": 2048, 00:21:53.109 "data_size": 63488 00:21:53.109 }, 00:21:53.109 { 00:21:53.109 "name": "BaseBdev3", 00:21:53.109 "uuid": "4e22d5ca-952a-4899-a49e-1cb65a3c89a9", 00:21:53.109 "is_configured": true, 00:21:53.109 "data_offset": 2048, 00:21:53.109 "data_size": 63488 00:21:53.109 }, 00:21:53.109 { 00:21:53.109 "name": "BaseBdev4", 00:21:53.109 "uuid": "c2591d47-9818-4232-a83c-191a9a00287e", 00:21:53.109 "is_configured": true, 00:21:53.109 "data_offset": 2048, 00:21:53.109 "data_size": 63488 00:21:53.109 } 00:21:53.109 ] 00:21:53.109 }' 00:21:53.109 11:32:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.109 11:32:11 -- common/autotest_common.sh@10 -- # set +x 00:21:53.367 11:32:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:53.367 11:32:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:53.367 11:32:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.367 11:32:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:53.626 11:32:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:53.626 11:32:11 -- bdev/bdev_raid.sh@275 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:21:53.626 11:32:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:53.885 [2024-11-26 11:32:12.042655] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:53.885 [2024-11-26 11:32:12.042828] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.885 [2024-11-26 11:32:12.043039] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.885 11:32:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:53.885 11:32:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:53.885 11:32:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.885 11:32:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:54.143 11:32:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:54.143 11:32:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:54.143 11:32:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:54.401 [2024-11-26 11:32:12.501881] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:54.401 11:32:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:54.401 11:32:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:54.401 11:32:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.401 11:32:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:54.660 [2024-11-26 11:32:12.856696] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:54.660 [2024-11-26 11:32:12.856896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.660 11:32:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:54.918 11:32:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:54.918 11:32:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:54.918 11:32:13 -- bdev/bdev_raid.sh@287 -- # killprocess 94767 00:21:54.918 11:32:13 -- common/autotest_common.sh@936 -- # '[' -z 94767 ']' 00:21:54.918 11:32:13 -- common/autotest_common.sh@940 -- # kill -0 94767 00:21:54.918 11:32:13 -- common/autotest_common.sh@941 -- # uname 00:21:54.918 11:32:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.918 11:32:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94767 00:21:54.918 killing process with pid 94767 00:21:54.918 11:32:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:54.918 11:32:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:54.918 11:32:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94767' 00:21:54.918 11:32:13 
-- common/autotest_common.sh@955 -- # kill 94767 00:21:54.918 [2024-11-26 11:32:13.150468] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:54.918 [2024-11-26 11:32:13.150533] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.918 11:32:13 -- common/autotest_common.sh@960 -- # wait 94767 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:55.176 00:21:55.176 real 0m11.060s 00:21:55.176 user 0m19.558s 00:21:55.176 sys 0m1.701s 00:21:55.176 11:32:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:55.176 ************************************ 00:21:55.176 END TEST raid5f_state_function_test_sb 00:21:55.176 ************************************ 00:21:55.176 11:32:13 -- common/autotest_common.sh@10 -- # set +x 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:21:55.176 11:32:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:55.176 11:32:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:55.176 11:32:13 -- common/autotest_common.sh@10 -- # set +x 00:21:55.176 ************************************ 00:21:55.176 START TEST raid5f_superblock_test 00:21:55.176 ************************************ 00:21:55.176 11:32:13 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:55.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=95153 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:55.176 11:32:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 95153 /var/tmp/spdk-raid.sock 00:21:55.176 11:32:13 -- common/autotest_common.sh@829 -- # '[' -z 95153 ']' 00:21:55.176 11:32:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:55.176 11:32:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.176 11:32:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
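[annotation] Hedged sketch, not captured output: the waitforlisten helper invoked above can be approximated as a poll loop that waits for the target pid to open its UNIX-domain RPC socket. The retry count and sleep interval here are illustrative assumptions, not the actual values from common/autotest_common.sh.

# minimal sketch of a waitforlisten-style helper (assumed internals)
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
    [[ -S $rpc_addr ]] && return 0           # socket exists: target is listening
    sleep 0.1
  done
  return 1
}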
00:21:55.176 11:32:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.176 11:32:13 -- common/autotest_common.sh@10 -- # set +x 00:21:55.435 [2024-11-26 11:32:13.437906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:55.435 [2024-11-26 11:32:13.438273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95153 ] 00:21:55.435 [2024-11-26 11:32:13.594616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.435 [2024-11-26 11:32:13.626193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.435 [2024-11-26 11:32:13.656841] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:56.371 11:32:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.371 11:32:14 -- common/autotest_common.sh@862 -- # return 0 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:56.371 malloc1 00:21:56.371 11:32:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:56.630 [2024-11-26 11:32:14.698621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:56.630 [2024-11-26 11:32:14.698844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.630 [2024-11-26 11:32:14.698959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:21:56.630 [2024-11-26 11:32:14.699195] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.630 [2024-11-26 11:32:14.701494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.630 [2024-11-26 11:32:14.701655] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:56.630 pt1 00:21:56.630 11:32:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.631 11:32:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:56.890 malloc2 00:21:56.890 11:32:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:57.149 [2024-11-26 11:32:15.128939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:57.149 [2024-11-26 11:32:15.129016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.149 [2024-11-26 11:32:15.129068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:21:57.149 [2024-11-26 11:32:15.129083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.149 [2024-11-26 11:32:15.131675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.149 [2024-11-26 11:32:15.131718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:57.149 pt2 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:57.149 11:32:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:57.149 malloc3 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:57.409 [2024-11-26 11:32:15.614937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:57.409 [2024-11-26 11:32:15.615012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.409 [2024-11-26 11:32:15.615040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:21:57.409 [2024-11-26 11:32:15.615053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.409 [2024-11-26 11:32:15.617178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.409 [2024-11-26 11:32:15.617217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:57.409 pt3 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:57.409 11:32:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:57.668 malloc4 00:21:57.668 11:32:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:57.928 [2024-11-26 11:32:15.980933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:57.928 [2024-11-26 11:32:15.981040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.928 [2024-11-26 11:32:15.981075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:21:57.928 [2024-11-26 11:32:15.981088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.928 [2024-11-26 11:32:15.983404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.928 [2024-11-26 11:32:15.983443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:57.928 pt4 00:21:57.928 11:32:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:57.928 11:32:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:57.928 11:32:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:58.187 [2024-11-26 11:32:16.217040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:58.187 [2024-11-26 11:32:16.218873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:58.187 [2024-11-26 11:32:16.218980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:58.187 [2024-11-26 11:32:16.219040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:58.187 [2024-11-26 11:32:16.219259] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:21:58.187 [2024-11-26 11:32:16.219278] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:58.187 [2024-11-26 11:32:16.219379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:21:58.187 [2024-11-26 11:32:16.220034] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:21:58.187 [2024-11-26 11:32:16.220053] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:21:58.187 [2024-11-26 11:32:16.220162] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
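[annotation] Hedged sketch of the array construction exercised above, condensed into standalone form. The $RPC shorthand is illustrative; the socket path, the 32 MiB / 512-byte-block malloc sizing, the passthru names and UUIDs, and the bdev_raid_create flags are all taken verbatim from the log.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "malloc$i"                 # base disk
  $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
       -u "00000000-0000-0000-0000-00000000000$i"              # pt wrapper
done
$RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s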
00:21:58.187 11:32:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.445 11:32:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.445 "name": "raid_bdev1", 00:21:58.445 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:21:58.445 "strip_size_kb": 64, 00:21:58.445 "state": "online", 00:21:58.445 "raid_level": "raid5f", 00:21:58.445 "superblock": true, 00:21:58.445 "num_base_bdevs": 4, 00:21:58.445 "num_base_bdevs_discovered": 4, 00:21:58.445 "num_base_bdevs_operational": 4, 00:21:58.445 "base_bdevs_list": [ 00:21:58.445 { 00:21:58.445 "name": "pt1", 00:21:58.445 "uuid": "abb34573-322b-55f0-bb38-400152cfbcf1", 00:21:58.445 "is_configured": true, 00:21:58.445 "data_offset": 2048, 00:21:58.445 "data_size": 63488 00:21:58.445 }, 00:21:58.445 { 00:21:58.445 "name": "pt2", 00:21:58.445 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:21:58.445 "is_configured": true, 00:21:58.445 "data_offset": 2048, 00:21:58.446 "data_size": 63488 00:21:58.446 }, 00:21:58.446 { 00:21:58.446 "name": "pt3", 00:21:58.446 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:21:58.446 "is_configured": true, 00:21:58.446 "data_offset": 2048, 00:21:58.446 "data_size": 63488 00:21:58.446 }, 00:21:58.446 { 00:21:58.446 "name": "pt4", 00:21:58.446 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:21:58.446 "is_configured": true, 00:21:58.446 "data_offset": 2048, 00:21:58.446 "data_size": 63488 00:21:58.446 } 00:21:58.446 ] 00:21:58.446 }' 00:21:58.446 11:32:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.446 11:32:16 -- common/autotest_common.sh@10 -- # set +x 00:21:58.704 11:32:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:58.704 11:32:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:58.963 [2024-11-26 11:32:16.957809] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.963 11:32:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9a9a0242-2ceb-4d07-937e-476b6be3cbc2 00:21:58.963 11:32:16 -- bdev/bdev_raid.sh@380 -- # '[' -z 9a9a0242-2ceb-4d07-937e-476b6be3cbc2 ']' 00:21:58.963 11:32:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:58.963 [2024-11-26 11:32:17.141664] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.963 [2024-11-26 11:32:17.141692] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:58.963 [2024-11-26 11:32:17.141768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.963 [2024-11-26 11:32:17.141863] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:58.963 [2024-11-26 11:32:17.141898] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:21:58.963 11:32:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:58.963 11:32:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.222 11:32:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:59.222 11:32:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:59.222 11:32:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.222 11:32:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
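[annotation] Hedged sketch of the teardown that begins above: the superblock test deletes raid_bdev1, then strips each passthru so only the malloc bases, with their on-disk raid superblocks, remain for the re-assembly checks that follow. $RPC is the same illustrative shorthand as before; both RPC methods appear verbatim in the log.

$RPC bdev_raid_delete raid_bdev1
for pt in pt1 pt2 pt3 pt4; do
  $RPC bdev_passthru_delete "$pt"    # leaves malloc1..4 with superblocks intact
done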
00:21:59.480 11:32:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.480 11:32:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:59.480 11:32:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.480 11:32:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:59.739 11:32:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.739 11:32:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:59.997 11:32:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:59.997 11:32:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:00.256 11:32:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:00.256 11:32:18 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:00.256 11:32:18 -- common/autotest_common.sh@650 -- # local es=0 00:22:00.256 11:32:18 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:00.256 11:32:18 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.257 11:32:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.257 11:32:18 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.257 11:32:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.257 11:32:18 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.257 11:32:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.257 11:32:18 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:00.257 11:32:18 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:00.257 11:32:18 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:00.515 [2024-11-26 11:32:18.521975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:00.515 [2024-11-26 11:32:18.523875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:00.515 [2024-11-26 11:32:18.523931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:00.515 [2024-11-26 11:32:18.524137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:00.515 [2024-11-26 11:32:18.524231] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:00.515 [2024-11-26 11:32:18.524285] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:00.515 [2024-11-26 11:32:18.524312] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:00.515 
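[annotation] Hedged sketch of the negative check in flight here: recreating raid_bdev1 directly on the malloc bases must fail, because each base still carries the raid superblock written earlier. The test wraps the call in NOT() and expects the JSON-RPC error -17 ("File exists") shown just below; this condensed form inverts the exit status the same way.

if $RPC bdev_raid_create -z 64 -r raid5f \
     -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
  echo "bdev_raid_create unexpectedly succeeded" >&2
  exit 1
fi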
[2024-11-26 11:32:18.524359] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:22:00.515 [2024-11-26 11:32:18.524381] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.515 [2024-11-26 11:32:18.524405] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:22:00.515 request: 00:22:00.515 { 00:22:00.515 "name": "raid_bdev1", 00:22:00.515 "raid_level": "raid5f", 00:22:00.515 "base_bdevs": [ 00:22:00.515 "malloc1", 00:22:00.515 "malloc2", 00:22:00.515 "malloc3", 00:22:00.515 "malloc4" 00:22:00.515 ], 00:22:00.515 "superblock": false, 00:22:00.515 "strip_size_kb": 64, 00:22:00.515 "method": "bdev_raid_create", 00:22:00.515 "req_id": 1 00:22:00.515 } 00:22:00.515 Got JSON-RPC error response 00:22:00.515 response: 00:22:00.515 { 00:22:00.516 "code": -17, 00:22:00.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:00.516 } 00:22:00.516 11:32:18 -- common/autotest_common.sh@653 -- # es=1 00:22:00.516 11:32:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:00.516 11:32:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:00.516 11:32:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:00.516 11:32:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.516 11:32:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:00.774 [2024-11-26 11:32:18.950007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:00.774 [2024-11-26 11:32:18.950084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.774 [2024-11-26 11:32:18.950113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:22:00.774 [2024-11-26 11:32:18.950126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.774 [2024-11-26 11:32:18.952171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.774 [2024-11-26 11:32:18.952211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:00.774 [2024-11-26 11:32:18.952283] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:00.774 [2024-11-26 11:32:18.952326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:00.774 pt1 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.774 11:32:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.033 11:32:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.033 "name": "raid_bdev1", 00:22:01.033 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:01.033 "strip_size_kb": 64, 00:22:01.033 "state": "configuring", 00:22:01.033 "raid_level": "raid5f", 00:22:01.033 "superblock": true, 00:22:01.033 "num_base_bdevs": 4, 00:22:01.033 "num_base_bdevs_discovered": 1, 00:22:01.033 "num_base_bdevs_operational": 4, 00:22:01.033 "base_bdevs_list": [ 00:22:01.033 { 00:22:01.033 "name": "pt1", 00:22:01.033 "uuid": "abb34573-322b-55f0-bb38-400152cfbcf1", 00:22:01.033 "is_configured": true, 00:22:01.033 "data_offset": 2048, 00:22:01.033 "data_size": 63488 00:22:01.033 }, 00:22:01.033 { 00:22:01.033 "name": null, 00:22:01.033 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:01.033 "is_configured": false, 00:22:01.033 "data_offset": 2048, 00:22:01.033 "data_size": 63488 00:22:01.033 }, 00:22:01.033 { 00:22:01.033 "name": null, 00:22:01.033 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:01.033 "is_configured": false, 00:22:01.033 "data_offset": 2048, 00:22:01.033 "data_size": 63488 00:22:01.033 }, 00:22:01.033 { 00:22:01.033 "name": null, 00:22:01.033 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:01.033 "is_configured": false, 00:22:01.033 "data_offset": 2048, 00:22:01.033 "data_size": 63488 00:22:01.033 } 00:22:01.033 ] 00:22:01.033 }' 00:22:01.033 11:32:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.033 11:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.291 11:32:19 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:22:01.291 11:32:19 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:01.550 [2024-11-26 11:32:19.638268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:01.550 [2024-11-26 11:32:19.638363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.550 [2024-11-26 11:32:19.638395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:22:01.550 [2024-11-26 11:32:19.638408] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.550 [2024-11-26 11:32:19.638797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.550 [2024-11-26 11:32:19.638819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:01.550 [2024-11-26 11:32:19.638927] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:01.550 [2024-11-26 11:32:19.638957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:01.550 pt2 00:22:01.550 11:32:19 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:01.809 [2024-11-26 11:32:19.878297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.809 11:32:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.067 11:32:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.067 "name": "raid_bdev1", 00:22:02.067 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:02.067 "strip_size_kb": 64, 00:22:02.067 "state": "configuring", 00:22:02.067 "raid_level": "raid5f", 00:22:02.067 "superblock": true, 00:22:02.067 "num_base_bdevs": 4, 00:22:02.067 "num_base_bdevs_discovered": 1, 00:22:02.067 "num_base_bdevs_operational": 4, 00:22:02.067 "base_bdevs_list": [ 00:22:02.068 { 00:22:02.068 "name": "pt1", 00:22:02.068 "uuid": "abb34573-322b-55f0-bb38-400152cfbcf1", 00:22:02.068 "is_configured": true, 00:22:02.068 "data_offset": 2048, 00:22:02.068 "data_size": 63488 00:22:02.068 }, 00:22:02.068 { 00:22:02.068 "name": null, 00:22:02.068 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:02.068 "is_configured": false, 00:22:02.068 "data_offset": 2048, 00:22:02.068 "data_size": 63488 00:22:02.068 }, 00:22:02.068 { 00:22:02.068 "name": null, 00:22:02.068 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:02.068 "is_configured": false, 00:22:02.068 "data_offset": 2048, 00:22:02.068 "data_size": 63488 00:22:02.068 }, 00:22:02.068 { 00:22:02.068 "name": null, 00:22:02.068 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:02.068 "is_configured": false, 00:22:02.068 "data_offset": 2048, 00:22:02.068 "data_size": 63488 00:22:02.068 } 00:22:02.068 ] 00:22:02.068 }' 00:22:02.068 11:32:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.068 11:32:20 -- common/autotest_common.sh@10 -- # set +x 00:22:02.326 11:32:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:02.326 11:32:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:02.326 11:32:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.585 [2024-11-26 11:32:20.622500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.585 [2024-11-26 11:32:20.622567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.585 [2024-11-26 11:32:20.622593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:22:02.585 [2024-11-26 11:32:20.622609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.585 [2024-11-26 11:32:20.623034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.585 [2024-11-26 11:32:20.623062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:02.585 [2024-11-26 11:32:20.623133] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:22:02.585 [2024-11-26 11:32:20.623164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:02.585 pt2 00:22:02.585 11:32:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:02.585 11:32:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:02.585 11:32:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:02.585 [2024-11-26 11:32:20.814578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:02.585 [2024-11-26 11:32:20.814805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.585 [2024-11-26 11:32:20.814842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:22:02.585 [2024-11-26 11:32:20.814858] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.585 [2024-11-26 11:32:20.815284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.585 [2024-11-26 11:32:20.815311] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:02.585 [2024-11-26 11:32:20.815379] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:02.585 [2024-11-26 11:32:20.815416] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:02.585 pt3 00:22:02.844 11:32:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:02.844 11:32:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:02.844 11:32:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:02.844 [2024-11-26 11:32:21.005416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:02.844 [2024-11-26 11:32:21.005496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.844 [2024-11-26 11:32:21.005540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:22:02.844 [2024-11-26 11:32:21.005575] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.844 [2024-11-26 11:32:21.006103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.844 [2024-11-26 11:32:21.006147] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:02.844 [2024-11-26 11:32:21.006231] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:02.844 [2024-11-26 11:32:21.006277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:02.844 [2024-11-26 11:32:21.006481] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:22:02.844 [2024-11-26 11:32:21.006504] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:02.844 [2024-11-26 11:32:21.006591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:22:02.844 [2024-11-26 11:32:21.007554] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:22:02.844 [2024-11-26 11:32:21.007584] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:22:02.844 [2024-11-26 11:32:21.007732] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:22:02.844 pt4 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.844 11:32:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.103 11:32:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.103 "name": "raid_bdev1", 00:22:03.103 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:03.103 "strip_size_kb": 64, 00:22:03.103 "state": "online", 00:22:03.103 "raid_level": "raid5f", 00:22:03.103 "superblock": true, 00:22:03.103 "num_base_bdevs": 4, 00:22:03.103 "num_base_bdevs_discovered": 4, 00:22:03.103 "num_base_bdevs_operational": 4, 00:22:03.103 "base_bdevs_list": [ 00:22:03.103 { 00:22:03.103 "name": "pt1", 00:22:03.103 "uuid": "abb34573-322b-55f0-bb38-400152cfbcf1", 00:22:03.103 "is_configured": true, 00:22:03.103 "data_offset": 2048, 00:22:03.103 "data_size": 63488 00:22:03.103 }, 00:22:03.103 { 00:22:03.103 "name": "pt2", 00:22:03.103 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:03.103 "is_configured": true, 00:22:03.103 "data_offset": 2048, 00:22:03.103 "data_size": 63488 00:22:03.103 }, 00:22:03.103 { 00:22:03.103 "name": "pt3", 00:22:03.103 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:03.103 "is_configured": true, 00:22:03.103 "data_offset": 2048, 00:22:03.103 "data_size": 63488 00:22:03.103 }, 00:22:03.103 { 00:22:03.103 "name": "pt4", 00:22:03.103 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:03.103 "is_configured": true, 00:22:03.103 "data_offset": 2048, 00:22:03.103 "data_size": 63488 00:22:03.103 } 00:22:03.103 ] 00:22:03.103 }' 00:22:03.103 11:32:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.103 11:32:21 -- common/autotest_common.sh@10 -- # set +x 00:22:03.361 11:32:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:03.361 11:32:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:03.620 [2024-11-26 11:32:21.785663] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.620 11:32:21 -- bdev/bdev_raid.sh@430 -- # '[' 9a9a0242-2ceb-4d07-937e-476b6be3cbc2 '!=' 9a9a0242-2ceb-4d07-937e-476b6be3cbc2 ']' 00:22:03.620 11:32:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:22:03.620 11:32:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:03.620 11:32:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:03.620 11:32:21 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:22:03.878 [2024-11-26 11:32:21.977190] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.878 11:32:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.137 11:32:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.137 "name": "raid_bdev1", 00:22:04.137 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:04.137 "strip_size_kb": 64, 00:22:04.137 "state": "online", 00:22:04.137 "raid_level": "raid5f", 00:22:04.137 "superblock": true, 00:22:04.137 "num_base_bdevs": 4, 00:22:04.137 "num_base_bdevs_discovered": 3, 00:22:04.137 "num_base_bdevs_operational": 3, 00:22:04.137 "base_bdevs_list": [ 00:22:04.137 { 00:22:04.137 "name": null, 00:22:04.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.137 "is_configured": false, 00:22:04.137 "data_offset": 2048, 00:22:04.137 "data_size": 63488 00:22:04.137 }, 00:22:04.137 { 00:22:04.137 "name": "pt2", 00:22:04.137 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:04.137 "is_configured": true, 00:22:04.137 "data_offset": 2048, 00:22:04.137 "data_size": 63488 00:22:04.137 }, 00:22:04.137 { 00:22:04.137 "name": "pt3", 00:22:04.137 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:04.137 "is_configured": true, 00:22:04.137 "data_offset": 2048, 00:22:04.137 "data_size": 63488 00:22:04.137 }, 00:22:04.137 { 00:22:04.137 "name": "pt4", 00:22:04.137 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:04.137 "is_configured": true, 00:22:04.137 "data_offset": 2048, 00:22:04.137 "data_size": 63488 00:22:04.137 } 00:22:04.137 ] 00:22:04.137 }' 00:22:04.137 11:32:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.137 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:22:04.396 11:32:22 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:04.655 [2024-11-26 11:32:22.737307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.655 [2024-11-26 11:32:22.737341] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.655 [2024-11-26 11:32:22.737410] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.655 [2024-11-26 11:32:22.737492] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:04.655 [2024-11-26 11:32:22.737504] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:22:04.655 11:32:22 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.655 11:32:22 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:04.913 11:32:22 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:04.913 11:32:22 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:04.913 11:32:22 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:04.913 11:32:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:04.913 11:32:22 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:05.172 11:32:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:05.172 11:32:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:05.172 11:32:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:05.172 11:32:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:05.172 11:32:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:05.172 11:32:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:05.431 11:32:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:05.431 11:32:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:05.431 11:32:23 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:05.431 11:32:23 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:05.431 11:32:23 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:05.690 [2024-11-26 11:32:23.773547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:05.690 [2024-11-26 11:32:23.773608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.690 [2024-11-26 11:32:23.773636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:22:05.690 [2024-11-26 11:32:23.773649] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.690 [2024-11-26 11:32:23.775810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.690 [2024-11-26 11:32:23.775850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:05.690 [2024-11-26 11:32:23.775957] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:05.690 [2024-11-26 11:32:23.776021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:05.690 pt2 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.690 11:32:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.949 11:32:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.949 "name": "raid_bdev1", 00:22:05.949 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:05.949 "strip_size_kb": 64, 00:22:05.949 "state": "configuring", 00:22:05.949 "raid_level": "raid5f", 00:22:05.949 "superblock": true, 00:22:05.949 "num_base_bdevs": 4, 00:22:05.949 "num_base_bdevs_discovered": 1, 00:22:05.949 "num_base_bdevs_operational": 3, 00:22:05.949 "base_bdevs_list": [ 00:22:05.949 { 00:22:05.949 "name": null, 00:22:05.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.949 "is_configured": false, 00:22:05.949 "data_offset": 2048, 00:22:05.949 "data_size": 63488 00:22:05.949 }, 00:22:05.949 { 00:22:05.949 "name": "pt2", 00:22:05.949 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:05.949 "is_configured": true, 00:22:05.949 "data_offset": 2048, 00:22:05.949 "data_size": 63488 00:22:05.949 }, 00:22:05.949 { 00:22:05.949 "name": null, 00:22:05.949 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:05.949 "is_configured": false, 00:22:05.949 "data_offset": 2048, 00:22:05.949 "data_size": 63488 00:22:05.949 }, 00:22:05.949 { 00:22:05.949 "name": null, 00:22:05.949 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:05.949 "is_configured": false, 00:22:05.949 "data_offset": 2048, 00:22:05.949 "data_size": 63488 00:22:05.949 } 00:22:05.949 ] 00:22:05.949 }' 00:22:05.949 11:32:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.949 11:32:23 -- common/autotest_common.sh@10 -- # set +x 00:22:06.207 11:32:24 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:06.207 11:32:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:06.207 11:32:24 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:06.467 [2024-11-26 11:32:24.501729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:06.467 [2024-11-26 11:32:24.501798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.467 [2024-11-26 11:32:24.501831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:22:06.467 [2024-11-26 11:32:24.501843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.467 [2024-11-26 11:32:24.502318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.467 [2024-11-26 11:32:24.502348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:06.467 [2024-11-26 11:32:24.502461] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:06.467 [2024-11-26 11:32:24.502495] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:06.467 pt3 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.467 11:32:24 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.467 "name": "raid_bdev1", 00:22:06.467 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:06.467 "strip_size_kb": 64, 00:22:06.467 "state": "configuring", 00:22:06.467 "raid_level": "raid5f", 00:22:06.467 "superblock": true, 00:22:06.467 "num_base_bdevs": 4, 00:22:06.467 "num_base_bdevs_discovered": 2, 00:22:06.467 "num_base_bdevs_operational": 3, 00:22:06.467 "base_bdevs_list": [ 00:22:06.467 { 00:22:06.467 "name": null, 00:22:06.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.467 "is_configured": false, 00:22:06.467 "data_offset": 2048, 00:22:06.467 "data_size": 63488 00:22:06.467 }, 00:22:06.467 { 00:22:06.467 "name": "pt2", 00:22:06.467 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:06.467 "is_configured": true, 00:22:06.467 "data_offset": 2048, 00:22:06.467 "data_size": 63488 00:22:06.467 }, 00:22:06.467 { 00:22:06.467 "name": "pt3", 00:22:06.467 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:06.467 "is_configured": true, 00:22:06.467 "data_offset": 2048, 00:22:06.467 "data_size": 63488 00:22:06.467 }, 00:22:06.467 { 00:22:06.467 "name": null, 00:22:06.467 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:06.467 "is_configured": false, 00:22:06.467 "data_offset": 2048, 00:22:06.467 "data_size": 63488 00:22:06.467 } 00:22:06.467 ] 00:22:06.467 }' 00:22:06.467 11:32:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.467 11:32:24 -- common/autotest_common.sh@10 -- # set +x 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@462 -- # i=3 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:07.036 [2024-11-26 11:32:25.245863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:07.036 [2024-11-26 11:32:25.245938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.036 [2024-11-26 11:32:25.245972] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:22:07.036 [2024-11-26 11:32:25.245992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.036 [2024-11-26 11:32:25.246378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.036 [2024-11-26 11:32:25.246400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:07.036 [2024-11-26 11:32:25.246468] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:07.036 [2024-11-26 11:32:25.246493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:07.036 [2024-11-26 11:32:25.246614] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:22:07.036 
[2024-11-26 11:32:25.246627] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:07.036 [2024-11-26 11:32:25.246690] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:07.036 [2024-11-26 11:32:25.247447] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:22:07.036 [2024-11-26 11:32:25.247475] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:22:07.036 [2024-11-26 11:32:25.247714] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.036 pt4 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.036 11:32:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.295 11:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.295 "name": "raid_bdev1", 00:22:07.295 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:07.295 "strip_size_kb": 64, 00:22:07.295 "state": "online", 00:22:07.295 "raid_level": "raid5f", 00:22:07.295 "superblock": true, 00:22:07.295 "num_base_bdevs": 4, 00:22:07.295 "num_base_bdevs_discovered": 3, 00:22:07.295 "num_base_bdevs_operational": 3, 00:22:07.295 "base_bdevs_list": [ 00:22:07.295 { 00:22:07.295 "name": null, 00:22:07.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.295 "is_configured": false, 00:22:07.295 "data_offset": 2048, 00:22:07.295 "data_size": 63488 00:22:07.295 }, 00:22:07.295 { 00:22:07.295 "name": "pt2", 00:22:07.295 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:07.295 "is_configured": true, 00:22:07.295 "data_offset": 2048, 00:22:07.295 "data_size": 63488 00:22:07.295 }, 00:22:07.295 { 00:22:07.295 "name": "pt3", 00:22:07.295 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:07.295 "is_configured": true, 00:22:07.295 "data_offset": 2048, 00:22:07.295 "data_size": 63488 00:22:07.295 }, 00:22:07.295 { 00:22:07.295 "name": "pt4", 00:22:07.295 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:07.295 "is_configured": true, 00:22:07.295 "data_offset": 2048, 00:22:07.295 "data_size": 63488 00:22:07.295 } 00:22:07.295 ] 00:22:07.295 }' 00:22:07.295 11:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.295 11:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 11:32:25 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:22:07.554 11:32:25 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:07.813 [2024-11-26 11:32:26.014059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.813 
[2024-11-26 11:32:26.014091] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.813 [2024-11-26 11:32:26.014179] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.813 [2024-11-26 11:32:26.014250] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.813 [2024-11-26 11:32:26.014266] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:22:07.813 11:32:26 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.813 11:32:26 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:08.072 11:32:26 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:08.072 11:32:26 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:08.072 11:32:26 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:08.331 [2024-11-26 11:32:26.426141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:08.331 [2024-11-26 11:32:26.426221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.331 [2024-11-26 11:32:26.426265] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:22:08.331 [2024-11-26 11:32:26.426279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.331 [2024-11-26 11:32:26.428417] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.331 [2024-11-26 11:32:26.428460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:08.331 [2024-11-26 11:32:26.428533] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:08.331 [2024-11-26 11:32:26.428583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:08.331 pt1 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.331 11:32:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.589 11:32:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.589 "name": "raid_bdev1", 00:22:08.589 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:08.589 "strip_size_kb": 64, 00:22:08.589 "state": "configuring", 00:22:08.589 "raid_level": "raid5f", 00:22:08.589 "superblock": true, 00:22:08.589 "num_base_bdevs": 4, 00:22:08.589 "num_base_bdevs_discovered": 1, 00:22:08.589 
"num_base_bdevs_operational": 4, 00:22:08.589 "base_bdevs_list": [ 00:22:08.589 { 00:22:08.589 "name": "pt1", 00:22:08.589 "uuid": "abb34573-322b-55f0-bb38-400152cfbcf1", 00:22:08.589 "is_configured": true, 00:22:08.589 "data_offset": 2048, 00:22:08.589 "data_size": 63488 00:22:08.589 }, 00:22:08.589 { 00:22:08.589 "name": null, 00:22:08.589 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:08.589 "is_configured": false, 00:22:08.589 "data_offset": 2048, 00:22:08.589 "data_size": 63488 00:22:08.589 }, 00:22:08.589 { 00:22:08.589 "name": null, 00:22:08.589 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:08.589 "is_configured": false, 00:22:08.589 "data_offset": 2048, 00:22:08.589 "data_size": 63488 00:22:08.589 }, 00:22:08.589 { 00:22:08.589 "name": null, 00:22:08.589 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:08.589 "is_configured": false, 00:22:08.589 "data_offset": 2048, 00:22:08.589 "data_size": 63488 00:22:08.589 } 00:22:08.589 ] 00:22:08.590 }' 00:22:08.590 11:32:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.590 11:32:26 -- common/autotest_common.sh@10 -- # set +x 00:22:08.849 11:32:26 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:08.849 11:32:26 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:08.849 11:32:26 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:09.107 11:32:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:09.107 11:32:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:09.107 11:32:27 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@489 -- # i=3 00:22:09.366 11:32:27 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:09.624 [2024-11-26 11:32:27.770498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:09.625 [2024-11-26 11:32:27.770563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.625 [2024-11-26 11:32:27.770589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:22:09.625 [2024-11-26 11:32:27.770606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.625 [2024-11-26 11:32:27.771027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.625 [2024-11-26 11:32:27.771054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:09.625 [2024-11-26 11:32:27.771120] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:09.625 [2024-11-26 11:32:27.771139] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:09.625 [2024-11-26 11:32:27.771150] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:09.625 [2024-11-26 
11:32:27.771193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 00:22:09.625 [2024-11-26 11:32:27.771243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:09.625 pt4 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.625 11:32:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.883 11:32:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.883 "name": "raid_bdev1", 00:22:09.883 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:09.883 "strip_size_kb": 64, 00:22:09.883 "state": "configuring", 00:22:09.883 "raid_level": "raid5f", 00:22:09.883 "superblock": true, 00:22:09.883 "num_base_bdevs": 4, 00:22:09.883 "num_base_bdevs_discovered": 1, 00:22:09.883 "num_base_bdevs_operational": 3, 00:22:09.883 "base_bdevs_list": [ 00:22:09.883 { 00:22:09.883 "name": null, 00:22:09.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.883 "is_configured": false, 00:22:09.883 "data_offset": 2048, 00:22:09.883 "data_size": 63488 00:22:09.883 }, 00:22:09.883 { 00:22:09.883 "name": null, 00:22:09.883 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:09.883 "is_configured": false, 00:22:09.883 "data_offset": 2048, 00:22:09.883 "data_size": 63488 00:22:09.883 }, 00:22:09.883 { 00:22:09.883 "name": null, 00:22:09.883 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:09.883 "is_configured": false, 00:22:09.883 "data_offset": 2048, 00:22:09.883 "data_size": 63488 00:22:09.883 }, 00:22:09.883 { 00:22:09.883 "name": "pt4", 00:22:09.883 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:09.883 "is_configured": true, 00:22:09.883 "data_offset": 2048, 00:22:09.883 "data_size": 63488 00:22:09.883 } 00:22:09.883 ] 00:22:09.883 }' 00:22:09.883 11:32:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.883 11:32:28 -- common/autotest_common.sh@10 -- # set +x 00:22:10.142 11:32:28 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:10.142 11:32:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:10.142 11:32:28 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:10.401 [2024-11-26 11:32:28.550698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:10.401 [2024-11-26 11:32:28.550935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.401 [2024-11-26 11:32:28.550978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 
00:22:10.401 [2024-11-26 11:32:28.550992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.401 [2024-11-26 11:32:28.551398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.401 [2024-11-26 11:32:28.551435] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:10.401 [2024-11-26 11:32:28.551504] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:10.401 [2024-11-26 11:32:28.551533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:10.401 pt2 00:22:10.401 11:32:28 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:10.401 11:32:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:10.401 11:32:28 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:10.660 [2024-11-26 11:32:28.794765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:10.660 [2024-11-26 11:32:28.794814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.660 [2024-11-26 11:32:28.794849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:22:10.660 [2024-11-26 11:32:28.794861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.660 [2024-11-26 11:32:28.795280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.660 [2024-11-26 11:32:28.795309] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:10.660 [2024-11-26 11:32:28.795375] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:10.660 [2024-11-26 11:32:28.795401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:10.660 [2024-11-26 11:32:28.795546] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:22:10.660 [2024-11-26 11:32:28.795560] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:10.660 [2024-11-26 11:32:28.795660] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:10.660 [2024-11-26 11:32:28.796578] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:22:10.660 [2024-11-26 11:32:28.796768] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:22:10.660 [2024-11-26 11:32:28.797080] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.660 pt3 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.660 11:32:28 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.660 11:32:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.918 11:32:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.918 "name": "raid_bdev1", 00:22:10.918 "uuid": "9a9a0242-2ceb-4d07-937e-476b6be3cbc2", 00:22:10.918 "strip_size_kb": 64, 00:22:10.918 "state": "online", 00:22:10.918 "raid_level": "raid5f", 00:22:10.918 "superblock": true, 00:22:10.918 "num_base_bdevs": 4, 00:22:10.918 "num_base_bdevs_discovered": 3, 00:22:10.918 "num_base_bdevs_operational": 3, 00:22:10.918 "base_bdevs_list": [ 00:22:10.918 { 00:22:10.918 "name": null, 00:22:10.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.918 "is_configured": false, 00:22:10.918 "data_offset": 2048, 00:22:10.918 "data_size": 63488 00:22:10.918 }, 00:22:10.918 { 00:22:10.918 "name": "pt2", 00:22:10.918 "uuid": "572ae815-d5c4-5ee8-a356-bb74083c08e4", 00:22:10.918 "is_configured": true, 00:22:10.918 "data_offset": 2048, 00:22:10.918 "data_size": 63488 00:22:10.918 }, 00:22:10.918 { 00:22:10.918 "name": "pt3", 00:22:10.918 "uuid": "bd10df25-a2c2-54c8-ae82-5d9ae4938382", 00:22:10.918 "is_configured": true, 00:22:10.918 "data_offset": 2048, 00:22:10.918 "data_size": 63488 00:22:10.918 }, 00:22:10.918 { 00:22:10.918 "name": "pt4", 00:22:10.918 "uuid": "c52f842c-5c81-596c-b9f8-b9afea386492", 00:22:10.918 "is_configured": true, 00:22:10.918 "data_offset": 2048, 00:22:10.918 "data_size": 63488 00:22:10.918 } 00:22:10.919 ] 00:22:10.919 }' 00:22:10.919 11:32:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.919 11:32:29 -- common/autotest_common.sh@10 -- # set +x 00:22:11.176 11:32:29 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:11.176 11:32:29 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:11.434 [2024-11-26 11:32:29.495062] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.434 11:32:29 -- bdev/bdev_raid.sh@506 -- # '[' 9a9a0242-2ceb-4d07-937e-476b6be3cbc2 '!=' 9a9a0242-2ceb-4d07-937e-476b6be3cbc2 ']' 00:22:11.434 11:32:29 -- bdev/bdev_raid.sh@511 -- # killprocess 95153 00:22:11.434 11:32:29 -- common/autotest_common.sh@936 -- # '[' -z 95153 ']' 00:22:11.434 11:32:29 -- common/autotest_common.sh@940 -- # kill -0 95153 00:22:11.434 11:32:29 -- common/autotest_common.sh@941 -- # uname 00:22:11.434 11:32:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:11.434 11:32:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95153 00:22:11.434 killing process with pid 95153 00:22:11.434 11:32:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:11.434 11:32:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:11.434 11:32:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95153' 00:22:11.434 11:32:29 -- common/autotest_common.sh@955 -- # kill 95153 00:22:11.434 [2024-11-26 11:32:29.543199] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.434 11:32:29 -- common/autotest_common.sh@960 -- # wait 95153 00:22:11.434 [2024-11-26 11:32:29.543325] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.434 [2024-11-26 11:32:29.543397] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.434 [2024-11-26 11:32:29.543412] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:22:11.434 [2024-11-26 11:32:29.569959] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:11.694 ************************************ 00:22:11.694 END TEST raid5f_superblock_test 00:22:11.694 ************************************ 00:22:11.694 11:32:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:11.694 00:22:11.694 real 0m16.353s 00:22:11.694 user 0m29.302s 00:22:11.694 sys 0m2.502s 00:22:11.694 11:32:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.694 11:32:29 -- common/autotest_common.sh@10 -- # set +x 00:22:11.694 11:32:29 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:22:11.694 11:32:29 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:22:11.695 11:32:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:11.695 11:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.695 11:32:29 -- common/autotest_common.sh@10 -- # set +x 00:22:11.695 ************************************ 00:22:11.695 START TEST raid5f_rebuild_test 00:22:11.695 ************************************ 00:22:11.695 11:32:29 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:11.695 11:32:29 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:11.696 11:32:29 -- bdev/bdev_raid.sh@534 -- # 
create_arg+=' -z 64' 00:22:11.696 11:32:29 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:11.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:11.696 11:32:29 -- bdev/bdev_raid.sh@544 -- # raid_pid=95740 00:22:11.696 11:32:29 -- bdev/bdev_raid.sh@545 -- # waitforlisten 95740 /var/tmp/spdk-raid.sock 00:22:11.696 11:32:29 -- common/autotest_common.sh@829 -- # '[' -z 95740 ']' 00:22:11.696 11:32:29 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:11.696 11:32:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:11.696 11:32:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.696 11:32:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:11.696 11:32:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.696 11:32:29 -- common/autotest_common.sh@10 -- # set +x 00:22:11.696 [2024-11-26 11:32:29.850448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:11.696 [2024-11-26 11:32:29.850857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95740 ] 00:22:11.696 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:11.696 Zero copy mechanism will not be used. 00:22:11.955 [2024-11-26 11:32:30.017204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.955 [2024-11-26 11:32:30.059723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.955 [2024-11-26 11:32:30.100487] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.522 11:32:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.522 11:32:30 -- common/autotest_common.sh@862 -- # return 0 00:22:12.522 11:32:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:12.522 11:32:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:12.522 11:32:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:12.779 BaseBdev1 00:22:12.779 11:32:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:12.779 11:32:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:12.779 11:32:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:13.038 BaseBdev2 00:22:13.038 11:32:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:13.038 11:32:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:13.038 11:32:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:13.296 BaseBdev3 00:22:13.296 11:32:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:13.296 11:32:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:13.296 11:32:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:13.554 BaseBdev4 00:22:13.554 11:32:31 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:13.554 spare_malloc 00:22:13.555 11:32:31 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:13.813 spare_delay 00:22:13.813 11:32:31 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:14.072 [2024-11-26 11:32:32.112120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:14.072 [2024-11-26 11:32:32.112335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.072 [2024-11-26 11:32:32.112399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:22:14.072 [2024-11-26 11:32:32.112417] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.072 [2024-11-26 11:32:32.114698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.072 [2024-11-26 11:32:32.114743] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:14.072 spare 00:22:14.072 11:32:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:14.072 [2024-11-26 11:32:32.292191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:14.072 [2024-11-26 11:32:32.294111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:14.072 [2024-11-26 11:32:32.294164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:14.072 [2024-11-26 11:32:32.294207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:14.072 [2024-11-26 11:32:32.294285] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:22:14.072 [2024-11-26 11:32:32.294307] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:14.072 [2024-11-26 11:32:32.294412] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:22:14.072 [2024-11-26 11:32:32.295145] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:22:14.072 [2024-11-26 11:32:32.295182] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:22:14.072 [2024-11-26 11:32:32.295345] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.072 11:32:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:14.072 11:32:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.072 11:32:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:14.072 11:32:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.331 11:32:32 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.331 "name": "raid_bdev1", 00:22:14.331 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:14.331 "strip_size_kb": 64, 00:22:14.331 "state": "online", 00:22:14.331 "raid_level": "raid5f", 00:22:14.331 "superblock": false, 00:22:14.331 "num_base_bdevs": 4, 00:22:14.331 "num_base_bdevs_discovered": 4, 00:22:14.331 "num_base_bdevs_operational": 4, 00:22:14.331 "base_bdevs_list": [ 00:22:14.331 { 00:22:14.331 "name": "BaseBdev1", 00:22:14.331 "uuid": "4acc2860-27a6-4d8b-b616-24ddcd413bca", 00:22:14.331 "is_configured": true, 00:22:14.331 "data_offset": 0, 00:22:14.331 "data_size": 65536 00:22:14.331 }, 00:22:14.331 { 00:22:14.331 "name": "BaseBdev2", 00:22:14.331 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:14.331 "is_configured": true, 00:22:14.331 "data_offset": 0, 00:22:14.331 "data_size": 65536 00:22:14.331 }, 00:22:14.331 { 00:22:14.331 "name": "BaseBdev3", 00:22:14.331 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:14.331 "is_configured": true, 00:22:14.331 "data_offset": 0, 00:22:14.331 "data_size": 65536 00:22:14.331 }, 00:22:14.331 { 00:22:14.331 "name": "BaseBdev4", 00:22:14.331 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:14.331 "is_configured": true, 00:22:14.331 "data_offset": 0, 00:22:14.331 "data_size": 65536 00:22:14.331 } 00:22:14.331 ] 00:22:14.331 }' 00:22:14.331 11:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.331 11:32:32 -- common/autotest_common.sh@10 -- # set +x 00:22:14.589 11:32:32 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:14.589 11:32:32 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:14.847 [2024-11-26 11:32:32.969130] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.847 11:32:32 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:22:14.847 11:32:32 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.847 11:32:32 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:15.105 11:32:33 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:15.105 11:32:33 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:15.105 11:32:33 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:15.105 11:32:33 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@12 -- # local i 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:15.105 11:32:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:15.364 [2024-11-26 
11:32:33.397121] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:15.364 /dev/nbd0 00:22:15.364 11:32:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:15.364 11:32:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:15.364 11:32:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:15.364 11:32:33 -- common/autotest_common.sh@867 -- # local i 00:22:15.364 11:32:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:15.364 11:32:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:15.364 11:32:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:15.364 11:32:33 -- common/autotest_common.sh@871 -- # break 00:22:15.364 11:32:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:15.364 11:32:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:15.364 11:32:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.364 1+0 records in 00:22:15.364 1+0 records out 00:22:15.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270752 s, 15.1 MB/s 00:22:15.364 11:32:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.364 11:32:33 -- common/autotest_common.sh@884 -- # size=4096 00:22:15.364 11:32:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.364 11:32:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:15.364 11:32:33 -- common/autotest_common.sh@887 -- # return 0 00:22:15.364 11:32:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.364 11:32:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:15.364 11:32:33 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:15.364 11:32:33 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:22:15.364 11:32:33 -- bdev/bdev_raid.sh@582 -- # echo 192 00:22:15.364 11:32:33 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:22:15.931 512+0 records in 00:22:15.931 512+0 records out 00:22:15.931 100663296 bytes (101 MB, 96 MiB) copied, 0.466412 s, 216 MB/s 00:22:15.931 11:32:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:15.931 11:32:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:15.931 11:32:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:15.931 11:32:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:15.931 11:32:33 -- bdev/nbd_common.sh@51 -- # local i 00:22:15.931 11:32:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.931 11:32:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:15.931 [2024-11-26 11:32:34.094026] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@41 -- # break 00:22:15.931 11:32:34 -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.931 11:32:34 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:16.190 [2024-11-26 11:32:34.285090] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.190 11:32:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.449 11:32:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.449 "name": "raid_bdev1", 00:22:16.449 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:16.449 "strip_size_kb": 64, 00:22:16.449 "state": "online", 00:22:16.449 "raid_level": "raid5f", 00:22:16.449 "superblock": false, 00:22:16.449 "num_base_bdevs": 4, 00:22:16.449 "num_base_bdevs_discovered": 3, 00:22:16.449 "num_base_bdevs_operational": 3, 00:22:16.449 "base_bdevs_list": [ 00:22:16.449 { 00:22:16.449 "name": null, 00:22:16.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.449 "is_configured": false, 00:22:16.449 "data_offset": 0, 00:22:16.449 "data_size": 65536 00:22:16.449 }, 00:22:16.449 { 00:22:16.449 "name": "BaseBdev2", 00:22:16.449 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:16.449 "is_configured": true, 00:22:16.449 "data_offset": 0, 00:22:16.449 "data_size": 65536 00:22:16.449 }, 00:22:16.449 { 00:22:16.449 "name": "BaseBdev3", 00:22:16.449 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:16.449 "is_configured": true, 00:22:16.449 "data_offset": 0, 00:22:16.449 "data_size": 65536 00:22:16.449 }, 00:22:16.449 { 00:22:16.449 "name": "BaseBdev4", 00:22:16.449 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:16.449 "is_configured": true, 00:22:16.449 "data_offset": 0, 00:22:16.449 "data_size": 65536 00:22:16.449 } 00:22:16.449 ] 00:22:16.449 }' 00:22:16.449 11:32:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.449 11:32:34 -- common/autotest_common.sh@10 -- # set +x 00:22:16.708 11:32:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:16.967 [2024-11-26 11:32:35.025293] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:16.967 [2024-11-26 11:32:35.025345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:16.967 [2024-11-26 11:32:35.027566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:22:16.967 [2024-11-26 11:32:35.029816] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:16.967 11:32:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:17.964 11:32:36 -- bdev/bdev_raid.sh@601 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.964 11:32:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:17.964 11:32:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:17.964 11:32:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:17.964 11:32:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:17.965 11:32:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.965 11:32:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.247 11:32:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:18.247 "name": "raid_bdev1", 00:22:18.247 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:18.247 "strip_size_kb": 64, 00:22:18.247 "state": "online", 00:22:18.247 "raid_level": "raid5f", 00:22:18.247 "superblock": false, 00:22:18.247 "num_base_bdevs": 4, 00:22:18.247 "num_base_bdevs_discovered": 4, 00:22:18.247 "num_base_bdevs_operational": 4, 00:22:18.247 "process": { 00:22:18.247 "type": "rebuild", 00:22:18.247 "target": "spare", 00:22:18.247 "progress": { 00:22:18.247 "blocks": 23040, 00:22:18.247 "percent": 11 00:22:18.247 } 00:22:18.247 }, 00:22:18.247 "base_bdevs_list": [ 00:22:18.247 { 00:22:18.247 "name": "spare", 00:22:18.247 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:18.247 "is_configured": true, 00:22:18.247 "data_offset": 0, 00:22:18.247 "data_size": 65536 00:22:18.247 }, 00:22:18.247 { 00:22:18.247 "name": "BaseBdev2", 00:22:18.247 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:18.247 "is_configured": true, 00:22:18.247 "data_offset": 0, 00:22:18.247 "data_size": 65536 00:22:18.247 }, 00:22:18.247 { 00:22:18.247 "name": "BaseBdev3", 00:22:18.247 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:18.247 "is_configured": true, 00:22:18.247 "data_offset": 0, 00:22:18.247 "data_size": 65536 00:22:18.247 }, 00:22:18.247 { 00:22:18.247 "name": "BaseBdev4", 00:22:18.247 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:18.247 "is_configured": true, 00:22:18.247 "data_offset": 0, 00:22:18.247 "data_size": 65536 00:22:18.247 } 00:22:18.247 ] 00:22:18.247 }' 00:22:18.247 11:32:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:18.247 11:32:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.247 11:32:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:18.247 11:32:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:18.247 11:32:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:18.505 [2024-11-26 11:32:36.539135] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:18.505 [2024-11-26 11:32:36.539335] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:18.506 [2024-11-26 11:32:36.539461] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.506 11:32:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.764 "name": "raid_bdev1", 00:22:18.764 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:18.764 "strip_size_kb": 64, 00:22:18.764 "state": "online", 00:22:18.764 "raid_level": "raid5f", 00:22:18.764 "superblock": false, 00:22:18.764 "num_base_bdevs": 4, 00:22:18.764 "num_base_bdevs_discovered": 3, 00:22:18.764 "num_base_bdevs_operational": 3, 00:22:18.764 "base_bdevs_list": [ 00:22:18.764 { 00:22:18.764 "name": null, 00:22:18.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.764 "is_configured": false, 00:22:18.764 "data_offset": 0, 00:22:18.764 "data_size": 65536 00:22:18.764 }, 00:22:18.764 { 00:22:18.764 "name": "BaseBdev2", 00:22:18.764 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 0, 00:22:18.764 "data_size": 65536 00:22:18.764 }, 00:22:18.764 { 00:22:18.764 "name": "BaseBdev3", 00:22:18.764 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 0, 00:22:18.764 "data_size": 65536 00:22:18.764 }, 00:22:18.764 { 00:22:18.764 "name": "BaseBdev4", 00:22:18.764 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 0, 00:22:18.764 "data_size": 65536 00:22:18.764 } 00:22:18.764 ] 00:22:18.764 }' 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.764 11:32:36 -- common/autotest_common.sh@10 -- # set +x 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:18.764 11:32:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.764 11:32:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.765 11:32:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.024 11:32:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:19.024 "name": "raid_bdev1", 00:22:19.024 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:19.024 "strip_size_kb": 64, 00:22:19.024 "state": "online", 00:22:19.024 "raid_level": "raid5f", 00:22:19.024 "superblock": false, 00:22:19.024 "num_base_bdevs": 4, 00:22:19.024 "num_base_bdevs_discovered": 3, 00:22:19.024 "num_base_bdevs_operational": 3, 00:22:19.024 "base_bdevs_list": [ 00:22:19.024 { 00:22:19.024 "name": null, 00:22:19.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.024 "is_configured": false, 00:22:19.024 "data_offset": 0, 00:22:19.024 "data_size": 65536 00:22:19.024 }, 00:22:19.024 { 00:22:19.024 "name": "BaseBdev2", 00:22:19.024 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:19.024 "is_configured": true, 
00:22:19.024 "data_offset": 0, 00:22:19.024 "data_size": 65536 00:22:19.024 }, 00:22:19.024 { 00:22:19.024 "name": "BaseBdev3", 00:22:19.024 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:19.024 "is_configured": true, 00:22:19.024 "data_offset": 0, 00:22:19.024 "data_size": 65536 00:22:19.024 }, 00:22:19.024 { 00:22:19.024 "name": "BaseBdev4", 00:22:19.024 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:19.024 "is_configured": true, 00:22:19.024 "data_offset": 0, 00:22:19.024 "data_size": 65536 00:22:19.024 } 00:22:19.024 ] 00:22:19.024 }' 00:22:19.024 11:32:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:19.024 11:32:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:19.024 11:32:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:19.024 11:32:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:19.024 11:32:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:19.282 [2024-11-26 11:32:37.368124] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:19.282 [2024-11-26 11:32:37.368191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.282 [2024-11-26 11:32:37.370451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b0d0 00:22:19.282 [2024-11-26 11:32:37.372553] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.282 11:32:37 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.218 11:32:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.477 11:32:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.478 "name": "raid_bdev1", 00:22:20.478 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:20.478 "strip_size_kb": 64, 00:22:20.478 "state": "online", 00:22:20.478 "raid_level": "raid5f", 00:22:20.478 "superblock": false, 00:22:20.478 "num_base_bdevs": 4, 00:22:20.478 "num_base_bdevs_discovered": 4, 00:22:20.478 "num_base_bdevs_operational": 4, 00:22:20.478 "process": { 00:22:20.478 "type": "rebuild", 00:22:20.478 "target": "spare", 00:22:20.478 "progress": { 00:22:20.478 "blocks": 21120, 00:22:20.478 "percent": 10 00:22:20.478 } 00:22:20.478 }, 00:22:20.478 "base_bdevs_list": [ 00:22:20.478 { 00:22:20.478 "name": "spare", 00:22:20.478 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:20.478 "is_configured": true, 00:22:20.478 "data_offset": 0, 00:22:20.478 "data_size": 65536 00:22:20.478 }, 00:22:20.478 { 00:22:20.478 "name": "BaseBdev2", 00:22:20.478 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:20.478 "is_configured": true, 00:22:20.478 "data_offset": 0, 00:22:20.478 "data_size": 65536 00:22:20.478 }, 00:22:20.478 { 00:22:20.478 "name": "BaseBdev3", 00:22:20.478 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:20.478 "is_configured": true, 00:22:20.478 "data_offset": 0, 
00:22:20.478 "data_size": 65536 00:22:20.478 }, 00:22:20.478 { 00:22:20.478 "name": "BaseBdev4", 00:22:20.478 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:20.478 "is_configured": true, 00:22:20.478 "data_offset": 0, 00:22:20.478 "data_size": 65536 00:22:20.478 } 00:22:20.478 ] 00:22:20.478 }' 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@657 -- # local timeout=579 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.478 11:32:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.737 11:32:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.737 "name": "raid_bdev1", 00:22:20.737 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:20.737 "strip_size_kb": 64, 00:22:20.737 "state": "online", 00:22:20.737 "raid_level": "raid5f", 00:22:20.737 "superblock": false, 00:22:20.737 "num_base_bdevs": 4, 00:22:20.737 "num_base_bdevs_discovered": 4, 00:22:20.737 "num_base_bdevs_operational": 4, 00:22:20.737 "process": { 00:22:20.737 "type": "rebuild", 00:22:20.737 "target": "spare", 00:22:20.737 "progress": { 00:22:20.737 "blocks": 26880, 00:22:20.737 "percent": 13 00:22:20.737 } 00:22:20.737 }, 00:22:20.737 "base_bdevs_list": [ 00:22:20.737 { 00:22:20.737 "name": "spare", 00:22:20.737 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 0, 00:22:20.737 "data_size": 65536 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev2", 00:22:20.737 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 0, 00:22:20.737 "data_size": 65536 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev3", 00:22:20.737 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 0, 00:22:20.737 "data_size": 65536 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev4", 00:22:20.737 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 0, 00:22:20.737 "data_size": 65536 00:22:20.737 } 00:22:20.737 ] 00:22:20.737 }' 00:22:20.737 11:32:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.737 11:32:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.737 11:32:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.737 11:32:38 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.737 11:32:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:21.672 11:32:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:21.672 11:32:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.672 11:32:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.672 11:32:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:21.672 11:32:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:21.673 11:32:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.673 11:32:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.673 11:32:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.931 11:32:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.931 "name": "raid_bdev1", 00:22:21.931 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:21.931 "strip_size_kb": 64, 00:22:21.931 "state": "online", 00:22:21.931 "raid_level": "raid5f", 00:22:21.931 "superblock": false, 00:22:21.931 "num_base_bdevs": 4, 00:22:21.931 "num_base_bdevs_discovered": 4, 00:22:21.931 "num_base_bdevs_operational": 4, 00:22:21.931 "process": { 00:22:21.931 "type": "rebuild", 00:22:21.931 "target": "spare", 00:22:21.931 "progress": { 00:22:21.931 "blocks": 51840, 00:22:21.931 "percent": 26 00:22:21.931 } 00:22:21.931 }, 00:22:21.931 "base_bdevs_list": [ 00:22:21.931 { 00:22:21.931 "name": "spare", 00:22:21.931 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:21.931 "is_configured": true, 00:22:21.931 "data_offset": 0, 00:22:21.931 "data_size": 65536 00:22:21.931 }, 00:22:21.931 { 00:22:21.931 "name": "BaseBdev2", 00:22:21.931 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:21.931 "is_configured": true, 00:22:21.931 "data_offset": 0, 00:22:21.931 "data_size": 65536 00:22:21.931 }, 00:22:21.931 { 00:22:21.931 "name": "BaseBdev3", 00:22:21.931 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:21.931 "is_configured": true, 00:22:21.931 "data_offset": 0, 00:22:21.931 "data_size": 65536 00:22:21.931 }, 00:22:21.931 { 00:22:21.931 "name": "BaseBdev4", 00:22:21.931 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:21.931 "is_configured": true, 00:22:21.931 "data_offset": 0, 00:22:21.931 "data_size": 65536 00:22:21.931 } 00:22:21.931 ] 00:22:21.931 }' 00:22:21.931 11:32:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.931 11:32:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.931 11:32:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.931 11:32:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:21.931 11:32:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
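# Note: the trace here is iterating the harness's 1-second rebuild polling loop: fetch
# raid_bdev1's info, confirm a "rebuild" process targeting "spare" is still running,
# sleep, and retry until the 'local timeout=579' set earlier in the trace expires.
# A condensed sketch of that pattern, assuming only the rpc socket, bdev names, and
# jq filters visible in this trace (this is not the verbatim bdev_raid.sh source):
#
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    timeout=579
    # SECONDS is bash's built-in elapsed-time counter
    while (( SECONDS < timeout )); do
        info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break  # rebuild finished
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]           # sanity: target is the spare
        sleep 1
    done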
00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:23.308 "name": "raid_bdev1", 00:22:23.308 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:23.308 "strip_size_kb": 64, 00:22:23.308 "state": "online", 00:22:23.308 "raid_level": "raid5f", 00:22:23.308 "superblock": false, 00:22:23.308 "num_base_bdevs": 4, 00:22:23.308 "num_base_bdevs_discovered": 4, 00:22:23.308 "num_base_bdevs_operational": 4, 00:22:23.308 "process": { 00:22:23.308 "type": "rebuild", 00:22:23.308 "target": "spare", 00:22:23.308 "progress": { 00:22:23.308 "blocks": 74880, 00:22:23.308 "percent": 38 00:22:23.308 } 00:22:23.308 }, 00:22:23.308 "base_bdevs_list": [ 00:22:23.308 { 00:22:23.308 "name": "spare", 00:22:23.308 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:23.308 "is_configured": true, 00:22:23.308 "data_offset": 0, 00:22:23.308 "data_size": 65536 00:22:23.308 }, 00:22:23.308 { 00:22:23.308 "name": "BaseBdev2", 00:22:23.308 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:23.308 "is_configured": true, 00:22:23.308 "data_offset": 0, 00:22:23.308 "data_size": 65536 00:22:23.308 }, 00:22:23.308 { 00:22:23.308 "name": "BaseBdev3", 00:22:23.308 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:23.308 "is_configured": true, 00:22:23.308 "data_offset": 0, 00:22:23.308 "data_size": 65536 00:22:23.308 }, 00:22:23.308 { 00:22:23.308 "name": "BaseBdev4", 00:22:23.308 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:23.308 "is_configured": true, 00:22:23.308 "data_offset": 0, 00:22:23.308 "data_size": 65536 00:22:23.308 } 00:22:23.308 ] 00:22:23.308 }' 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.308 11:32:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:24.246 11:32:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.247 11:32:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.512 11:32:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.512 "name": "raid_bdev1", 00:22:24.512 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:24.512 "strip_size_kb": 64, 00:22:24.512 "state": "online", 00:22:24.512 "raid_level": "raid5f", 00:22:24.512 "superblock": false, 00:22:24.512 "num_base_bdevs": 4, 00:22:24.512 "num_base_bdevs_discovered": 4, 00:22:24.512 "num_base_bdevs_operational": 4, 00:22:24.512 "process": { 00:22:24.512 "type": "rebuild", 00:22:24.512 "target": "spare", 00:22:24.512 "progress": { 00:22:24.512 "blocks": 99840, 00:22:24.512 "percent": 50 00:22:24.512 } 00:22:24.512 }, 00:22:24.512 "base_bdevs_list": [ 00:22:24.512 { 00:22:24.512 "name": "spare", 00:22:24.512 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:24.512 "is_configured": true, 00:22:24.512 "data_offset": 0, 
00:22:24.512 "data_size": 65536 00:22:24.512 }, 00:22:24.512 { 00:22:24.512 "name": "BaseBdev2", 00:22:24.512 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:24.512 "is_configured": true, 00:22:24.512 "data_offset": 0, 00:22:24.512 "data_size": 65536 00:22:24.512 }, 00:22:24.512 { 00:22:24.512 "name": "BaseBdev3", 00:22:24.512 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:24.512 "is_configured": true, 00:22:24.512 "data_offset": 0, 00:22:24.512 "data_size": 65536 00:22:24.512 }, 00:22:24.512 { 00:22:24.512 "name": "BaseBdev4", 00:22:24.512 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:24.512 "is_configured": true, 00:22:24.512 "data_offset": 0, 00:22:24.512 "data_size": 65536 00:22:24.512 } 00:22:24.512 ] 00:22:24.512 }' 00:22:24.512 11:32:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.512 11:32:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.512 11:32:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.512 11:32:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.512 11:32:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.449 11:32:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.708 11:32:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.708 "name": "raid_bdev1", 00:22:25.708 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:25.708 "strip_size_kb": 64, 00:22:25.708 "state": "online", 00:22:25.708 "raid_level": "raid5f", 00:22:25.708 "superblock": false, 00:22:25.708 "num_base_bdevs": 4, 00:22:25.708 "num_base_bdevs_discovered": 4, 00:22:25.708 "num_base_bdevs_operational": 4, 00:22:25.708 "process": { 00:22:25.708 "type": "rebuild", 00:22:25.708 "target": "spare", 00:22:25.708 "progress": { 00:22:25.708 "blocks": 122880, 00:22:25.708 "percent": 62 00:22:25.708 } 00:22:25.708 }, 00:22:25.708 "base_bdevs_list": [ 00:22:25.708 { 00:22:25.708 "name": "spare", 00:22:25.708 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:25.708 "is_configured": true, 00:22:25.708 "data_offset": 0, 00:22:25.708 "data_size": 65536 00:22:25.708 }, 00:22:25.708 { 00:22:25.708 "name": "BaseBdev2", 00:22:25.708 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:25.708 "is_configured": true, 00:22:25.708 "data_offset": 0, 00:22:25.708 "data_size": 65536 00:22:25.708 }, 00:22:25.708 { 00:22:25.708 "name": "BaseBdev3", 00:22:25.708 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:25.708 "is_configured": true, 00:22:25.708 "data_offset": 0, 00:22:25.708 "data_size": 65536 00:22:25.708 }, 00:22:25.708 { 00:22:25.708 "name": "BaseBdev4", 00:22:25.708 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:25.708 "is_configured": true, 00:22:25.708 "data_offset": 0, 00:22:25.708 "data_size": 65536 00:22:25.708 } 00:22:25.708 ] 00:22:25.708 }' 00:22:25.708 11:32:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
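# Note: the "progress" fields above are raw block counts against the raid bdev size
# captured earlier in this trace (raid_bdev_size=196608), and the reported percentage
# appears to be the truncated ratio. For the last record, assuming those two numbers:
#
    awk -v blocks=122880 -v size=196608 'BEGIN { printf "%d%%\n", blocks * 100 / size }'
#
# prints 62%, matching '"percent": 62' above (62.5 truncated, not rounded -- compare
# the earlier record, 99840/196608 = 50.78..., which was reported as 50).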
00:22:25.708 11:32:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.708 11:32:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.708 11:32:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.708 11:32:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.086 11:32:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.086 11:32:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.086 "name": "raid_bdev1", 00:22:27.086 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:27.086 "strip_size_kb": 64, 00:22:27.086 "state": "online", 00:22:27.086 "raid_level": "raid5f", 00:22:27.086 "superblock": false, 00:22:27.086 "num_base_bdevs": 4, 00:22:27.086 "num_base_bdevs_discovered": 4, 00:22:27.086 "num_base_bdevs_operational": 4, 00:22:27.086 "process": { 00:22:27.086 "type": "rebuild", 00:22:27.086 "target": "spare", 00:22:27.086 "progress": { 00:22:27.086 "blocks": 147840, 00:22:27.086 "percent": 75 00:22:27.086 } 00:22:27.086 }, 00:22:27.086 "base_bdevs_list": [ 00:22:27.086 { 00:22:27.086 "name": "spare", 00:22:27.086 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:27.086 "is_configured": true, 00:22:27.086 "data_offset": 0, 00:22:27.086 "data_size": 65536 00:22:27.086 }, 00:22:27.086 { 00:22:27.086 "name": "BaseBdev2", 00:22:27.086 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:27.086 "is_configured": true, 00:22:27.086 "data_offset": 0, 00:22:27.087 "data_size": 65536 00:22:27.087 }, 00:22:27.087 { 00:22:27.087 "name": "BaseBdev3", 00:22:27.087 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:27.087 "is_configured": true, 00:22:27.087 "data_offset": 0, 00:22:27.087 "data_size": 65536 00:22:27.087 }, 00:22:27.087 { 00:22:27.087 "name": "BaseBdev4", 00:22:27.087 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:27.087 "is_configured": true, 00:22:27.087 "data_offset": 0, 00:22:27.087 "data_size": 65536 00:22:27.087 } 00:22:27.087 ] 00:22:27.087 }' 00:22:27.087 11:32:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.087 11:32:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.087 11:32:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.087 11:32:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.087 11:32:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@188 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.023 11:32:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.282 11:32:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.282 "name": "raid_bdev1", 00:22:28.282 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:28.282 "strip_size_kb": 64, 00:22:28.282 "state": "online", 00:22:28.282 "raid_level": "raid5f", 00:22:28.282 "superblock": false, 00:22:28.282 "num_base_bdevs": 4, 00:22:28.282 "num_base_bdevs_discovered": 4, 00:22:28.282 "num_base_bdevs_operational": 4, 00:22:28.282 "process": { 00:22:28.282 "type": "rebuild", 00:22:28.282 "target": "spare", 00:22:28.282 "progress": { 00:22:28.282 "blocks": 170880, 00:22:28.282 "percent": 86 00:22:28.282 } 00:22:28.282 }, 00:22:28.282 "base_bdevs_list": [ 00:22:28.282 { 00:22:28.282 "name": "spare", 00:22:28.282 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:28.282 "is_configured": true, 00:22:28.282 "data_offset": 0, 00:22:28.282 "data_size": 65536 00:22:28.282 }, 00:22:28.282 { 00:22:28.282 "name": "BaseBdev2", 00:22:28.282 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:28.282 "is_configured": true, 00:22:28.282 "data_offset": 0, 00:22:28.282 "data_size": 65536 00:22:28.282 }, 00:22:28.282 { 00:22:28.282 "name": "BaseBdev3", 00:22:28.282 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:28.282 "is_configured": true, 00:22:28.282 "data_offset": 0, 00:22:28.282 "data_size": 65536 00:22:28.282 }, 00:22:28.282 { 00:22:28.282 "name": "BaseBdev4", 00:22:28.282 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:28.282 "is_configured": true, 00:22:28.282 "data_offset": 0, 00:22:28.282 "data_size": 65536 00:22:28.282 } 00:22:28.282 ] 00:22:28.282 }' 00:22:28.282 11:32:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.282 11:32:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.282 11:32:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.282 11:32:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.282 11:32:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:29.221 11:32:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:29.221 11:32:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.221 11:32:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.221 11:32:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.221 11:32:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.221 11:32:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.481 "name": "raid_bdev1", 00:22:29.481 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:29.481 "strip_size_kb": 64, 00:22:29.481 "state": "online", 00:22:29.481 "raid_level": "raid5f", 00:22:29.481 "superblock": false, 00:22:29.481 "num_base_bdevs": 4, 00:22:29.481 "num_base_bdevs_discovered": 4, 00:22:29.481 "num_base_bdevs_operational": 4, 00:22:29.481 "process": { 00:22:29.481 "type": "rebuild", 00:22:29.481 "target": "spare", 00:22:29.481 "progress": { 00:22:29.481 "blocks": 195840, 00:22:29.481 "percent": 99 00:22:29.481 } 00:22:29.481 }, 
00:22:29.481 "base_bdevs_list": [ 00:22:29.481 { 00:22:29.481 "name": "spare", 00:22:29.481 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:29.481 "is_configured": true, 00:22:29.481 "data_offset": 0, 00:22:29.481 "data_size": 65536 00:22:29.481 }, 00:22:29.481 { 00:22:29.481 "name": "BaseBdev2", 00:22:29.481 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:29.481 "is_configured": true, 00:22:29.481 "data_offset": 0, 00:22:29.481 "data_size": 65536 00:22:29.481 }, 00:22:29.481 { 00:22:29.481 "name": "BaseBdev3", 00:22:29.481 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:29.481 "is_configured": true, 00:22:29.481 "data_offset": 0, 00:22:29.481 "data_size": 65536 00:22:29.481 }, 00:22:29.481 { 00:22:29.481 "name": "BaseBdev4", 00:22:29.481 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:29.481 "is_configured": true, 00:22:29.481 "data_offset": 0, 00:22:29.481 "data_size": 65536 00:22:29.481 } 00:22:29.481 ] 00:22:29.481 }' 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.481 11:32:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:29.739 [2024-11-26 11:32:47.732307] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:29.740 [2024-11-26 11:32:47.732445] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:29.740 [2024-11-26 11:32:47.732530] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.676 11:32:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.935 "name": "raid_bdev1", 00:22:30.935 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:30.935 "strip_size_kb": 64, 00:22:30.935 "state": "online", 00:22:30.935 "raid_level": "raid5f", 00:22:30.935 "superblock": false, 00:22:30.935 "num_base_bdevs": 4, 00:22:30.935 "num_base_bdevs_discovered": 4, 00:22:30.935 "num_base_bdevs_operational": 4, 00:22:30.935 "base_bdevs_list": [ 00:22:30.935 { 00:22:30.935 "name": "spare", 00:22:30.935 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:30.935 "is_configured": true, 00:22:30.935 "data_offset": 0, 00:22:30.935 "data_size": 65536 00:22:30.935 }, 00:22:30.935 { 00:22:30.935 "name": "BaseBdev2", 00:22:30.935 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:30.935 "is_configured": true, 00:22:30.935 "data_offset": 0, 00:22:30.935 "data_size": 65536 00:22:30.935 }, 00:22:30.935 { 00:22:30.935 "name": "BaseBdev3", 00:22:30.935 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:30.935 "is_configured": true, 00:22:30.935 "data_offset": 0, 
00:22:30.935 "data_size": 65536 00:22:30.935 }, 00:22:30.935 { 00:22:30.935 "name": "BaseBdev4", 00:22:30.935 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:30.935 "is_configured": true, 00:22:30.935 "data_offset": 0, 00:22:30.935 "data_size": 65536 00:22:30.935 } 00:22:30.935 ] 00:22:30.935 }' 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@660 -- # break 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.935 11:32:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.935 11:32:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.935 "name": "raid_bdev1", 00:22:30.935 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:30.935 "strip_size_kb": 64, 00:22:30.935 "state": "online", 00:22:30.935 "raid_level": "raid5f", 00:22:30.935 "superblock": false, 00:22:30.935 "num_base_bdevs": 4, 00:22:30.935 "num_base_bdevs_discovered": 4, 00:22:30.935 "num_base_bdevs_operational": 4, 00:22:30.935 "base_bdevs_list": [ 00:22:30.935 { 00:22:30.935 "name": "spare", 00:22:30.935 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:30.935 "is_configured": true, 00:22:30.935 "data_offset": 0, 00:22:30.935 "data_size": 65536 00:22:30.936 }, 00:22:30.936 { 00:22:30.936 "name": "BaseBdev2", 00:22:30.936 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:30.936 "is_configured": true, 00:22:30.936 "data_offset": 0, 00:22:30.936 "data_size": 65536 00:22:30.936 }, 00:22:30.936 { 00:22:30.936 "name": "BaseBdev3", 00:22:30.936 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:30.936 "is_configured": true, 00:22:30.936 "data_offset": 0, 00:22:30.936 "data_size": 65536 00:22:30.936 }, 00:22:30.936 { 00:22:30.936 "name": "BaseBdev4", 00:22:30.936 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:30.936 "is_configured": true, 00:22:30.936 "data_offset": 0, 00:22:30.936 "data_size": 65536 00:22:30.936 } 00:22:30.936 ] 00:22:30.936 }' 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
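# Note: at this point the polling loop has seen '.process.type' come back "none" and
# hit 'break': the rebuild is complete. The locals being declared here belong to the
# final steady-state check (raid_bdev1 "online" again with 4 of 4 base bdevs), after
# which the harness byte-compares BaseBdev1 against the rebuilt spare over NBD (the
# nbd_start_disks and 'cmp -i 0 /dev/nbd0 /dev/nbd1' calls just below). A condensed
# sketch of that state check, reusing only names from this trace:
#
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 4 ))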
00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.936 11:32:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.194 11:32:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.194 "name": "raid_bdev1", 00:22:31.194 "uuid": "a1365103-2f6a-4cc5-a807-38b5540fac4e", 00:22:31.194 "strip_size_kb": 64, 00:22:31.194 "state": "online", 00:22:31.194 "raid_level": "raid5f", 00:22:31.194 "superblock": false, 00:22:31.194 "num_base_bdevs": 4, 00:22:31.194 "num_base_bdevs_discovered": 4, 00:22:31.194 "num_base_bdevs_operational": 4, 00:22:31.194 "base_bdevs_list": [ 00:22:31.194 { 00:22:31.194 "name": "spare", 00:22:31.194 "uuid": "7b55caf0-2c62-53ab-8860-499b91b11ce6", 00:22:31.194 "is_configured": true, 00:22:31.194 "data_offset": 0, 00:22:31.194 "data_size": 65536 00:22:31.194 }, 00:22:31.194 { 00:22:31.194 "name": "BaseBdev2", 00:22:31.194 "uuid": "811b9869-e8e3-46ff-9877-5a6153bc0f8c", 00:22:31.194 "is_configured": true, 00:22:31.194 "data_offset": 0, 00:22:31.194 "data_size": 65536 00:22:31.194 }, 00:22:31.194 { 00:22:31.194 "name": "BaseBdev3", 00:22:31.194 "uuid": "6c86279f-8154-4744-8f62-bf647eea8e57", 00:22:31.194 "is_configured": true, 00:22:31.194 "data_offset": 0, 00:22:31.194 "data_size": 65536 00:22:31.194 }, 00:22:31.194 { 00:22:31.194 "name": "BaseBdev4", 00:22:31.194 "uuid": "89a4a6b6-00bd-4be3-9a11-05c271b59365", 00:22:31.194 "is_configured": true, 00:22:31.194 "data_offset": 0, 00:22:31.194 "data_size": 65536 00:22:31.194 } 00:22:31.194 ] 00:22:31.194 }' 00:22:31.194 11:32:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.194 11:32:49 -- common/autotest_common.sh@10 -- # set +x 00:22:31.452 11:32:49 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:31.711 [2024-11-26 11:32:49.837454] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.711 [2024-11-26 11:32:49.837489] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:31.711 [2024-11-26 11:32:49.837570] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.711 [2024-11-26 11:32:49.837662] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.711 [2024-11-26 11:32:49.837677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:22:31.711 11:32:49 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:31.711 11:32:49 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.970 11:32:50 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:31.970 11:32:50 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:31.970 11:32:50 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:31.970 
11:32:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@12 -- # local i 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:31.970 11:32:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:32.229 /dev/nbd0 00:22:32.229 11:32:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:32.229 11:32:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:32.229 11:32:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:32.229 11:32:50 -- common/autotest_common.sh@867 -- # local i 00:22:32.229 11:32:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:32.229 11:32:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:32.229 11:32:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:32.229 11:32:50 -- common/autotest_common.sh@871 -- # break 00:22:32.229 11:32:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:32.229 11:32:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:32.229 11:32:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:32.229 1+0 records in 00:22:32.229 1+0 records out 00:22:32.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397128 s, 10.3 MB/s 00:22:32.229 11:32:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.229 11:32:50 -- common/autotest_common.sh@884 -- # size=4096 00:22:32.229 11:32:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.229 11:32:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:32.229 11:32:50 -- common/autotest_common.sh@887 -- # return 0 00:22:32.229 11:32:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:32.229 11:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.229 11:32:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:32.489 /dev/nbd1 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:32.489 11:32:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:32.489 11:32:50 -- common/autotest_common.sh@867 -- # local i 00:22:32.489 11:32:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:32.489 11:32:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:32.489 11:32:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:32.489 11:32:50 -- common/autotest_common.sh@871 -- # break 00:22:32.489 11:32:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:32.489 11:32:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:32.489 11:32:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:32.489 1+0 records in 00:22:32.489 1+0 records out 00:22:32.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422875 s, 9.7 MB/s 00:22:32.489 11:32:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.489 11:32:50 -- common/autotest_common.sh@884 
-- # size=4096 00:22:32.489 11:32:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.489 11:32:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:32.489 11:32:50 -- common/autotest_common.sh@887 -- # return 0 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.489 11:32:50 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:32.489 11:32:50 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@51 -- # local i 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:32.489 11:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@41 -- # break 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:32.747 11:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@41 -- # break 00:22:33.006 11:32:51 -- bdev/nbd_common.sh@45 -- # return 0 00:22:33.006 11:32:51 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:33.006 11:32:51 -- bdev/bdev_raid.sh@709 -- # killprocess 95740 00:22:33.006 11:32:51 -- common/autotest_common.sh@936 -- # '[' -z 95740 ']' 00:22:33.006 11:32:51 -- common/autotest_common.sh@940 -- # kill -0 95740 00:22:33.006 11:32:51 -- common/autotest_common.sh@941 -- # uname 00:22:33.006 11:32:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:33.006 11:32:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95740 00:22:33.006 killing process with pid 95740 00:22:33.006 Received shutdown signal, test time was about 60.000000 seconds 00:22:33.006 00:22:33.006 Latency(us) 00:22:33.006 [2024-11-26T11:32:51.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.006 [2024-11-26T11:32:51.237Z] =================================================================================================================== 00:22:33.007 [2024-11-26T11:32:51.237Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.007 11:32:51 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:33.007 11:32:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:33.007 11:32:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95740' 00:22:33.007 11:32:51 -- common/autotest_common.sh@955 -- # kill 95740 00:22:33.007 11:32:51 -- common/autotest_common.sh@960 -- # wait 95740 00:22:33.007 [2024-11-26 11:32:51.179665] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.007 [2024-11-26 11:32:51.208029] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.266 ************************************ 00:22:33.266 END TEST raid5f_rebuild_test 00:22:33.266 ************************************ 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:33.266 00:22:33.266 real 0m21.584s 00:22:33.266 user 0m29.451s 00:22:33.266 sys 0m2.619s 00:22:33.266 11:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:33.266 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:22:33.266 11:32:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:33.266 11:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:33.266 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:22:33.266 ************************************ 00:22:33.266 START TEST raid5f_rebuild_test_sb 00:22:33.266 ************************************ 00:22:33.266 11:32:51 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:33.266 11:32:51 -- 
bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@544 -- # raid_pid=96293 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@545 -- # waitforlisten 96293 /var/tmp/spdk-raid.sock 00:22:33.266 11:32:51 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:33.266 11:32:51 -- common/autotest_common.sh@829 -- # '[' -z 96293 ']' 00:22:33.266 11:32:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:33.266 11:32:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.266 11:32:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:33.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:33.266 11:32:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.266 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:22:33.266 [2024-11-26 11:32:51.490979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:33.266 [2024-11-26 11:32:51.491381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96293 ] 00:22:33.266 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:33.266 Zero copy mechanism will not be used. 00:22:33.525 [2024-11-26 11:32:51.654969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.525 [2024-11-26 11:32:51.687934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.525 [2024-11-26 11:32:51.718980] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.093 11:32:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.093 11:32:52 -- common/autotest_common.sh@862 -- # return 0 00:22:34.093 11:32:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:34.093 11:32:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:34.093 11:32:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:34.351 BaseBdev1_malloc 00:22:34.351 11:32:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:34.610 [2024-11-26 11:32:52.651832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:34.610 [2024-11-26 11:32:52.651928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.610 [2024-11-26 11:32:52.651957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:34.610 [2024-11-26 11:32:52.651989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.610 [2024-11-26 11:32:52.654230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.610 [2024-11-26 11:32:52.654291] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:34.610 BaseBdev1 00:22:34.610 11:32:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:34.610 11:32:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:34.610 11:32:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:34.868 BaseBdev2_malloc 00:22:34.868 11:32:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:35.127 [2024-11-26 11:32:53.119111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:35.127 [2024-11-26 11:32:53.119382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.127 [2024-11-26 11:32:53.119459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:35.127 [2024-11-26 11:32:53.119665] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.127 [2024-11-26 11:32:53.121941] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.127 [2024-11-26 11:32:53.122144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:35.127 BaseBdev2 00:22:35.127 11:32:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:35.127 11:32:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:35.127 11:32:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:35.127 BaseBdev3_malloc 00:22:35.127 11:32:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:35.385 [2024-11-26 11:32:53.489492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:35.385 [2024-11-26 11:32:53.489555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.385 [2024-11-26 11:32:53.489580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:35.385 [2024-11-26 11:32:53.489595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.385 [2024-11-26 11:32:53.491835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.385 [2024-11-26 11:32:53.491890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:35.385 BaseBdev3 00:22:35.385 11:32:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:35.385 11:32:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:35.385 11:32:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:35.645 BaseBdev4_malloc 00:22:35.645 11:32:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:35.645 [2024-11-26 11:32:53.843741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:35.645 [2024-11-26 11:32:53.843798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.645 [2024-11-26 11:32:53.843827] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:22:35.645 [2024-11-26 11:32:53.843843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.645 [2024-11-26 11:32:53.846099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.645 [2024-11-26 11:32:53.846144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:35.645 BaseBdev4 00:22:35.645 11:32:53 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:35.903 spare_malloc 00:22:35.903 11:32:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:36.162 spare_delay 00:22:36.162 11:32:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:36.421 [2024-11-26 11:32:54.449623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:36.421 [2024-11-26 11:32:54.449855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.421 [2024-11-26 11:32:54.449967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:22:36.421 [2024-11-26 11:32:54.450183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.421 [2024-11-26 11:32:54.452372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.421 [2024-11-26 11:32:54.452562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:36.421 spare 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:36.421 [2024-11-26 11:32:54.621693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.421 [2024-11-26 11:32:54.623777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.421 [2024-11-26 11:32:54.624040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.421 [2024-11-26 11:32:54.624149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:36.421 [2024-11-26 11:32:54.624510] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:22:36.421 [2024-11-26 11:32:54.624578] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:36.421 [2024-11-26 11:32:54.624812] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:22:36.421 [2024-11-26 11:32:54.625492] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:22:36.421 [2024-11-26 11:32:54.625513] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:22:36.421 [2024-11-26 11:32:54.625676] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.421 11:32:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.680 11:32:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:36.680 "name": "raid_bdev1", 00:22:36.680 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:36.680 "strip_size_kb": 64, 00:22:36.680 "state": "online", 00:22:36.680 "raid_level": "raid5f", 00:22:36.680 "superblock": true, 00:22:36.680 "num_base_bdevs": 4, 00:22:36.680 "num_base_bdevs_discovered": 4, 00:22:36.680 "num_base_bdevs_operational": 4, 00:22:36.680 "base_bdevs_list": [ 00:22:36.680 { 00:22:36.680 "name": "BaseBdev1", 00:22:36.680 "uuid": "ed91e000-108c-5e14-9f8e-7b0dcfbc724e", 00:22:36.680 "is_configured": true, 00:22:36.680 "data_offset": 2048, 00:22:36.680 "data_size": 63488 00:22:36.680 }, 00:22:36.680 { 00:22:36.680 "name": "BaseBdev2", 00:22:36.680 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:36.680 "is_configured": true, 00:22:36.680 "data_offset": 2048, 00:22:36.680 "data_size": 63488 00:22:36.680 }, 00:22:36.680 { 00:22:36.680 "name": "BaseBdev3", 00:22:36.680 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:36.680 "is_configured": true, 00:22:36.680 "data_offset": 2048, 00:22:36.680 "data_size": 63488 00:22:36.680 }, 00:22:36.680 { 00:22:36.680 "name": "BaseBdev4", 00:22:36.680 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:36.680 "is_configured": true, 00:22:36.680 "data_offset": 2048, 00:22:36.680 "data_size": 63488 00:22:36.680 } 00:22:36.680 ] 00:22:36.680 }' 00:22:36.680 11:32:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:36.680 11:32:54 -- common/autotest_common.sh@10 -- # set +x 00:22:36.939 11:32:55 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:36.939 11:32:55 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:37.197 [2024-11-26 11:32:55.253905] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.197 11:32:55 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:22:37.197 11:32:55 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.197 11:32:55 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:37.455 11:32:55 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:37.455 11:32:55 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:37.455 11:32:55 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:37.455 11:32:55 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@12 -- # local i 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:37.455 11:32:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.456 11:32:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:37.456 [2024-11-26 11:32:55.689975] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:22:37.714 /dev/nbd0 00:22:37.714 11:32:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:37.714 11:32:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:37.714 11:32:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:37.714 11:32:55 -- common/autotest_common.sh@867 -- # local i 00:22:37.714 11:32:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:37.714 11:32:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:37.714 11:32:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:37.714 11:32:55 -- common/autotest_common.sh@871 -- # break 00:22:37.714 11:32:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:37.714 11:32:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:37.714 11:32:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:37.714 1+0 records in 00:22:37.714 1+0 records out 00:22:37.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603504 s, 6.8 MB/s 00:22:37.714 11:32:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:37.714 11:32:55 -- common/autotest_common.sh@884 -- # size=4096 00:22:37.714 11:32:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:37.714 11:32:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:37.714 11:32:55 -- common/autotest_common.sh@887 -- # return 0 00:22:37.714 11:32:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:37.714 11:32:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.714 11:32:55 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:37.714 11:32:55 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:22:37.714 11:32:55 -- bdev/bdev_raid.sh@582 -- # echo 192 00:22:37.714 11:32:55 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:22:38.281 496+0 records in 00:22:38.282 496+0 records out 00:22:38.282 97517568 bytes (98 MB, 93 MiB) copied, 0.489401 s, 199 MB/s 00:22:38.282 11:32:56 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@51 -- # local i 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
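The dd full-stripe write traced above is sized from the array geometry: raid5f over 4 base bdevs with a 64 KiB strip leaves 3 data strips per stripe, i.e. 192 KiB = 196608 bytes = 384 blocks of 512 bytes, which is where write_unit_size=384 and bs=196608 come from; 496 such stripes give the 97517568 bytes dd reports. A minimal sketch of that arithmetic, assuming this run's geometry (the variable names here are illustrative, not taken from bdev_raid.sh):

  #!/usr/bin/env bash
  # Full-stripe write sizing for raid5f -- sketch only, geometry assumed from this run.
  num_base_bdevs=4      # BaseBdev1..BaseBdev4
  strip_size_kb=64      # the -z 64 passed to bdev_raid_create
  blocklen=512          # blocklen reported when raid_bdev1 is configured
  data_strips=$((num_base_bdevs - 1))                        # raid5f: one parity strip per stripe
  full_stripe_bytes=$((data_strips * strip_size_kb * 1024))  # 196608
  write_unit_blocks=$((full_stripe_bytes / blocklen))        # 384, cf. write_unit_size=384
  echo "bs=$full_stripe_bytes count=496 total=$((496 * full_stripe_bytes))"
  # -> bs=196608 count=496 total=97517568, matching the dd transfer above

Writing in whole-stripe units keeps every dd write aligned to a full stripe, which matches raid5f's full-stripe-write design.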
00:22:38.282 11:32:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:38.282 [2024-11-26 11:32:56.431449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@41 -- # break 00:22:38.282 11:32:56 -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.282 11:32:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:38.540 [2024-11-26 11:32:56.663578] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.540 11:32:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.838 11:32:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.838 "name": "raid_bdev1", 00:22:38.838 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:38.838 "strip_size_kb": 64, 00:22:38.838 "state": "online", 00:22:38.838 "raid_level": "raid5f", 00:22:38.838 "superblock": true, 00:22:38.838 "num_base_bdevs": 4, 00:22:38.838 "num_base_bdevs_discovered": 3, 00:22:38.838 "num_base_bdevs_operational": 3, 00:22:38.838 "base_bdevs_list": [ 00:22:38.838 { 00:22:38.838 "name": null, 00:22:38.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.838 "is_configured": false, 00:22:38.838 "data_offset": 2048, 00:22:38.838 "data_size": 63488 00:22:38.838 }, 00:22:38.838 { 00:22:38.838 "name": "BaseBdev2", 00:22:38.838 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:38.838 "is_configured": true, 00:22:38.838 "data_offset": 2048, 00:22:38.838 "data_size": 63488 00:22:38.838 }, 00:22:38.838 { 00:22:38.838 "name": "BaseBdev3", 00:22:38.838 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:38.838 "is_configured": true, 00:22:38.838 "data_offset": 2048, 00:22:38.838 "data_size": 63488 00:22:38.838 }, 00:22:38.838 { 00:22:38.838 "name": "BaseBdev4", 00:22:38.838 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:38.838 "is_configured": true, 00:22:38.838 "data_offset": 2048, 00:22:38.838 "data_size": 63488 00:22:38.838 } 00:22:38.838 ] 00:22:38.838 }' 00:22:38.838 11:32:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.838 11:32:56 -- common/autotest_common.sh@10 -- # set +x 00:22:39.096 11:32:57 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:39.354 [2024-11-26 11:32:57.411733] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:22:39.354 [2024-11-26 11:32:57.411784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.354 [2024-11-26 11:32:57.414151] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a300 00:22:39.354 [2024-11-26 11:32:57.416327] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:39.354 11:32:57 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.294 11:32:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.554 11:32:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.554 "name": "raid_bdev1", 00:22:40.554 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:40.554 "strip_size_kb": 64, 00:22:40.554 "state": "online", 00:22:40.554 "raid_level": "raid5f", 00:22:40.554 "superblock": true, 00:22:40.554 "num_base_bdevs": 4, 00:22:40.554 "num_base_bdevs_discovered": 4, 00:22:40.554 "num_base_bdevs_operational": 4, 00:22:40.554 "process": { 00:22:40.554 "type": "rebuild", 00:22:40.554 "target": "spare", 00:22:40.554 "progress": { 00:22:40.554 "blocks": 23040, 00:22:40.554 "percent": 12 00:22:40.554 } 00:22:40.554 }, 00:22:40.554 "base_bdevs_list": [ 00:22:40.554 { 00:22:40.554 "name": "spare", 00:22:40.554 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:40.554 "is_configured": true, 00:22:40.554 "data_offset": 2048, 00:22:40.554 "data_size": 63488 00:22:40.554 }, 00:22:40.554 { 00:22:40.555 "name": "BaseBdev2", 00:22:40.555 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:40.555 "is_configured": true, 00:22:40.555 "data_offset": 2048, 00:22:40.555 "data_size": 63488 00:22:40.555 }, 00:22:40.555 { 00:22:40.555 "name": "BaseBdev3", 00:22:40.555 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:40.555 "is_configured": true, 00:22:40.555 "data_offset": 2048, 00:22:40.555 "data_size": 63488 00:22:40.555 }, 00:22:40.555 { 00:22:40.555 "name": "BaseBdev4", 00:22:40.555 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:40.555 "is_configured": true, 00:22:40.555 "data_offset": 2048, 00:22:40.555 "data_size": 63488 00:22:40.555 } 00:22:40.555 ] 00:22:40.555 }' 00:22:40.555 11:32:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.555 11:32:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.555 11:32:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.555 11:32:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.555 11:32:58 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:40.814 [2024-11-26 11:32:58.853793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.814 [2024-11-26 11:32:58.926495] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:40.814 [2024-11-26 11:32:58.926555] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.814 11:32:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.074 11:32:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.074 "name": "raid_bdev1", 00:22:41.074 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:41.074 "strip_size_kb": 64, 00:22:41.074 "state": "online", 00:22:41.074 "raid_level": "raid5f", 00:22:41.074 "superblock": true, 00:22:41.074 "num_base_bdevs": 4, 00:22:41.074 "num_base_bdevs_discovered": 3, 00:22:41.074 "num_base_bdevs_operational": 3, 00:22:41.074 "base_bdevs_list": [ 00:22:41.074 { 00:22:41.074 "name": null, 00:22:41.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.074 "is_configured": false, 00:22:41.074 "data_offset": 2048, 00:22:41.074 "data_size": 63488 00:22:41.074 }, 00:22:41.074 { 00:22:41.074 "name": "BaseBdev2", 00:22:41.074 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:41.074 "is_configured": true, 00:22:41.074 "data_offset": 2048, 00:22:41.074 "data_size": 63488 00:22:41.074 }, 00:22:41.074 { 00:22:41.074 "name": "BaseBdev3", 00:22:41.074 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:41.074 "is_configured": true, 00:22:41.074 "data_offset": 2048, 00:22:41.074 "data_size": 63488 00:22:41.074 }, 00:22:41.074 { 00:22:41.074 "name": "BaseBdev4", 00:22:41.074 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:41.074 "is_configured": true, 00:22:41.074 "data_offset": 2048, 00:22:41.074 "data_size": 63488 00:22:41.074 } 00:22:41.074 ] 00:22:41.074 }' 00:22:41.074 11:32:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.074 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.333 11:32:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.592 11:32:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:41.592 "name": "raid_bdev1", 00:22:41.592 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:41.592 "strip_size_kb": 64, 00:22:41.592 "state": "online", 00:22:41.592 "raid_level": "raid5f", 00:22:41.592 
"superblock": true, 00:22:41.593 "num_base_bdevs": 4, 00:22:41.593 "num_base_bdevs_discovered": 3, 00:22:41.593 "num_base_bdevs_operational": 3, 00:22:41.593 "base_bdevs_list": [ 00:22:41.593 { 00:22:41.593 "name": null, 00:22:41.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.593 "is_configured": false, 00:22:41.593 "data_offset": 2048, 00:22:41.593 "data_size": 63488 00:22:41.593 }, 00:22:41.593 { 00:22:41.593 "name": "BaseBdev2", 00:22:41.593 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:41.593 "is_configured": true, 00:22:41.593 "data_offset": 2048, 00:22:41.593 "data_size": 63488 00:22:41.593 }, 00:22:41.593 { 00:22:41.593 "name": "BaseBdev3", 00:22:41.593 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:41.593 "is_configured": true, 00:22:41.593 "data_offset": 2048, 00:22:41.593 "data_size": 63488 00:22:41.593 }, 00:22:41.593 { 00:22:41.593 "name": "BaseBdev4", 00:22:41.593 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:41.593 "is_configured": true, 00:22:41.593 "data_offset": 2048, 00:22:41.593 "data_size": 63488 00:22:41.593 } 00:22:41.593 ] 00:22:41.593 }' 00:22:41.593 11:32:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:41.593 11:32:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:41.593 11:32:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:41.593 11:32:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:41.593 11:32:59 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:41.852 [2024-11-26 11:33:00.010943] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:41.852 [2024-11-26 11:33:00.010984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:41.852 [2024-11-26 11:33:00.013687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a3d0 00:22:41.852 [2024-11-26 11:33:00.016578] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:41.852 11:33:00 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.231 "name": "raid_bdev1", 00:22:43.231 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:43.231 "strip_size_kb": 64, 00:22:43.231 "state": "online", 00:22:43.231 "raid_level": "raid5f", 00:22:43.231 "superblock": true, 00:22:43.231 "num_base_bdevs": 4, 00:22:43.231 "num_base_bdevs_discovered": 4, 00:22:43.231 "num_base_bdevs_operational": 4, 00:22:43.231 "process": { 00:22:43.231 "type": "rebuild", 00:22:43.231 "target": "spare", 00:22:43.231 "progress": { 00:22:43.231 "blocks": 23040, 00:22:43.231 "percent": 12 00:22:43.231 } 00:22:43.231 }, 00:22:43.231 "base_bdevs_list": [ 00:22:43.231 { 00:22:43.231 "name": "spare", 00:22:43.231 "uuid": 
"f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:43.231 "is_configured": true, 00:22:43.231 "data_offset": 2048, 00:22:43.231 "data_size": 63488 00:22:43.231 }, 00:22:43.231 { 00:22:43.231 "name": "BaseBdev2", 00:22:43.231 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:43.231 "is_configured": true, 00:22:43.231 "data_offset": 2048, 00:22:43.231 "data_size": 63488 00:22:43.231 }, 00:22:43.231 { 00:22:43.231 "name": "BaseBdev3", 00:22:43.231 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:43.231 "is_configured": true, 00:22:43.231 "data_offset": 2048, 00:22:43.231 "data_size": 63488 00:22:43.231 }, 00:22:43.231 { 00:22:43.231 "name": "BaseBdev4", 00:22:43.231 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:43.231 "is_configured": true, 00:22:43.231 "data_offset": 2048, 00:22:43.231 "data_size": 63488 00:22:43.231 } 00:22:43.231 ] 00:22:43.231 }' 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:43.231 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:43.231 11:33:01 -- bdev/bdev_raid.sh@657 -- # local timeout=602 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.232 11:33:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.490 11:33:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.490 "name": "raid_bdev1", 00:22:43.490 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:43.490 "strip_size_kb": 64, 00:22:43.490 "state": "online", 00:22:43.490 "raid_level": "raid5f", 00:22:43.490 "superblock": true, 00:22:43.490 "num_base_bdevs": 4, 00:22:43.490 "num_base_bdevs_discovered": 4, 00:22:43.490 "num_base_bdevs_operational": 4, 00:22:43.490 "process": { 00:22:43.490 "type": "rebuild", 00:22:43.490 "target": "spare", 00:22:43.491 "progress": { 00:22:43.491 "blocks": 28800, 00:22:43.491 "percent": 15 00:22:43.491 } 00:22:43.491 }, 00:22:43.491 "base_bdevs_list": [ 00:22:43.491 { 00:22:43.491 "name": "spare", 00:22:43.491 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:43.491 "is_configured": true, 00:22:43.491 "data_offset": 2048, 00:22:43.491 "data_size": 63488 00:22:43.491 }, 00:22:43.491 { 00:22:43.491 "name": "BaseBdev2", 00:22:43.491 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:43.491 "is_configured": true, 00:22:43.491 "data_offset": 2048, 00:22:43.491 "data_size": 63488 00:22:43.491 }, 00:22:43.491 { 
00:22:43.491 "name": "BaseBdev3", 00:22:43.491 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:43.491 "is_configured": true, 00:22:43.491 "data_offset": 2048, 00:22:43.491 "data_size": 63488 00:22:43.491 }, 00:22:43.491 { 00:22:43.491 "name": "BaseBdev4", 00:22:43.491 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:43.491 "is_configured": true, 00:22:43.491 "data_offset": 2048, 00:22:43.491 "data_size": 63488 00:22:43.491 } 00:22:43.491 ] 00:22:43.491 }' 00:22:43.491 11:33:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.491 11:33:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.491 11:33:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.491 11:33:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.491 11:33:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.425 11:33:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.683 11:33:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:44.683 "name": "raid_bdev1", 00:22:44.683 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:44.683 "strip_size_kb": 64, 00:22:44.683 "state": "online", 00:22:44.683 "raid_level": "raid5f", 00:22:44.683 "superblock": true, 00:22:44.683 "num_base_bdevs": 4, 00:22:44.683 "num_base_bdevs_discovered": 4, 00:22:44.683 "num_base_bdevs_operational": 4, 00:22:44.683 "process": { 00:22:44.683 "type": "rebuild", 00:22:44.683 "target": "spare", 00:22:44.683 "progress": { 00:22:44.683 "blocks": 51840, 00:22:44.683 "percent": 27 00:22:44.683 } 00:22:44.683 }, 00:22:44.683 "base_bdevs_list": [ 00:22:44.683 { 00:22:44.683 "name": "spare", 00:22:44.683 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:44.683 "is_configured": true, 00:22:44.683 "data_offset": 2048, 00:22:44.683 "data_size": 63488 00:22:44.683 }, 00:22:44.683 { 00:22:44.683 "name": "BaseBdev2", 00:22:44.683 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:44.683 "is_configured": true, 00:22:44.683 "data_offset": 2048, 00:22:44.683 "data_size": 63488 00:22:44.683 }, 00:22:44.683 { 00:22:44.683 "name": "BaseBdev3", 00:22:44.683 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:44.683 "is_configured": true, 00:22:44.683 "data_offset": 2048, 00:22:44.683 "data_size": 63488 00:22:44.683 }, 00:22:44.683 { 00:22:44.683 "name": "BaseBdev4", 00:22:44.683 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:44.683 "is_configured": true, 00:22:44.683 "data_offset": 2048, 00:22:44.683 "data_size": 63488 00:22:44.683 } 00:22:44.683 ] 00:22:44.683 }' 00:22:44.683 11:33:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:44.683 11:33:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.683 11:33:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:44.683 11:33:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.683 11:33:02 -- 
bdev/bdev_raid.sh@662 -- # sleep 1 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.620 11:33:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.879 11:33:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.879 "name": "raid_bdev1", 00:22:45.879 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:45.879 "strip_size_kb": 64, 00:22:45.880 "state": "online", 00:22:45.880 "raid_level": "raid5f", 00:22:45.880 "superblock": true, 00:22:45.880 "num_base_bdevs": 4, 00:22:45.880 "num_base_bdevs_discovered": 4, 00:22:45.880 "num_base_bdevs_operational": 4, 00:22:45.880 "process": { 00:22:45.880 "type": "rebuild", 00:22:45.880 "target": "spare", 00:22:45.880 "progress": { 00:22:45.880 "blocks": 76800, 00:22:45.880 "percent": 40 00:22:45.880 } 00:22:45.880 }, 00:22:45.880 "base_bdevs_list": [ 00:22:45.880 { 00:22:45.880 "name": "spare", 00:22:45.880 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:45.880 "is_configured": true, 00:22:45.880 "data_offset": 2048, 00:22:45.880 "data_size": 63488 00:22:45.880 }, 00:22:45.880 { 00:22:45.880 "name": "BaseBdev2", 00:22:45.880 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:45.880 "is_configured": true, 00:22:45.880 "data_offset": 2048, 00:22:45.880 "data_size": 63488 00:22:45.880 }, 00:22:45.880 { 00:22:45.880 "name": "BaseBdev3", 00:22:45.880 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:45.880 "is_configured": true, 00:22:45.880 "data_offset": 2048, 00:22:45.880 "data_size": 63488 00:22:45.880 }, 00:22:45.880 { 00:22:45.880 "name": "BaseBdev4", 00:22:45.880 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:45.880 "is_configured": true, 00:22:45.880 "data_offset": 2048, 00:22:45.880 "data_size": 63488 00:22:45.880 } 00:22:45.880 ] 00:22:45.880 }' 00:22:45.880 11:33:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.880 11:33:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.880 11:33:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:46.139 11:33:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.139 11:33:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.076 11:33:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.335 11:33:05 -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:22:47.335 "name": "raid_bdev1", 00:22:47.335 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:47.335 "strip_size_kb": 64, 00:22:47.335 "state": "online", 00:22:47.335 "raid_level": "raid5f", 00:22:47.335 "superblock": true, 00:22:47.335 "num_base_bdevs": 4, 00:22:47.335 "num_base_bdevs_discovered": 4, 00:22:47.335 "num_base_bdevs_operational": 4, 00:22:47.335 "process": { 00:22:47.335 "type": "rebuild", 00:22:47.335 "target": "spare", 00:22:47.335 "progress": { 00:22:47.335 "blocks": 101760, 00:22:47.335 "percent": 53 00:22:47.335 } 00:22:47.335 }, 00:22:47.335 "base_bdevs_list": [ 00:22:47.335 { 00:22:47.335 "name": "spare", 00:22:47.335 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:47.335 "is_configured": true, 00:22:47.335 "data_offset": 2048, 00:22:47.335 "data_size": 63488 00:22:47.335 }, 00:22:47.335 { 00:22:47.335 "name": "BaseBdev2", 00:22:47.335 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:47.335 "is_configured": true, 00:22:47.335 "data_offset": 2048, 00:22:47.335 "data_size": 63488 00:22:47.335 }, 00:22:47.335 { 00:22:47.335 "name": "BaseBdev3", 00:22:47.335 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:47.335 "is_configured": true, 00:22:47.335 "data_offset": 2048, 00:22:47.335 "data_size": 63488 00:22:47.335 }, 00:22:47.335 { 00:22:47.335 "name": "BaseBdev4", 00:22:47.335 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:47.335 "is_configured": true, 00:22:47.335 "data_offset": 2048, 00:22:47.335 "data_size": 63488 00:22:47.335 } 00:22:47.335 ] 00:22:47.335 }' 00:22:47.335 11:33:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.335 11:33:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.335 11:33:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.335 11:33:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.335 11:33:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.358 11:33:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.633 11:33:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.633 "name": "raid_bdev1", 00:22:48.633 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:48.633 "strip_size_kb": 64, 00:22:48.633 "state": "online", 00:22:48.633 "raid_level": "raid5f", 00:22:48.633 "superblock": true, 00:22:48.633 "num_base_bdevs": 4, 00:22:48.633 "num_base_bdevs_discovered": 4, 00:22:48.633 "num_base_bdevs_operational": 4, 00:22:48.633 "process": { 00:22:48.633 "type": "rebuild", 00:22:48.633 "target": "spare", 00:22:48.633 "progress": { 00:22:48.633 "blocks": 124800, 00:22:48.633 "percent": 65 00:22:48.633 } 00:22:48.633 }, 00:22:48.633 "base_bdevs_list": [ 00:22:48.633 { 00:22:48.633 "name": "spare", 00:22:48.633 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:48.633 "is_configured": true, 00:22:48.633 "data_offset": 2048, 00:22:48.633 "data_size": 63488 
00:22:48.633 }, 00:22:48.633 { 00:22:48.633 "name": "BaseBdev2", 00:22:48.633 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:48.633 "is_configured": true, 00:22:48.633 "data_offset": 2048, 00:22:48.633 "data_size": 63488 00:22:48.633 }, 00:22:48.633 { 00:22:48.633 "name": "BaseBdev3", 00:22:48.633 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:48.633 "is_configured": true, 00:22:48.633 "data_offset": 2048, 00:22:48.633 "data_size": 63488 00:22:48.633 }, 00:22:48.633 { 00:22:48.633 "name": "BaseBdev4", 00:22:48.633 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:48.633 "is_configured": true, 00:22:48.633 "data_offset": 2048, 00:22:48.633 "data_size": 63488 00:22:48.633 } 00:22:48.633 ] 00:22:48.633 }' 00:22:48.633 11:33:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.633 11:33:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.633 11:33:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.633 11:33:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.633 11:33:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.578 11:33:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.837 11:33:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.837 "name": "raid_bdev1", 00:22:49.837 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:49.837 "strip_size_kb": 64, 00:22:49.837 "state": "online", 00:22:49.837 "raid_level": "raid5f", 00:22:49.837 "superblock": true, 00:22:49.837 "num_base_bdevs": 4, 00:22:49.837 "num_base_bdevs_discovered": 4, 00:22:49.837 "num_base_bdevs_operational": 4, 00:22:49.837 "process": { 00:22:49.837 "type": "rebuild", 00:22:49.837 "target": "spare", 00:22:49.837 "progress": { 00:22:49.837 "blocks": 149760, 00:22:49.837 "percent": 78 00:22:49.837 } 00:22:49.837 }, 00:22:49.837 "base_bdevs_list": [ 00:22:49.837 { 00:22:49.837 "name": "spare", 00:22:49.837 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:49.837 "is_configured": true, 00:22:49.837 "data_offset": 2048, 00:22:49.837 "data_size": 63488 00:22:49.837 }, 00:22:49.837 { 00:22:49.837 "name": "BaseBdev2", 00:22:49.837 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:49.837 "is_configured": true, 00:22:49.837 "data_offset": 2048, 00:22:49.837 "data_size": 63488 00:22:49.837 }, 00:22:49.837 { 00:22:49.837 "name": "BaseBdev3", 00:22:49.837 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:49.837 "is_configured": true, 00:22:49.837 "data_offset": 2048, 00:22:49.837 "data_size": 63488 00:22:49.837 }, 00:22:49.837 { 00:22:49.837 "name": "BaseBdev4", 00:22:49.837 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:49.837 "is_configured": true, 00:22:49.837 "data_offset": 2048, 00:22:49.837 "data_size": 63488 00:22:49.837 } 00:22:49.837 ] 00:22:49.837 }' 00:22:49.837 11:33:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.837 
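Each iteration traced here follows one fixed pattern: sleep 1, dump the raid bdev with bdev_raid_get_bdevs all, and pull the rebuild state out with jq from .process.type, .process.target and .process.progress, looping while SECONDS stays below the timeout. A standalone sketch of that polling loop, assuming this run's RPC socket and bdev name; the wait_rebuild helper is illustrative, not a function from the test suite:

  #!/usr/bin/env bash
  # Poll rebuild progress the way the traced loop does -- sketch only.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  wait_rebuild() {
      local timeout=$1 bdev=$2 info
      SECONDS=0    # bash resets its built-in seconds counter on assignment
      while (( SECONDS < timeout )); do
          info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                 jq -r --arg n "$bdev" '.[] | select(.name == $n)')
          # .process disappears once the rebuild finishes, hence the // "none" fallback
          [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
          jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
          sleep 1
      done
  }

  wait_rebuild 60 raid_bdev1

The percent figures climbing through the iterations above (12, 15, 27, 40, 53, 65, ...) are exactly this progress field being re-read once per second.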
11:33:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.837 11:33:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.837 11:33:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.837 11:33:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.773 11:33:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.031 11:33:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.031 "name": "raid_bdev1", 00:22:51.031 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:51.031 "strip_size_kb": 64, 00:22:51.032 "state": "online", 00:22:51.032 "raid_level": "raid5f", 00:22:51.032 "superblock": true, 00:22:51.032 "num_base_bdevs": 4, 00:22:51.032 "num_base_bdevs_discovered": 4, 00:22:51.032 "num_base_bdevs_operational": 4, 00:22:51.032 "process": { 00:22:51.032 "type": "rebuild", 00:22:51.032 "target": "spare", 00:22:51.032 "progress": { 00:22:51.032 "blocks": 172800, 00:22:51.032 "percent": 90 00:22:51.032 } 00:22:51.032 }, 00:22:51.032 "base_bdevs_list": [ 00:22:51.032 { 00:22:51.032 "name": "spare", 00:22:51.032 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:51.032 "is_configured": true, 00:22:51.032 "data_offset": 2048, 00:22:51.032 "data_size": 63488 00:22:51.032 }, 00:22:51.032 { 00:22:51.032 "name": "BaseBdev2", 00:22:51.032 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:51.032 "is_configured": true, 00:22:51.032 "data_offset": 2048, 00:22:51.032 "data_size": 63488 00:22:51.032 }, 00:22:51.032 { 00:22:51.032 "name": "BaseBdev3", 00:22:51.032 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:51.032 "is_configured": true, 00:22:51.032 "data_offset": 2048, 00:22:51.032 "data_size": 63488 00:22:51.032 }, 00:22:51.032 { 00:22:51.032 "name": "BaseBdev4", 00:22:51.032 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:51.032 "is_configured": true, 00:22:51.032 "data_offset": 2048, 00:22:51.032 "data_size": 63488 00:22:51.032 } 00:22:51.032 ] 00:22:51.032 }' 00:22:51.032 11:33:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.032 11:33:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.032 11:33:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.032 11:33:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.032 11:33:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:51.967 [2024-11-26 11:33:10.082484] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:51.967 [2024-11-26 11:33:10.082557] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:51.967 [2024-11-26 11:33:10.082685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.967 11:33:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.225 11:33:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.225 "name": "raid_bdev1", 00:22:52.225 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:52.225 "strip_size_kb": 64, 00:22:52.225 "state": "online", 00:22:52.225 "raid_level": "raid5f", 00:22:52.225 "superblock": true, 00:22:52.225 "num_base_bdevs": 4, 00:22:52.225 "num_base_bdevs_discovered": 4, 00:22:52.225 "num_base_bdevs_operational": 4, 00:22:52.225 "base_bdevs_list": [ 00:22:52.225 { 00:22:52.225 "name": "spare", 00:22:52.225 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:52.225 "is_configured": true, 00:22:52.226 "data_offset": 2048, 00:22:52.226 "data_size": 63488 00:22:52.226 }, 00:22:52.226 { 00:22:52.226 "name": "BaseBdev2", 00:22:52.226 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:52.226 "is_configured": true, 00:22:52.226 "data_offset": 2048, 00:22:52.226 "data_size": 63488 00:22:52.226 }, 00:22:52.226 { 00:22:52.226 "name": "BaseBdev3", 00:22:52.226 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:52.226 "is_configured": true, 00:22:52.226 "data_offset": 2048, 00:22:52.226 "data_size": 63488 00:22:52.226 }, 00:22:52.226 { 00:22:52.226 "name": "BaseBdev4", 00:22:52.226 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:52.226 "is_configured": true, 00:22:52.226 "data_offset": 2048, 00:22:52.226 "data_size": 63488 00:22:52.226 } 00:22:52.226 ] 00:22:52.226 }' 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@660 -- # break 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.226 11:33:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.485 "name": "raid_bdev1", 00:22:52.485 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:52.485 "strip_size_kb": 64, 00:22:52.485 "state": "online", 00:22:52.485 "raid_level": "raid5f", 00:22:52.485 "superblock": true, 00:22:52.485 "num_base_bdevs": 4, 00:22:52.485 "num_base_bdevs_discovered": 4, 00:22:52.485 "num_base_bdevs_operational": 4, 00:22:52.485 "base_bdevs_list": [ 00:22:52.485 { 00:22:52.485 "name": "spare", 00:22:52.485 "uuid": 
"f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 }, 00:22:52.485 { 00:22:52.485 "name": "BaseBdev2", 00:22:52.485 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 }, 00:22:52.485 { 00:22:52.485 "name": "BaseBdev3", 00:22:52.485 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 }, 00:22:52.485 { 00:22:52.485 "name": "BaseBdev4", 00:22:52.485 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:52.485 "is_configured": true, 00:22:52.485 "data_offset": 2048, 00:22:52.485 "data_size": 63488 00:22:52.485 } 00:22:52.485 ] 00:22:52.485 }' 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.485 11:33:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.744 11:33:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.744 "name": "raid_bdev1", 00:22:52.744 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:52.744 "strip_size_kb": 64, 00:22:52.744 "state": "online", 00:22:52.744 "raid_level": "raid5f", 00:22:52.744 "superblock": true, 00:22:52.744 "num_base_bdevs": 4, 00:22:52.744 "num_base_bdevs_discovered": 4, 00:22:52.744 "num_base_bdevs_operational": 4, 00:22:52.744 "base_bdevs_list": [ 00:22:52.744 { 00:22:52.744 "name": "spare", 00:22:52.744 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:52.744 "is_configured": true, 00:22:52.744 "data_offset": 2048, 00:22:52.744 "data_size": 63488 00:22:52.744 }, 00:22:52.744 { 00:22:52.744 "name": "BaseBdev2", 00:22:52.744 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:52.744 "is_configured": true, 00:22:52.744 "data_offset": 2048, 00:22:52.744 "data_size": 63488 00:22:52.744 }, 00:22:52.744 { 00:22:52.744 "name": "BaseBdev3", 00:22:52.744 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:52.744 "is_configured": true, 00:22:52.744 "data_offset": 2048, 00:22:52.744 "data_size": 63488 00:22:52.744 }, 00:22:52.744 { 00:22:52.744 "name": "BaseBdev4", 00:22:52.744 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:52.744 "is_configured": true, 00:22:52.744 "data_offset": 2048, 
00:22:52.744 "data_size": 63488 00:22:52.744 } 00:22:52.744 ] 00:22:52.744 }' 00:22:52.744 11:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.744 11:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.312 11:33:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:53.312 [2024-11-26 11:33:11.406743] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:53.312 [2024-11-26 11:33:11.406776] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:53.312 [2024-11-26 11:33:11.406861] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:53.312 [2024-11-26 11:33:11.406996] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:53.312 [2024-11-26 11:33:11.407013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:22:53.312 11:33:11 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.312 11:33:11 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:53.571 11:33:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:53.571 11:33:11 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:53.571 11:33:11 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@12 -- # local i 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.571 11:33:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:53.830 /dev/nbd0 00:22:53.830 11:33:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:53.830 11:33:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:53.830 11:33:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:53.830 11:33:11 -- common/autotest_common.sh@867 -- # local i 00:22:53.830 11:33:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:53.830 11:33:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:53.830 11:33:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:53.830 11:33:11 -- common/autotest_common.sh@871 -- # break 00:22:53.830 11:33:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:53.830 11:33:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:53.830 11:33:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:53.830 1+0 records in 00:22:53.830 1+0 records out 00:22:53.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211263 s, 19.4 MB/s 00:22:53.830 11:33:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:53.830 11:33:11 -- common/autotest_common.sh@884 -- # size=4096 00:22:53.830 11:33:11 -- common/autotest_common.sh@885 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:53.830 11:33:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:53.830 11:33:11 -- common/autotest_common.sh@887 -- # return 0 00:22:53.830 11:33:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.830 11:33:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.830 11:33:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:54.090 /dev/nbd1 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:54.090 11:33:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:54.090 11:33:12 -- common/autotest_common.sh@867 -- # local i 00:22:54.090 11:33:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:54.090 11:33:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:54.090 11:33:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:54.090 11:33:12 -- common/autotest_common.sh@871 -- # break 00:22:54.090 11:33:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:54.090 11:33:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:54.090 11:33:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.090 1+0 records in 00:22:54.090 1+0 records out 00:22:54.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266533 s, 15.4 MB/s 00:22:54.090 11:33:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.090 11:33:12 -- common/autotest_common.sh@884 -- # size=4096 00:22:54.090 11:33:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.090 11:33:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:54.090 11:33:12 -- common/autotest_common.sh@887 -- # return 0 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.090 11:33:12 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:54.090 11:33:12 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@51 -- # local i 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.090 11:33:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@41 -- # break 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.349 11:33:12 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@41 -- # break 00:22:54.608 11:33:12 -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.608 11:33:12 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:54.608 11:33:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:54.608 11:33:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:54.608 11:33:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:54.867 11:33:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.126 [2024-11-26 11:33:13.195757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.126 [2024-11-26 11:33:13.195834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.126 [2024-11-26 11:33:13.195867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:22:55.126 [2024-11-26 11:33:13.195880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.126 [2024-11-26 11:33:13.198265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.126 [2024-11-26 11:33:13.198304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.126 [2024-11-26 11:33:13.198402] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:55.126 [2024-11-26 11:33:13.198448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.126 BaseBdev1 00:22:55.126 11:33:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.126 11:33:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:55.126 11:33:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:55.385 11:33:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:55.645 [2024-11-26 11:33:13.707835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:55.645 [2024-11-26 11:33:13.708053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.645 [2024-11-26 11:33:13.708094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:22:55.645 [2024-11-26 11:33:13.708111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.645 [2024-11-26 11:33:13.708584] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.645 [2024-11-26 11:33:13.708617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:55.645 [2024-11-26 11:33:13.708707] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev2 00:22:55.645 [2024-11-26 11:33:13.708724] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:55.645 [2024-11-26 11:33:13.708740] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.645 [2024-11-26 11:33:13.708770] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:22:55.645 [2024-11-26 11:33:13.708840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.645 BaseBdev2 00:22:55.645 11:33:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.645 11:33:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:55.645 11:33:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:55.904 11:33:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:55.904 [2024-11-26 11:33:14.119931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:55.904 [2024-11-26 11:33:14.120182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.904 [2024-11-26 11:33:14.120251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:22:55.904 [2024-11-26 11:33:14.120435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.904 [2024-11-26 11:33:14.120985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.904 [2024-11-26 11:33:14.121146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:55.904 [2024-11-26 11:33:14.121348] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:55.904 [2024-11-26 11:33:14.121500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:55.904 BaseBdev3 00:22:55.904 11:33:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.904 11:33:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:55.904 11:33:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:56.163 11:33:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:56.423 [2024-11-26 11:33:14.496039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:56.423 [2024-11-26 11:33:14.496276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.423 [2024-11-26 11:33:14.496330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:22:56.423 [2024-11-26 11:33:14.496346] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.423 [2024-11-26 11:33:14.496817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.423 [2024-11-26 11:33:14.496857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:56.423 [2024-11-26 11:33:14.496967] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:56.423 [2024-11-26 
11:33:14.497016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:56.423 BaseBdev4 00:22:56.423 11:33:14 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:56.682 11:33:14 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:56.682 [2024-11-26 11:33:14.900153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.682 [2024-11-26 11:33:14.900375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.682 [2024-11-26 11:33:14.900416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:22:56.682 [2024-11-26 11:33:14.900434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.682 [2024-11-26 11:33:14.901009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.682 [2024-11-26 11:33:14.901041] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.682 [2024-11-26 11:33:14.901120] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:56.683 [2024-11-26 11:33:14.901158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.683 spare 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.683 11:33:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.942 11:33:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.942 11:33:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.942 [2024-11-26 11:33:15.001336] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:22:56.942 [2024-11-26 11:33:15.001369] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:56.942 [2024-11-26 11:33:15.001474] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048a80 00:22:56.942 [2024-11-26 11:33:15.002223] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:22:56.942 [2024-11-26 11:33:15.002258] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:22:56.942 [2024-11-26 11:33:15.002414] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.942 11:33:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.942 "name": "raid_bdev1", 00:22:56.942 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:56.942 "strip_size_kb": 64, 00:22:56.942 "state": "online", 00:22:56.942 "raid_level": "raid5f", 00:22:56.942 
"superblock": true, 00:22:56.942 "num_base_bdevs": 4, 00:22:56.942 "num_base_bdevs_discovered": 4, 00:22:56.942 "num_base_bdevs_operational": 4, 00:22:56.942 "base_bdevs_list": [ 00:22:56.942 { 00:22:56.942 "name": "spare", 00:22:56.942 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:56.942 "is_configured": true, 00:22:56.942 "data_offset": 2048, 00:22:56.942 "data_size": 63488 00:22:56.942 }, 00:22:56.942 { 00:22:56.942 "name": "BaseBdev2", 00:22:56.942 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:56.942 "is_configured": true, 00:22:56.942 "data_offset": 2048, 00:22:56.942 "data_size": 63488 00:22:56.942 }, 00:22:56.942 { 00:22:56.942 "name": "BaseBdev3", 00:22:56.942 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:56.942 "is_configured": true, 00:22:56.942 "data_offset": 2048, 00:22:56.942 "data_size": 63488 00:22:56.942 }, 00:22:56.942 { 00:22:56.942 "name": "BaseBdev4", 00:22:56.942 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:56.942 "is_configured": true, 00:22:56.942 "data_offset": 2048, 00:22:56.942 "data_size": 63488 00:22:56.942 } 00:22:56.942 ] 00:22:56.942 }' 00:22:56.942 11:33:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.942 11:33:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.202 11:33:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.461 11:33:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.461 "name": "raid_bdev1", 00:22:57.461 "uuid": "827bcf13-7490-4ee8-9ce1-d3bbf78ad684", 00:22:57.461 "strip_size_kb": 64, 00:22:57.461 "state": "online", 00:22:57.461 "raid_level": "raid5f", 00:22:57.461 "superblock": true, 00:22:57.461 "num_base_bdevs": 4, 00:22:57.461 "num_base_bdevs_discovered": 4, 00:22:57.461 "num_base_bdevs_operational": 4, 00:22:57.461 "base_bdevs_list": [ 00:22:57.461 { 00:22:57.461 "name": "spare", 00:22:57.461 "uuid": "f05ff762-5036-5af0-acba-b112bbb6a7ec", 00:22:57.461 "is_configured": true, 00:22:57.461 "data_offset": 2048, 00:22:57.461 "data_size": 63488 00:22:57.461 }, 00:22:57.461 { 00:22:57.461 "name": "BaseBdev2", 00:22:57.461 "uuid": "33ce1788-f5d4-5699-8586-f8b0e162d19f", 00:22:57.461 "is_configured": true, 00:22:57.461 "data_offset": 2048, 00:22:57.461 "data_size": 63488 00:22:57.461 }, 00:22:57.461 { 00:22:57.461 "name": "BaseBdev3", 00:22:57.461 "uuid": "52cbce89-a97e-5d47-950b-6041a6f8e29b", 00:22:57.461 "is_configured": true, 00:22:57.461 "data_offset": 2048, 00:22:57.461 "data_size": 63488 00:22:57.461 }, 00:22:57.461 { 00:22:57.461 "name": "BaseBdev4", 00:22:57.461 "uuid": "db9a6c79-85a8-59ee-a7d5-44ae3850eb9d", 00:22:57.461 "is_configured": true, 00:22:57.461 "data_offset": 2048, 00:22:57.461 "data_size": 63488 00:22:57.461 } 00:22:57.461 ] 00:22:57.461 }' 00:22:57.461 11:33:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.461 11:33:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:57.461 11:33:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.721 
11:33:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:57.721 11:33:15 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.721 11:33:15 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:57.721 11:33:15 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.721 11:33:15 -- bdev/bdev_raid.sh@709 -- # killprocess 96293 00:22:57.721 11:33:15 -- common/autotest_common.sh@936 -- # '[' -z 96293 ']' 00:22:57.721 11:33:15 -- common/autotest_common.sh@940 -- # kill -0 96293 00:22:57.721 11:33:15 -- common/autotest_common.sh@941 -- # uname 00:22:57.721 11:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.721 11:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96293 00:22:57.721 killing process with pid 96293 00:22:57.721 Received shutdown signal, test time was about 60.000000 seconds 00:22:57.721 00:22:57.721 Latency(us) 00:22:57.721 [2024-11-26T11:33:15.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.721 [2024-11-26T11:33:15.951Z] =================================================================================================================== 00:22:57.721 [2024-11-26T11:33:15.951Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.721 11:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:57.721 11:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:57.721 11:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96293' 00:22:57.721 11:33:15 -- common/autotest_common.sh@955 -- # kill 96293 00:22:57.721 [2024-11-26 11:33:15.919049] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.721 11:33:15 -- common/autotest_common.sh@960 -- # wait 96293 00:22:57.721 [2024-11-26 11:33:15.919141] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.721 [2024-11-26 11:33:15.919262] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.721 [2024-11-26 11:33:15.919278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:22:57.721 [2024-11-26 11:33:15.948294] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:57.980 11:33:16 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:57.980 00:22:57.980 real 0m24.684s 00:22:57.980 user 0m35.933s 00:22:57.980 sys 0m3.139s 00:22:57.980 ************************************ 00:22:57.980 END TEST raid5f_rebuild_test_sb 00:22:57.980 ************************************ 00:22:57.980 11:33:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:57.980 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:57.980 11:33:16 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:22:57.980 ************************************ 00:22:57.980 END TEST bdev_raid 00:22:57.980 ************************************ 00:22:57.980 00:22:57.980 real 9m46.287s 00:22:57.980 user 15m44.517s 00:22:57.980 sys 1m33.049s 00:22:57.980 11:33:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:57.980 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:57.980 11:33:16 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:22:57.980 11:33:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:57.980 11:33:16 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:22:57.980 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:57.980 ************************************ 00:22:57.980 START TEST bdevperf_config 00:22:57.980 ************************************ 00:22:57.980 11:33:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:22:58.240 * Looking for test storage... 00:22:58.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:22:58.240 11:33:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:58.240 11:33:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:58.240 11:33:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:58.240 11:33:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:58.240 11:33:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:58.240 11:33:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:58.240 11:33:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:58.240 11:33:16 -- scripts/common.sh@335 -- # IFS=.-: 00:22:58.240 11:33:16 -- scripts/common.sh@335 -- # read -ra ver1 00:22:58.240 11:33:16 -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.240 11:33:16 -- scripts/common.sh@336 -- # read -ra ver2 00:22:58.240 11:33:16 -- scripts/common.sh@337 -- # local 'op=<' 00:22:58.240 11:33:16 -- scripts/common.sh@339 -- # ver1_l=2 00:22:58.240 11:33:16 -- scripts/common.sh@340 -- # ver2_l=1 00:22:58.240 11:33:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:58.240 11:33:16 -- scripts/common.sh@343 -- # case "$op" in 00:22:58.240 11:33:16 -- scripts/common.sh@344 -- # : 1 00:22:58.240 11:33:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:58.240 11:33:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.240 11:33:16 -- scripts/common.sh@364 -- # decimal 1 00:22:58.240 11:33:16 -- scripts/common.sh@352 -- # local d=1 00:22:58.240 11:33:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.240 11:33:16 -- scripts/common.sh@354 -- # echo 1 00:22:58.240 11:33:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:58.240 11:33:16 -- scripts/common.sh@365 -- # decimal 2 00:22:58.240 11:33:16 -- scripts/common.sh@352 -- # local d=2 00:22:58.240 11:33:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.240 11:33:16 -- scripts/common.sh@354 -- # echo 2 00:22:58.240 11:33:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:58.240 11:33:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:58.240 11:33:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:58.240 11:33:16 -- scripts/common.sh@367 -- # return 0 00:22:58.240 11:33:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.240 11:33:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:58.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.240 --rc genhtml_branch_coverage=1 00:22:58.240 --rc genhtml_function_coverage=1 00:22:58.240 --rc genhtml_legend=1 00:22:58.240 --rc geninfo_all_blocks=1 00:22:58.240 --rc geninfo_unexecuted_blocks=1 00:22:58.240 00:22:58.240 ' 00:22:58.240 11:33:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:58.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.240 --rc genhtml_branch_coverage=1 00:22:58.240 --rc genhtml_function_coverage=1 00:22:58.240 --rc genhtml_legend=1 00:22:58.240 --rc geninfo_all_blocks=1 00:22:58.240 --rc geninfo_unexecuted_blocks=1 00:22:58.240 00:22:58.240 ' 00:22:58.240 11:33:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:58.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.240 --rc genhtml_branch_coverage=1 00:22:58.240 --rc genhtml_function_coverage=1 00:22:58.240 --rc genhtml_legend=1 00:22:58.240 --rc geninfo_all_blocks=1 00:22:58.240 --rc geninfo_unexecuted_blocks=1 00:22:58.240 00:22:58.240 ' 00:22:58.240 11:33:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:58.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.240 --rc genhtml_branch_coverage=1 00:22:58.240 --rc genhtml_function_coverage=1 00:22:58.240 --rc genhtml_legend=1 00:22:58.240 --rc geninfo_all_blocks=1 00:22:58.240 --rc geninfo_unexecuted_blocks=1 00:22:58.240 00:22:58.240 ' 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:22:58.240 11:33:16 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:22:58.240 11:33:16 -- bdevperf/common.sh@8 -- # local job_section=global 00:22:58.240 11:33:16 -- bdevperf/common.sh@9 -- # local rw=read 00:22:58.240 11:33:16 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:58.240 11:33:16 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:22:58.240 11:33:16 -- bdevperf/common.sh@13 
-- # cat 00:22:58.240 11:33:16 -- bdevperf/common.sh@18 -- # job='[global]' 00:22:58.240 00:22:58.240 11:33:16 -- bdevperf/common.sh@19 -- # echo 00:22:58.240 11:33:16 -- bdevperf/common.sh@20 -- # cat 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@18 -- # create_job job0 00:22:58.240 11:33:16 -- bdevperf/common.sh@8 -- # local job_section=job0 00:22:58.240 11:33:16 -- bdevperf/common.sh@9 -- # local rw= 00:22:58.240 11:33:16 -- bdevperf/common.sh@10 -- # local filename= 00:22:58.240 11:33:16 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:22:58.240 11:33:16 -- bdevperf/common.sh@18 -- # job='[job0]' 00:22:58.240 00:22:58.240 11:33:16 -- bdevperf/common.sh@19 -- # echo 00:22:58.240 11:33:16 -- bdevperf/common.sh@20 -- # cat 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@19 -- # create_job job1 00:22:58.240 11:33:16 -- bdevperf/common.sh@8 -- # local job_section=job1 00:22:58.240 11:33:16 -- bdevperf/common.sh@9 -- # local rw= 00:22:58.240 11:33:16 -- bdevperf/common.sh@10 -- # local filename= 00:22:58.240 11:33:16 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:22:58.240 11:33:16 -- bdevperf/common.sh@18 -- # job='[job1]' 00:22:58.240 00:22:58.240 11:33:16 -- bdevperf/common.sh@19 -- # echo 00:22:58.240 11:33:16 -- bdevperf/common.sh@20 -- # cat 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@20 -- # create_job job2 00:22:58.240 11:33:16 -- bdevperf/common.sh@8 -- # local job_section=job2 00:22:58.240 11:33:16 -- bdevperf/common.sh@9 -- # local rw= 00:22:58.240 11:33:16 -- bdevperf/common.sh@10 -- # local filename= 00:22:58.240 11:33:16 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:22:58.240 11:33:16 -- bdevperf/common.sh@18 -- # job='[job2]' 00:22:58.240 00:22:58.240 11:33:16 -- bdevperf/common.sh@19 -- # echo 00:22:58.240 11:33:16 -- bdevperf/common.sh@20 -- # cat 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@21 -- # create_job job3 00:22:58.240 11:33:16 -- bdevperf/common.sh@8 -- # local job_section=job3 00:22:58.240 11:33:16 -- bdevperf/common.sh@9 -- # local rw= 00:22:58.240 11:33:16 -- bdevperf/common.sh@10 -- # local filename= 00:22:58.240 11:33:16 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:22:58.240 00:22:58.240 11:33:16 -- bdevperf/common.sh@18 -- # job='[job3]' 00:22:58.240 11:33:16 -- bdevperf/common.sh@19 -- # echo 00:22:58.240 11:33:16 -- bdevperf/common.sh@20 -- # cat 00:22:58.240 11:33:16 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:00.775 11:33:19 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-26 11:33:16.468034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:00.775 [2024-11-26 11:33:16.468228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96984 ] 00:23:00.775 Using job config with 4 jobs 00:23:00.775 [2024-11-26 11:33:16.633967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.775 [2024-11-26 11:33:16.683908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.775 cpumask for '\''job0'\'' is too big 00:23:00.775 cpumask for '\''job1'\'' is too big 00:23:00.775 cpumask for '\''job2'\'' is too big 00:23:00.775 cpumask for '\''job3'\'' is too big 00:23:00.775 Running I/O for 2 seconds... 00:23:00.775 00:23:00.775 Latency(us) 00:23:00.775 [2024-11-26T11:33:19.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.775 [2024-11-26T11:33:19.005Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.775 Malloc0 : 2.01 31661.48 30.92 0.00 0.00 8075.97 1623.51 14120.03 00:23:00.775 [2024-11-26T11:33:19.005Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.775 Malloc0 : 2.01 31640.58 30.90 0.00 0.00 8064.99 1534.14 12451.84 00:23:00.775 [2024-11-26T11:33:19.005Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.775 Malloc0 : 2.02 31620.07 30.88 0.00 0.00 8054.39 1526.69 10843.23 00:23:00.775 [2024-11-26T11:33:19.005Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.775 Malloc0 : 2.02 31694.59 30.95 0.00 0.00 8020.03 685.15 9949.56 00:23:00.775 [2024-11-26T11:33:19.005Z] =================================================================================================================== 00:23:00.775 [2024-11-26T11:33:19.005Z] Total : 126616.72 123.65 0.00 0.00 8053.81 685.15 14120.03' 00:23:00.775 11:33:19 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-26 11:33:16.468034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:00.775 [2024-11-26 11:33:16.468228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96984 ] 00:23:00.775 Using job config with 4 jobs 00:23:00.775 [2024-11-26 11:33:16.633967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.775 [2024-11-26 11:33:16.683908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.775 cpumask for '\''job0'\'' is too big 00:23:00.775 cpumask for '\''job1'\'' is too big 00:23:00.775 cpumask for '\''job2'\'' is too big 00:23:00.775 cpumask for '\''job3'\'' is too big 00:23:00.775 Running I/O for 2 seconds... 
00:23:00.775 00:23:00.775 Latency(us) 00:23:00.775 [2024-11-26T11:33:19.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.775 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.01 31661.48 30.92 0.00 0.00 8075.97 1623.51 14120.03 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.01 31640.58 30.90 0.00 0.00 8064.99 1534.14 12451.84 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.02 31620.07 30.88 0.00 0.00 8054.39 1526.69 10843.23 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.02 31694.59 30.95 0.00 0.00 8020.03 685.15 9949.56 00:23:00.776 [2024-11-26T11:33:19.006Z] =================================================================================================================== 00:23:00.776 [2024-11-26T11:33:19.006Z] Total : 126616.72 123.65 0.00 0.00 8053.81 685.15 14120.03' 00:23:00.776 11:33:19 -- bdevperf/common.sh@32 -- # echo '[2024-11-26 11:33:16.468034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:00.776 [2024-11-26 11:33:16.468228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96984 ] 00:23:00.776 Using job config with 4 jobs 00:23:00.776 [2024-11-26 11:33:16.633967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.776 [2024-11-26 11:33:16.683908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.776 cpumask for '\''job0'\'' is too big 00:23:00.776 cpumask for '\''job1'\'' is too big 00:23:00.776 cpumask for '\''job2'\'' is too big 00:23:00.776 cpumask for '\''job3'\'' is too big 00:23:00.776 Running I/O for 2 seconds... 
00:23:00.776 00:23:00.776 Latency(us) 00:23:00.776 [2024-11-26T11:33:19.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.01 31661.48 30.92 0.00 0.00 8075.97 1623.51 14120.03 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.01 31640.58 30.90 0.00 0.00 8064.99 1534.14 12451.84 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.02 31620.07 30.88 0.00 0.00 8054.39 1526.69 10843.23 00:23:00.776 [2024-11-26T11:33:19.006Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:00.776 Malloc0 : 2.02 31694.59 30.95 0.00 0.00 8020.03 685.15 9949.56 00:23:00.776 [2024-11-26T11:33:19.006Z] =================================================================================================================== 00:23:00.776 [2024-11-26T11:33:19.006Z] Total : 126616.72 123.65 0.00 0.00 8053.81 685.15 14120.03' 00:23:00.776 11:33:19 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:01.035 11:33:19 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:01.035 11:33:19 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:01.035 11:33:19 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:01.035 [2024-11-26 11:33:19.076373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:01.035 [2024-11-26 11:33:19.076594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97014 ] 00:23:01.035 [2024-11-26 11:33:19.241205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.294 [2024-11-26 11:33:19.287818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.294 cpumask for 'job0' is too big 00:23:01.294 cpumask for 'job1' is too big 00:23:01.294 cpumask for 'job2' is too big 00:23:01.294 cpumask for 'job3' is too big 00:23:03.828 11:33:21 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:23:03.828 Running I/O for 2 seconds... 
00:23:03.828 00:23:03.828 Latency(us) 00:23:03.828 [2024-11-26T11:33:22.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.828 [2024-11-26T11:33:22.058Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:03.828 Malloc0 : 2.01 31689.01 30.95 0.00 0.00 8068.26 1645.85 14417.92 00:23:03.828 [2024-11-26T11:33:22.058Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:03.828 Malloc0 : 2.01 31668.31 30.93 0.00 0.00 8056.16 1675.64 12451.84 00:23:03.828 [2024-11-26T11:33:22.058Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:03.828 Malloc0 : 2.02 31707.73 30.96 0.00 0.00 8029.35 1630.95 10545.34 00:23:03.828 [2024-11-26T11:33:22.058Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:03.828 Malloc0 : 2.02 31684.39 30.94 0.00 0.00 8019.69 1563.93 10545.34 00:23:03.828 [2024-11-26T11:33:22.058Z] =================================================================================================================== 00:23:03.828 [2024-11-26T11:33:22.058Z] Total : 126749.45 123.78 0.00 0.00 8043.33 1563.93 14417.92' 00:23:03.828 11:33:21 -- bdevperf/test_config.sh@27 -- # cleanup 00:23:03.828 11:33:21 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:03.828 11:33:21 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:23:03.828 11:33:21 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:03.828 11:33:21 -- bdevperf/common.sh@9 -- # local rw=write 00:23:03.829 11:33:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:03.829 11:33:21 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:03.829 11:33:21 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:03.829 00:23:03.829 11:33:21 -- bdevperf/common.sh@19 -- # echo 00:23:03.829 11:33:21 -- bdevperf/common.sh@20 -- # cat 00:23:03.829 11:33:21 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:23:03.829 11:33:21 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:03.829 11:33:21 -- bdevperf/common.sh@9 -- # local rw=write 00:23:03.829 11:33:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:03.829 11:33:21 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:03.829 11:33:21 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:03.829 00:23:03.829 11:33:21 -- bdevperf/common.sh@19 -- # echo 00:23:03.829 11:33:21 -- bdevperf/common.sh@20 -- # cat 00:23:03.829 11:33:21 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:23:03.829 11:33:21 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:03.829 11:33:21 -- bdevperf/common.sh@9 -- # local rw=write 00:23:03.829 11:33:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:03.829 11:33:21 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:03.829 00:23:03.829 11:33:21 -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:03.829 11:33:21 -- bdevperf/common.sh@19 -- # echo 00:23:03.829 11:33:21 -- bdevperf/common.sh@20 -- # cat 00:23:03.829 11:33:21 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:06.363 11:33:24 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-26 11:33:21.681659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:06.363 [2024-11-26 11:33:21.682361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97054 ] 00:23:06.363 Using job config with 3 jobs 00:23:06.363 [2024-11-26 11:33:21.846077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.363 [2024-11-26 11:33:21.887793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.363 cpumask for '\''job0'\'' is too big 00:23:06.363 cpumask for '\''job1'\'' is too big 00:23:06.363 cpumask for '\''job2'\'' is too big 00:23:06.363 Running I/O for 2 seconds... 00:23:06.363 00:23:06.363 Latency(us) 00:23:06.363 [2024-11-26T11:33:24.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.363 [2024-11-26T11:33:24.593Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.363 Malloc0 : 2.01 42263.26 41.27 0.00 0.00 6052.05 1474.56 9234.62 00:23:06.363 [2024-11-26T11:33:24.593Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.363 Malloc0 : 2.01 42235.47 41.25 0.00 0.00 6045.58 1429.88 7745.16 00:23:06.363 [2024-11-26T11:33:24.593Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.363 Malloc0 : 2.01 42204.81 41.22 0.00 0.00 6039.21 1437.32 7626.01 00:23:06.363 [2024-11-26T11:33:24.593Z] =================================================================================================================== 00:23:06.363 [2024-11-26T11:33:24.593Z] Total : 126703.53 123.73 0.00 0.00 6045.61 1429.88 9234.62' 00:23:06.363 11:33:24 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-26 11:33:21.681659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:06.363 [2024-11-26 11:33:21.682361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97054 ] 00:23:06.363 Using job config with 3 jobs 00:23:06.363 [2024-11-26 11:33:21.846077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.363 [2024-11-26 11:33:21.887793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.363 cpumask for '\''job0'\'' is too big 00:23:06.363 cpumask for '\''job1'\'' is too big 00:23:06.363 cpumask for '\''job2'\'' is too big 00:23:06.363 Running I/O for 2 seconds... 
00:23:06.363 00:23:06.363 Latency(us) 00:23:06.363 [2024-11-26T11:33:24.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.363 [2024-11-26T11:33:24.594Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.364 Malloc0 : 2.01 42263.26 41.27 0.00 0.00 6052.05 1474.56 9234.62 00:23:06.364 [2024-11-26T11:33:24.594Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.364 Malloc0 : 2.01 42235.47 41.25 0.00 0.00 6045.58 1429.88 7745.16 00:23:06.364 [2024-11-26T11:33:24.594Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.364 Malloc0 : 2.01 42204.81 41.22 0.00 0.00 6039.21 1437.32 7626.01 00:23:06.364 [2024-11-26T11:33:24.594Z] =================================================================================================================== 00:23:06.364 [2024-11-26T11:33:24.594Z] Total : 126703.53 123.73 0.00 0.00 6045.61 1429.88 9234.62' 00:23:06.364 11:33:24 -- bdevperf/common.sh@32 -- # echo '[2024-11-26 11:33:21.681659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:06.364 [2024-11-26 11:33:21.682361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97054 ] 00:23:06.364 Using job config with 3 jobs 00:23:06.364 [2024-11-26 11:33:21.846077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.364 [2024-11-26 11:33:21.887793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.364 cpumask for '\''job0'\'' is too big 00:23:06.364 cpumask for '\''job1'\'' is too big 00:23:06.364 cpumask for '\''job2'\'' is too big 00:23:06.364 Running I/O for 2 seconds... 
00:23:06.364 00:23:06.364 Latency(us) 00:23:06.364 [2024-11-26T11:33:24.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.364 [2024-11-26T11:33:24.594Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.364 Malloc0 : 2.01 42263.26 41.27 0.00 0.00 6052.05 1474.56 9234.62 00:23:06.364 [2024-11-26T11:33:24.594Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.364 Malloc0 : 2.01 42235.47 41.25 0.00 0.00 6045.58 1429.88 7745.16 00:23:06.364 [2024-11-26T11:33:24.594Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:06.364 Malloc0 : 2.01 42204.81 41.22 0.00 0.00 6039.21 1437.32 7626.01 00:23:06.364 [2024-11-26T11:33:24.594Z] =================================================================================================================== 00:23:06.364 [2024-11-26T11:33:24.594Z] Total : 126703.53 123.73 0.00 0.00 6045.61 1429.88 9234.62' 00:23:06.364 11:33:24 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:06.364 11:33:24 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@35 -- # cleanup 00:23:06.364 11:33:24 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:06.364 11:33:24 -- bdevperf/common.sh@8 -- # local job_section=global 00:23:06.364 11:33:24 -- bdevperf/common.sh@9 -- # local rw=rw 00:23:06.364 11:33:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:06.364 11:33:24 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:06.364 11:33:24 -- bdevperf/common.sh@13 -- # cat 00:23:06.364 11:33:24 -- bdevperf/common.sh@18 -- # job='[global]' 00:23:06.364 00:23:06.364 11:33:24 -- bdevperf/common.sh@19 -- # echo 00:23:06.364 11:33:24 -- bdevperf/common.sh@20 -- # cat 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:06.364 11:33:24 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:06.364 11:33:24 -- bdevperf/common.sh@9 -- # local rw= 00:23:06.364 11:33:24 -- bdevperf/common.sh@10 -- # local filename= 00:23:06.364 11:33:24 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:06.364 11:33:24 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:06.364 00:23:06.364 11:33:24 -- bdevperf/common.sh@19 -- # echo 00:23:06.364 11:33:24 -- bdevperf/common.sh@20 -- # cat 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:06.364 11:33:24 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:06.364 11:33:24 -- bdevperf/common.sh@9 -- # local rw= 00:23:06.364 11:33:24 -- bdevperf/common.sh@10 -- # local filename= 00:23:06.364 11:33:24 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:06.364 00:23:06.364 11:33:24 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:06.364 11:33:24 -- bdevperf/common.sh@19 -- # echo 00:23:06.364 11:33:24 -- bdevperf/common.sh@20 -- # cat 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@40 -- # create_job job2 00:23:06.364 11:33:24 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:06.364 11:33:24 -- bdevperf/common.sh@9 -- # local rw= 00:23:06.364 11:33:24 -- bdevperf/common.sh@10 -- # local filename= 00:23:06.364 11:33:24 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:06.364 00:23:06.364 11:33:24 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:23:06.364 11:33:24 -- bdevperf/common.sh@19 -- # echo 00:23:06.364 11:33:24 -- bdevperf/common.sh@20 -- # cat 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:06.364 11:33:24 -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:06.364 11:33:24 -- bdevperf/common.sh@9 -- # local rw= 00:23:06.364 11:33:24 -- bdevperf/common.sh@10 -- # local filename= 00:23:06.364 11:33:24 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:06.364 00:23:06.364 11:33:24 -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:06.364 11:33:24 -- bdevperf/common.sh@19 -- # echo 00:23:06.364 11:33:24 -- bdevperf/common.sh@20 -- # cat 00:23:06.364 11:33:24 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:08.899 11:33:26 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-26 11:33:24.290457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.899 [2024-11-26 11:33:24.290657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97092 ] 00:23:08.899 Using job config with 4 jobs 00:23:08.899 [2024-11-26 11:33:24.455069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.899 [2024-11-26 11:33:24.508383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.899 cpumask for '\''job0'\'' is too big 00:23:08.899 cpumask for '\''job1'\'' is too big 00:23:08.899 cpumask for '\''job2'\'' is too big 00:23:08.899 cpumask for '\''job3'\'' is too big 00:23:08.899 Running I/O for 2 seconds... 
00:23:08.899 00:23:08.899 Latency(us) 00:23:08.899 [2024-11-26T11:33:27.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc0 : 2.02 15595.88 15.23 0.00 0.00 16399.02 3261.91 26691.03 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc1 : 2.02 15585.29 15.22 0.00 0.00 16395.35 3872.58 26571.87 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc0 : 2.03 15608.64 15.24 0.00 0.00 16327.55 3112.96 23354.65 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc1 : 2.04 15598.04 15.23 0.00 0.00 16324.84 3768.32 23235.49 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc0 : 2.04 15588.11 15.22 0.00 0.00 16284.51 3068.28 19660.80 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc1 : 2.04 15577.72 15.21 0.00 0.00 16280.74 3783.21 19541.64 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc0 : 2.04 15567.69 15.20 0.00 0.00 16247.16 3038.49 19899.11 00:23:08.899 [2024-11-26T11:33:27.129Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.899 Malloc1 : 2.04 15557.33 15.19 0.00 0.00 16243.49 3634.27 20018.27 00:23:08.899 [2024-11-26T11:33:27.129Z] =================================================================================================================== 00:23:08.899 [2024-11-26T11:33:27.130Z] Total : 124678.70 121.76 0.00 0.00 16312.66 3038.49 26691.03' 00:23:08.900 11:33:26 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-26 11:33:24.290457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.900 [2024-11-26 11:33:24.290657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97092 ] 00:23:08.900 Using job config with 4 jobs 00:23:08.900 [2024-11-26 11:33:24.455069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.900 [2024-11-26 11:33:24.508383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.900 cpumask for '\''job0'\'' is too big 00:23:08.900 cpumask for '\''job1'\'' is too big 00:23:08.900 cpumask for '\''job2'\'' is too big 00:23:08.900 cpumask for '\''job3'\'' is too big 00:23:08.900 Running I/O for 2 seconds... 
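[Editor's note] The same Latency table appears three times in this stretch only because set -x re-prints the captured $bdevperf_output string each time it is handed to get_num_jobs and echo; there is one underlying run. Columns are per-job runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out IOs per second, and average/min/max latency in microseconds. Throughput and the Total row follow directly from IOPS at the 1024-byte IO size, which a quick check confirms (the earlier write test sums to 126703.54 against a reported 126703.53, i.e. per-job rounding):

    # MiB/s = IOPS * io_size / 2^20, with io_size = 1024 bytes
    awk 'BEGIN { printf "%.2f\n", 15595.88 * 1024 / 1048576 }'    # 15.23, matches Malloc0
    # Total row = sum of the eight per-job IOPS values
    awk 'BEGIN { printf "%.2f\n", 15595.88+15585.29+15608.64+15598.04+15588.11+15577.72+15567.69+15557.33 }'    # 124678.70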
00:23:08.900 00:23:08.900 Latency(us) 00:23:08.900 [2024-11-26T11:33:27.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.02 15595.88 15.23 0.00 0.00 16399.02 3261.91 26691.03 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.02 15585.29 15.22 0.00 0.00 16395.35 3872.58 26571.87 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.03 15608.64 15.24 0.00 0.00 16327.55 3112.96 23354.65 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.04 15598.04 15.23 0.00 0.00 16324.84 3768.32 23235.49 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.04 15588.11 15.22 0.00 0.00 16284.51 3068.28 19660.80 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.04 15577.72 15.21 0.00 0.00 16280.74 3783.21 19541.64 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.04 15567.69 15.20 0.00 0.00 16247.16 3038.49 19899.11 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.04 15557.33 15.19 0.00 0.00 16243.49 3634.27 20018.27 00:23:08.900 [2024-11-26T11:33:27.130Z] =================================================================================================================== 00:23:08.900 [2024-11-26T11:33:27.130Z] Total : 124678.70 121.76 0.00 0.00 16312.66 3038.49 26691.03' 00:23:08.900 11:33:26 -- bdevperf/common.sh@32 -- # echo '[2024-11-26 11:33:24.290457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.900 [2024-11-26 11:33:24.290657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97092 ] 00:23:08.900 Using job config with 4 jobs 00:23:08.900 [2024-11-26 11:33:24.455069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.900 [2024-11-26 11:33:24.508383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.900 cpumask for '\''job0'\'' is too big 00:23:08.900 cpumask for '\''job1'\'' is too big 00:23:08.900 cpumask for '\''job2'\'' is too big 00:23:08.900 cpumask for '\''job3'\'' is too big 00:23:08.900 Running I/O for 2 seconds... 
00:23:08.900 00:23:08.900 Latency(us) 00:23:08.900 [2024-11-26T11:33:27.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.02 15595.88 15.23 0.00 0.00 16399.02 3261.91 26691.03 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.02 15585.29 15.22 0.00 0.00 16395.35 3872.58 26571.87 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.03 15608.64 15.24 0.00 0.00 16327.55 3112.96 23354.65 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.04 15598.04 15.23 0.00 0.00 16324.84 3768.32 23235.49 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.04 15588.11 15.22 0.00 0.00 16284.51 3068.28 19660.80 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.04 15577.72 15.21 0.00 0.00 16280.74 3783.21 19541.64 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc0 : 2.04 15567.69 15.20 0.00 0.00 16247.16 3038.49 19899.11 00:23:08.900 [2024-11-26T11:33:27.130Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:08.900 Malloc1 : 2.04 15557.33 15.19 0.00 0.00 16243.49 3634.27 20018.27 00:23:08.900 [2024-11-26T11:33:27.130Z] =================================================================================================================== 00:23:08.900 [2024-11-26T11:33:27.130Z] Total : 124678.70 121.76 0.00 0.00 16312.66 3038.49 26691.03' 00:23:08.900 11:33:26 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:08.900 11:33:26 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:08.900 11:33:26 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:23:08.900 11:33:26 -- bdevperf/test_config.sh@44 -- # cleanup 00:23:08.900 11:33:26 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:08.900 11:33:26 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:08.900 00:23:08.900 real 0m10.648s 00:23:08.900 user 0m9.287s 00:23:08.900 sys 0m0.872s 00:23:08.900 11:33:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:08.900 ************************************ 00:23:08.900 END TEST bdevperf_config 00:23:08.900 ************************************ 00:23:08.900 11:33:26 -- common/autotest_common.sh@10 -- # set +x 00:23:08.900 11:33:26 -- spdk/autotest.sh@185 -- # uname -s 00:23:08.900 11:33:26 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]] 00:23:08.900 11:33:26 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:08.900 11:33:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:08.900 11:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:08.900 11:33:26 -- common/autotest_common.sh@10 -- # set +x 00:23:08.900 ************************************ 00:23:08.900 START TEST reactor_set_interrupt 00:23:08.900 
************************************ 00:23:08.900 11:33:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:08.900 * Looking for test storage... 00:23:08.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:08.900 11:33:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:08.900 11:33:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:08.900 11:33:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:08.900 11:33:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:08.900 11:33:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:08.900 11:33:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:08.900 11:33:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:08.900 11:33:27 -- scripts/common.sh@335 -- # IFS=.-: 00:23:08.900 11:33:27 -- scripts/common.sh@335 -- # read -ra ver1 00:23:08.900 11:33:27 -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.900 11:33:27 -- scripts/common.sh@336 -- # read -ra ver2 00:23:08.900 11:33:27 -- scripts/common.sh@337 -- # local 'op=<' 00:23:08.900 11:33:27 -- scripts/common.sh@339 -- # ver1_l=2 00:23:08.900 11:33:27 -- scripts/common.sh@340 -- # ver2_l=1 00:23:08.900 11:33:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:08.900 11:33:27 -- scripts/common.sh@343 -- # case "$op" in 00:23:08.900 11:33:27 -- scripts/common.sh@344 -- # : 1 00:23:08.900 11:33:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:08.900 11:33:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.900 11:33:27 -- scripts/common.sh@364 -- # decimal 1 00:23:08.900 11:33:27 -- scripts/common.sh@352 -- # local d=1 00:23:08.900 11:33:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.900 11:33:27 -- scripts/common.sh@354 -- # echo 1 00:23:08.900 11:33:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:08.900 11:33:27 -- scripts/common.sh@365 -- # decimal 2 00:23:08.900 11:33:27 -- scripts/common.sh@352 -- # local d=2 00:23:08.900 11:33:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.900 11:33:27 -- scripts/common.sh@354 -- # echo 2 00:23:08.900 11:33:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:08.900 11:33:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:08.900 11:33:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:08.900 11:33:27 -- scripts/common.sh@367 -- # return 0 00:23:08.900 11:33:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.900 11:33:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:08.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.900 --rc genhtml_branch_coverage=1 00:23:08.900 --rc genhtml_function_coverage=1 00:23:08.900 --rc genhtml_legend=1 00:23:08.900 --rc geninfo_all_blocks=1 00:23:08.900 --rc geninfo_unexecuted_blocks=1 00:23:08.900 00:23:08.900 ' 00:23:08.900 11:33:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:08.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.900 --rc genhtml_branch_coverage=1 00:23:08.900 --rc genhtml_function_coverage=1 00:23:08.900 --rc genhtml_legend=1 00:23:08.900 --rc geninfo_all_blocks=1 00:23:08.901 --rc geninfo_unexecuted_blocks=1 00:23:08.901 00:23:08.901 ' 00:23:08.901 11:33:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:08.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.901 --rc genhtml_branch_coverage=1 
00:23:08.901 --rc genhtml_function_coverage=1 00:23:08.901 --rc genhtml_legend=1 00:23:08.901 --rc geninfo_all_blocks=1 00:23:08.901 --rc geninfo_unexecuted_blocks=1 00:23:08.901 00:23:08.901 ' 00:23:08.901 11:33:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:08.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.901 --rc genhtml_branch_coverage=1 00:23:08.901 --rc genhtml_function_coverage=1 00:23:08.901 --rc genhtml_legend=1 00:23:08.901 --rc geninfo_all_blocks=1 00:23:08.901 --rc geninfo_unexecuted_blocks=1 00:23:08.901 00:23:08.901 ' 00:23:08.901 11:33:27 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:23:08.901 11:33:27 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:08.901 11:33:27 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:08.901 11:33:27 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:08.901 11:33:27 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:23:08.901 11:33:27 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:08.901 11:33:27 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:08.901 11:33:27 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:08.901 11:33:27 -- common/autotest_common.sh@34 -- # set -e 00:23:08.901 11:33:27 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:08.901 11:33:27 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:08.901 11:33:27 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:08.901 11:33:27 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:08.901 11:33:27 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:08.901 11:33:27 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:08.901 11:33:27 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:08.901 11:33:27 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:08.901 11:33:27 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:08.901 11:33:27 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:08.901 11:33:27 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:08.901 11:33:27 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:08.901 11:33:27 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:08.901 11:33:27 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:08.901 11:33:27 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:08.901 11:33:27 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:08.901 11:33:27 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:08.901 11:33:27 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:08.901 11:33:27 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:08.901 11:33:27 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:08.901 11:33:27 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:08.901 11:33:27 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:08.901 11:33:27 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:08.901 11:33:27 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:08.901 11:33:27 -- common/build_config.sh@21 -- 
# CONFIG_ISCSI_INITIATOR=y 00:23:08.901 11:33:27 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:08.901 11:33:27 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:08.901 11:33:27 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:08.901 11:33:27 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:08.901 11:33:27 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:08.901 11:33:27 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:08.901 11:33:27 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:23:08.901 11:33:27 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:08.901 11:33:27 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:08.901 11:33:27 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:08.901 11:33:27 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:08.901 11:33:27 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:08.901 11:33:27 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:08.901 11:33:27 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:08.901 11:33:27 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:23:08.901 11:33:27 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:08.901 11:33:27 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:08.901 11:33:27 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:08.901 11:33:27 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:08.901 11:33:27 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:23:08.901 11:33:27 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:08.901 11:33:27 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:23:08.901 11:33:27 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:08.901 11:33:27 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:08.901 11:33:27 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:23:08.901 11:33:27 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:23:08.901 11:33:27 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:08.901 11:33:27 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:23:08.901 11:33:27 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:23:08.901 11:33:27 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:23:08.901 11:33:27 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:23:08.901 11:33:27 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:23:08.901 11:33:27 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:23:08.901 11:33:27 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:23:08.901 11:33:27 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:23:08.901 11:33:27 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:23:08.901 11:33:27 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:23:08.901 11:33:27 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:23:08.901 11:33:27 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:23:08.901 11:33:27 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:08.901 11:33:27 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:23:08.901 11:33:27 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:23:08.901 11:33:27 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:23:08.901 11:33:27 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:23:08.901 11:33:27 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:08.901 11:33:27 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:23:08.901 11:33:27 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:23:08.901 11:33:27 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:23:08.901 11:33:27 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:23:08.901 11:33:27 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:23:08.901 11:33:27 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:23:08.901 11:33:27 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:23:08.901 11:33:27 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:23:08.901 11:33:27 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:23:08.901 11:33:27 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:23:08.901 11:33:27 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:08.901 11:33:27 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:23:08.901 11:33:27 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:23:08.901 11:33:27 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:08.901 11:33:27 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:08.901 11:33:27 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:09.162 11:33:27 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:09.162 11:33:27 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:09.162 11:33:27 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:09.162 11:33:27 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:09.162 11:33:27 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:09.162 11:33:27 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:09.162 11:33:27 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:09.162 11:33:27 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:09.162 11:33:27 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:09.162 11:33:27 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:09.162 11:33:27 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:09.162 11:33:27 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:09.162 11:33:27 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:09.162 #define SPDK_CONFIG_H 00:23:09.162 #define SPDK_CONFIG_APPS 1 00:23:09.162 #define SPDK_CONFIG_ARCH native 00:23:09.162 #define SPDK_CONFIG_ASAN 1 00:23:09.162 #undef SPDK_CONFIG_AVAHI 00:23:09.162 #undef SPDK_CONFIG_CET 00:23:09.162 #define SPDK_CONFIG_COVERAGE 1 00:23:09.162 #define SPDK_CONFIG_CROSS_PREFIX 00:23:09.162 #undef SPDK_CONFIG_CRYPTO 00:23:09.162 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:09.162 #undef SPDK_CONFIG_CUSTOMOCF 00:23:09.162 #undef SPDK_CONFIG_DAOS 00:23:09.162 #define SPDK_CONFIG_DAOS_DIR 00:23:09.162 #define SPDK_CONFIG_DEBUG 1 00:23:09.162 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:09.162 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:23:09.162 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:23:09.162 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:23:09.162 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:09.162 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 
00:23:09.162 #define SPDK_CONFIG_EXAMPLES 1 00:23:09.162 #undef SPDK_CONFIG_FC 00:23:09.162 #define SPDK_CONFIG_FC_PATH 00:23:09.162 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:09.162 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:09.162 #undef SPDK_CONFIG_FUSE 00:23:09.162 #undef SPDK_CONFIG_FUZZER 00:23:09.162 #define SPDK_CONFIG_FUZZER_LIB 00:23:09.162 #undef SPDK_CONFIG_GOLANG 00:23:09.162 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:09.162 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:09.162 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:09.162 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:09.162 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:09.162 #define SPDK_CONFIG_IDXD 1 00:23:09.162 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:09.162 #undef SPDK_CONFIG_IPSEC_MB 00:23:09.162 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:09.162 #define SPDK_CONFIG_ISAL 1 00:23:09.162 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:09.162 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:09.162 #define SPDK_CONFIG_LIBDIR 00:23:09.162 #undef SPDK_CONFIG_LTO 00:23:09.162 #define SPDK_CONFIG_MAX_LCORES 00:23:09.162 #define SPDK_CONFIG_NVME_CUSE 1 00:23:09.162 #undef SPDK_CONFIG_OCF 00:23:09.162 #define SPDK_CONFIG_OCF_PATH 00:23:09.162 #define SPDK_CONFIG_OPENSSL_PATH 00:23:09.162 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:09.162 #undef SPDK_CONFIG_PGO_USE 00:23:09.162 #define SPDK_CONFIG_PREFIX /usr/local 00:23:09.162 #define SPDK_CONFIG_RAID5F 1 00:23:09.162 #undef SPDK_CONFIG_RBD 00:23:09.162 #define SPDK_CONFIG_RDMA 1 00:23:09.162 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:09.162 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:09.162 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:09.162 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:09.162 #undef SPDK_CONFIG_SHARED 00:23:09.162 #undef SPDK_CONFIG_SMA 00:23:09.162 #define SPDK_CONFIG_TESTS 1 00:23:09.162 #undef SPDK_CONFIG_TSAN 00:23:09.162 #define SPDK_CONFIG_UBLK 1 00:23:09.162 #define SPDK_CONFIG_UBSAN 1 00:23:09.162 #define SPDK_CONFIG_UNIT_TESTS 1 00:23:09.162 #undef SPDK_CONFIG_URING 00:23:09.162 #define SPDK_CONFIG_URING_PATH 00:23:09.162 #undef SPDK_CONFIG_URING_ZNS 00:23:09.162 #undef SPDK_CONFIG_USDT 00:23:09.162 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:09.162 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:09.163 #undef SPDK_CONFIG_VFIO_USER 00:23:09.163 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:09.163 #define SPDK_CONFIG_VHOST 1 00:23:09.163 #define SPDK_CONFIG_VIRTIO 1 00:23:09.163 #undef SPDK_CONFIG_VTUNE 00:23:09.163 #define SPDK_CONFIG_VTUNE_DIR 00:23:09.163 #define SPDK_CONFIG_WERROR 1 00:23:09.163 #define SPDK_CONFIG_WPDK_DIR 00:23:09.163 #undef SPDK_CONFIG_XNVME 00:23:09.163 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:09.163 11:33:27 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:09.163 11:33:27 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.163 11:33:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.163 11:33:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.163 11:33:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.163 11:33:27 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:09.163 11:33:27 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:09.163 11:33:27 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:09.163 11:33:27 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:09.163 11:33:27 -- paths/export.sh@6 -- # export PATH 00:23:09.163 11:33:27 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:09.163 11:33:27 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:09.163 11:33:27 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:09.163 11:33:27 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:09.163 11:33:27 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:09.163 11:33:27 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:09.163 11:33:27 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:09.163 11:33:27 -- pm/common@16 -- # TEST_TAG=N/A 00:23:09.163 11:33:27 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:09.163 11:33:27 -- common/autotest_common.sh@52 -- # : 1 00:23:09.163 
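[Editor's note] From this point the trace walks autotest_common.sh's long table of feature switches. Each "# : <value>" line followed by "# export SPDK_..." is consistent with the standard bash default-assignment idiom, where : is the no-op builtin that forces the ${VAR:=default} expansion without using its result. A hedged sketch of that pattern, not the file's literal text:

    : "${RUN_NIGHTLY:=1}"                    # assign 1 only if unset or empty
    export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"    # defaults need not be numeric
    export SPDK_TEST_NVMF_TRANSPORT

This is what lets the upstream job flip individual SPDK_TEST_* flags purely through the environment, with anything unset falling back to the defaults echoed below.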
11:33:27 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:23:09.163 11:33:27 -- common/autotest_common.sh@56 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:09.163 11:33:27 -- common/autotest_common.sh@58 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:23:09.163 11:33:27 -- common/autotest_common.sh@60 -- # : 1 00:23:09.163 11:33:27 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:09.163 11:33:27 -- common/autotest_common.sh@62 -- # : 1 00:23:09.163 11:33:27 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:23:09.163 11:33:27 -- common/autotest_common.sh@64 -- # : 00:23:09.163 11:33:27 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:23:09.163 11:33:27 -- common/autotest_common.sh@66 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:23:09.163 11:33:27 -- common/autotest_common.sh@68 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:23:09.163 11:33:27 -- common/autotest_common.sh@70 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:23:09.163 11:33:27 -- common/autotest_common.sh@72 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:09.163 11:33:27 -- common/autotest_common.sh@74 -- # : 1 00:23:09.163 11:33:27 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:23:09.163 11:33:27 -- common/autotest_common.sh@76 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:23:09.163 11:33:27 -- common/autotest_common.sh@78 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:23:09.163 11:33:27 -- common/autotest_common.sh@80 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:23:09.163 11:33:27 -- common/autotest_common.sh@82 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:23:09.163 11:33:27 -- common/autotest_common.sh@84 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:23:09.163 11:33:27 -- common/autotest_common.sh@86 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:23:09.163 11:33:27 -- common/autotest_common.sh@88 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:23:09.163 11:33:27 -- common/autotest_common.sh@90 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:09.163 11:33:27 -- common/autotest_common.sh@92 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:23:09.163 11:33:27 -- common/autotest_common.sh@94 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:23:09.163 11:33:27 -- common/autotest_common.sh@96 -- # : rdma 00:23:09.163 11:33:27 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:09.163 11:33:27 -- common/autotest_common.sh@98 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:23:09.163 11:33:27 -- common/autotest_common.sh@100 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:23:09.163 11:33:27 -- common/autotest_common.sh@102 -- # : 1 
00:23:09.163 11:33:27 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:23:09.163 11:33:27 -- common/autotest_common.sh@104 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:23:09.163 11:33:27 -- common/autotest_common.sh@106 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:23:09.163 11:33:27 -- common/autotest_common.sh@108 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:23:09.163 11:33:27 -- common/autotest_common.sh@110 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:23:09.163 11:33:27 -- common/autotest_common.sh@112 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:09.163 11:33:27 -- common/autotest_common.sh@114 -- # : 1 00:23:09.163 11:33:27 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:23:09.163 11:33:27 -- common/autotest_common.sh@116 -- # : 1 00:23:09.163 11:33:27 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:23:09.163 11:33:27 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:23:09.163 11:33:27 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:09.163 11:33:27 -- common/autotest_common.sh@120 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:23:09.163 11:33:27 -- common/autotest_common.sh@122 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:23:09.163 11:33:27 -- common/autotest_common.sh@124 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:23:09.163 11:33:27 -- common/autotest_common.sh@126 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:23:09.163 11:33:27 -- common/autotest_common.sh@128 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:23:09.163 11:33:27 -- common/autotest_common.sh@130 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:23:09.163 11:33:27 -- common/autotest_common.sh@132 -- # : v23.11 00:23:09.163 11:33:27 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:23:09.163 11:33:27 -- common/autotest_common.sh@134 -- # : true 00:23:09.163 11:33:27 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:23:09.163 11:33:27 -- common/autotest_common.sh@136 -- # : 1 00:23:09.163 11:33:27 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:23:09.163 11:33:27 -- common/autotest_common.sh@138 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:23:09.163 11:33:27 -- common/autotest_common.sh@140 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:23:09.163 11:33:27 -- common/autotest_common.sh@142 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:23:09.163 11:33:27 -- common/autotest_common.sh@144 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:23:09.163 11:33:27 -- common/autotest_common.sh@146 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:23:09.163 11:33:27 -- common/autotest_common.sh@148 -- # : 00:23:09.163 11:33:27 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:23:09.163 
11:33:27 -- common/autotest_common.sh@150 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:23:09.163 11:33:27 -- common/autotest_common.sh@152 -- # : 0 00:23:09.163 11:33:27 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:23:09.163 11:33:27 -- common/autotest_common.sh@154 -- # : 0 00:23:09.164 11:33:27 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:23:09.164 11:33:27 -- common/autotest_common.sh@156 -- # : 0 00:23:09.164 11:33:27 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:23:09.164 11:33:27 -- common/autotest_common.sh@158 -- # : 0 00:23:09.164 11:33:27 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:23:09.164 11:33:27 -- common/autotest_common.sh@160 -- # : 0 00:23:09.164 11:33:27 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:23:09.164 11:33:27 -- common/autotest_common.sh@163 -- # : 00:23:09.164 11:33:27 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:23:09.164 11:33:27 -- common/autotest_common.sh@165 -- # : 0 00:23:09.164 11:33:27 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:23:09.164 11:33:27 -- common/autotest_common.sh@167 -- # : 0 00:23:09.164 11:33:27 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:09.164 11:33:27 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:09.164 11:33:27 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:09.164 11:33:27 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:09.164 
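[Editor's note] The LD_LIBRARY_PATH and PYTHONPATH exports just traced concatenate the SPDK, DPDK, and libvfio-user library directories onto whatever is already set, which is why the same directories appear four times over by this point: every re-source of the file appends again. That is harmless for lookup, but the usual tidy alternative is a prepend-once helper; a hypothetical sketch (prepend_path is not a helper in this repo):

    # Prepend a directory to a :-separated path variable, skipping duplicates.
    prepend_path() {
        local var=$1 dir=$2
        case ":${!var}:" in
            *":$dir:"*) ;;                                   # already present
            *) printf -v "$var" '%s' "$dir${!var:+:${!var}}"
               export "$var" ;;
        esac
    }

    prepend_path LD_LIBRARY_PATH /home/vagrant/spdk_repo/spdk/build/lib
    prepend_path LD_LIBRARY_PATH /home/vagrant/spdk_repo/dpdk/build/lib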
11:33:27 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:09.164 11:33:27 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:09.164 11:33:27 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:09.164 11:33:27 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:23:09.164 11:33:27 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:09.164 11:33:27 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:09.164 11:33:27 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:09.164 11:33:27 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:09.164 11:33:27 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:09.164 11:33:27 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:23:09.164 11:33:27 -- common/autotest_common.sh@196 -- # cat 00:23:09.164 11:33:27 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:23:09.164 11:33:27 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:09.164 11:33:27 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:09.164 11:33:27 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:09.164 11:33:27 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:09.164 11:33:27 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:23:09.164 11:33:27 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:23:09.164 11:33:27 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:09.164 11:33:27 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:09.164 11:33:27 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:09.164 11:33:27 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:09.164 11:33:27 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:23:09.164 11:33:27 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:23:09.164 11:33:27 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:09.164 11:33:27 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:09.164 11:33:27 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:09.164 11:33:27 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:09.164 11:33:27 -- common/autotest_common.sh@245 -- # export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:09.164 11:33:27 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:09.164 11:33:27 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:23:09.164 11:33:27 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:23:09.164 11:33:27 -- common/autotest_common.sh@249 -- # _LCOV= 00:23:09.164 11:33:27 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:23:09.164 11:33:27 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:23:09.164 11:33:27 -- common/autotest_common.sh@255 -- # lcov_opt= 00:23:09.164 11:33:27 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:23:09.164 11:33:27 -- common/autotest_common.sh@259 -- # export valgrind= 00:23:09.164 11:33:27 -- common/autotest_common.sh@259 -- # valgrind= 00:23:09.164 11:33:27 -- common/autotest_common.sh@265 -- # uname -s 00:23:09.164 11:33:27 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:23:09.164 11:33:27 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:23:09.164 11:33:27 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:23:09.164 11:33:27 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:23:09.164 11:33:27 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@275 -- # MAKE=make 00:23:09.164 11:33:27 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:23:09.164 11:33:27 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:23:09.164 11:33:27 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:23:09.164 11:33:27 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:09.164 11:33:27 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:23:09.164 11:33:27 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:23:09.164 11:33:27 -- common/autotest_common.sh@319 -- # [[ -z 97160 ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@319 -- # kill -0 97160 00:23:09.164 11:33:27 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:23:09.164 11:33:27 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:23:09.164 11:33:27 -- common/autotest_common.sh@332 -- # local mount target_dir 00:23:09.164 11:33:27 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:23:09.164 11:33:27 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:23:09.164 11:33:27 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:23:09.164 11:33:27 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:23:09.164 11:33:27 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.KHPdgJ 00:23:09.164 11:33:27 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:09.164 11:33:27 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:23:09.164 11:33:27 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.KHPdgJ/tests/interrupt /tmp/spdk.KHPdgJ 00:23:09.164 11:33:27 -- 
common/autotest_common.sh@359 -- # requested_size=2214592512 00:23:09.164 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.164 11:33:27 -- common/autotest_common.sh@328 -- # df -T 00:23:09.164 11:33:27 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=1249312768 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254027264 00:23:09.164 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=4714496 00:23:09.164 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=9056210944 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19681529856 00:23:09.164 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=10608541696 00:23:09.164 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=6268858368 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6270115840 00:23:09.164 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:23:09.164 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:09.164 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:23:09.164 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:23:09.164 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:23:09.165 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda16 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=777306112 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=923156480 00:23:09.165 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=81207296 00:23:09.165 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=103000064 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:23:09.165 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=6395904 00:23:09.165 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:09.165 11:33:27 -- 
common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=1254010880 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254023168 00:23:09.165 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:23:09.165 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:23:09.165 11:33:27 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # avails["$mount"]=98302484480 00:23:09.165 11:33:27 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:23:09.165 11:33:27 -- common/autotest_common.sh@364 -- # uses["$mount"]=1400295424 00:23:09.165 11:33:27 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:09.165 11:33:27 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:23:09.165 * Looking for test storage... 00:23:09.165 11:33:27 -- common/autotest_common.sh@369 -- # local target_space new_size 00:23:09.165 11:33:27 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:23:09.165 11:33:27 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:09.165 11:33:27 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:09.165 11:33:27 -- common/autotest_common.sh@373 -- # mount=/ 00:23:09.165 11:33:27 -- common/autotest_common.sh@375 -- # target_space=9056210944 00:23:09.165 11:33:27 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:23:09.165 11:33:27 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:23:09.165 11:33:27 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:23:09.165 11:33:27 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:23:09.165 11:33:27 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:23:09.165 11:33:27 -- common/autotest_common.sh@382 -- # new_size=12823134208 00:23:09.165 11:33:27 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:09.165 11:33:27 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:09.165 11:33:27 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:09.165 11:33:27 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:09.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:09.165 11:33:27 -- common/autotest_common.sh@390 -- # return 0 00:23:09.165 11:33:27 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:23:09.165 11:33:27 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:23:09.165 11:33:27 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:09.165 11:33:27 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:09.165 11:33:27 -- common/autotest_common.sh@1682 -- # true 00:23:09.165 11:33:27 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:23:09.165 11:33:27 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:09.165 11:33:27 -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/13 ]] 00:23:09.165 11:33:27 -- common/autotest_common.sh@27 -- # exec 00:23:09.165 11:33:27 -- common/autotest_common.sh@29 -- # exec 00:23:09.165 11:33:27 -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:09.165 11:33:27 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:23:09.165 11:33:27 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:09.165 11:33:27 -- common/autotest_common.sh@18 -- # set -x 00:23:09.165 11:33:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:09.165 11:33:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:09.165 11:33:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:09.165 11:33:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:09.165 11:33:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:09.165 11:33:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:09.165 11:33:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:09.165 11:33:27 -- scripts/common.sh@335 -- # IFS=.-: 00:23:09.165 11:33:27 -- scripts/common.sh@335 -- # read -ra ver1 00:23:09.165 11:33:27 -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.165 11:33:27 -- scripts/common.sh@336 -- # read -ra ver2 00:23:09.165 11:33:27 -- scripts/common.sh@337 -- # local 'op=<' 00:23:09.165 11:33:27 -- scripts/common.sh@339 -- # ver1_l=2 00:23:09.165 11:33:27 -- scripts/common.sh@340 -- # ver2_l=1 00:23:09.165 11:33:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:09.165 11:33:27 -- scripts/common.sh@343 -- # case "$op" in 00:23:09.165 11:33:27 -- scripts/common.sh@344 -- # : 1 00:23:09.165 11:33:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:09.165 11:33:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:09.165 11:33:27 -- scripts/common.sh@364 -- # decimal 1 00:23:09.165 11:33:27 -- scripts/common.sh@352 -- # local d=1 00:23:09.165 11:33:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.165 11:33:27 -- scripts/common.sh@354 -- # echo 1 00:23:09.165 11:33:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:09.165 11:33:27 -- scripts/common.sh@365 -- # decimal 2 00:23:09.165 11:33:27 -- scripts/common.sh@352 -- # local d=2 00:23:09.165 11:33:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.165 11:33:27 -- scripts/common.sh@354 -- # echo 2 00:23:09.165 11:33:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:09.165 11:33:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:09.165 11:33:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:09.165 11:33:27 -- scripts/common.sh@367 -- # return 0 00:23:09.165 11:33:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.165 11:33:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.165 --rc genhtml_branch_coverage=1 00:23:09.165 --rc genhtml_function_coverage=1 00:23:09.165 --rc genhtml_legend=1 00:23:09.165 --rc geninfo_all_blocks=1 00:23:09.165 --rc geninfo_unexecuted_blocks=1 00:23:09.165 00:23:09.165 ' 00:23:09.165 11:33:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.165 --rc genhtml_branch_coverage=1 00:23:09.165 --rc genhtml_function_coverage=1 00:23:09.165 --rc genhtml_legend=1 00:23:09.165 --rc geninfo_all_blocks=1 00:23:09.165 --rc geninfo_unexecuted_blocks=1 00:23:09.165 00:23:09.165 ' 00:23:09.165 11:33:27 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.165 --rc genhtml_branch_coverage=1 00:23:09.165 --rc genhtml_function_coverage=1 00:23:09.165 --rc genhtml_legend=1 00:23:09.165 --rc geninfo_all_blocks=1 00:23:09.165 --rc geninfo_unexecuted_blocks=1 00:23:09.165 00:23:09.165 ' 00:23:09.165 11:33:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.165 --rc genhtml_branch_coverage=1 00:23:09.165 --rc genhtml_function_coverage=1 00:23:09.165 --rc genhtml_legend=1 00:23:09.165 --rc geninfo_all_blocks=1 00:23:09.165 --rc geninfo_unexecuted_blocks=1 00:23:09.165 00:23:09.165 ' 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:09.165 11:33:27 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:09.165 11:33:27 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:09.165 11:33:27 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=97215 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 97215 /var/tmp/spdk.sock 00:23:09.165 11:33:27 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:09.165 11:33:27 -- common/autotest_common.sh@829 -- # '[' -z 97215 ']' 00:23:09.165 11:33:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.165 11:33:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.165 11:33:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.165 11:33:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.165 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:09.165 [2024-11-26 11:33:27.373208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:09.166 [2024-11-26 11:33:27.373395] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97215 ] 00:23:09.425 [2024-11-26 11:33:27.536438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:09.425 [2024-11-26 11:33:27.577024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.425 [2024-11-26 11:33:27.577051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.425 [2024-11-26 11:33:27.577129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.425 [2024-11-26 11:33:27.621561] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:10.360 11:33:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.360 11:33:28 -- common/autotest_common.sh@862 -- # return 0 00:23:10.360 11:33:28 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:23:10.361 11:33:28 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:10.619 Malloc0 00:23:10.620 Malloc1 00:23:10.620 Malloc2 00:23:10.620 11:33:28 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:23:10.620 11:33:28 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:10.620 11:33:28 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:10.620 11:33:28 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:10.620 5000+0 records in 00:23:10.620 5000+0 records out 00:23:10.620 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0169217 s, 605 MB/s 00:23:10.620 11:33:28 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:10.879 AIO0 00:23:10.879 11:33:28 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 97215 00:23:10.879 11:33:28 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 97215 without_thd 00:23:10.879 11:33:28 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=97215 00:23:10.879 11:33:28 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:23:10.879 11:33:28 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:10.879 11:33:28 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:10.879 11:33:28 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:23:10.879 11:33:28 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:10.879 11:33:28 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:23:10.879 11:33:28 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:10.879 11:33:28 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:10.879 11:33:28 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:23:10.879 11:33:29 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:10.879 11:33:29 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
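For readers following the trace: reactor_get_thread_ids (invoked just above for 0x1 and, next, for 0x4) maps a reactor's cpumask to the SPDK thread ids currently scheduled on it, which is how thd0_ids and thd2_ids get populated. A minimal sketch reconstructed from this xtrace, assuming $rpc_py is set as in interrupt_common.sh; the shipped helper may differ in detail:

    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
        # normalize the mask before handing it to jq: the trace shows 0x1
        # compared as "1" and 0x4 as "4", i.e. no 0x prefix in the RPC output
        reactor_cpumask=$((reactor_cpumask))
        "$rpc_py" thread_get_stats |
            jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }

    # usage, as in the trace: thd0_ids=($(reactor_get_thread_ids $r0_mask))

Note the empty echo for 0x4 below: at that point no spdk_thread is scheduled on reactor 2 yet, so thd2_ids stays empty.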
00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:10.879 11:33:29 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:11.139 11:33:29 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:23:11.139 11:33:29 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:11.139 spdk_thread ids are 1 on reactor0. 00:23:11.139 11:33:29 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:11.139 11:33:29 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:11.139 11:33:29 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97215 0 00:23:11.139 11:33:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97215 0 idle 00:23:11.139 11:33:29 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:11.139 11:33:29 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:11.139 11:33:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:11.139 11:33:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:11.140 11:33:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:11.140 11:33:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:11.140 11:33:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:11.140 11:33:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:11.140 11:33:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:11.140 11:33:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97215 root 20 0 20.1t 79872 27264 S 10.0 0.7 0:00.26 reactor_0' 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@48 -- # echo 97215 root 20 0 20.1t 79872 27264 S 10.0 0.7 0:00.26 reactor_0 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=10.0 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=10 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@53 -- # [[ 10 -gt 30 ]] 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:11.407 11:33:29 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:11.407 11:33:29 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97215 1 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97215 1 idle 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:11.407 11:33:29 
-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:11.407 11:33:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97222 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_1' 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@48 -- # echo 97222 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_1 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:11.667 11:33:29 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:11.667 11:33:29 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97215 2 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97215 2 idle 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:11.667 11:33:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97223 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_2' 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@48 -- # echo 97223 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_2 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:11.926 11:33:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:11.926 11:33:29 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:23:11.926 11:33:29 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:23:11.926 11:33:29 -- 
interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:23:12.185 [2024-11-26 11:33:30.202416] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:12.186 11:33:30 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:12.445 [2024-11-26 11:33:30.450070] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:12.445 [2024-11-26 11:33:30.450978] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:12.445 11:33:30 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:12.445 [2024-11-26 11:33:30.637684] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:23:12.445 [2024-11-26 11:33:30.638393] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:12.445 11:33:30 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:12.445 11:33:30 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 97215 0 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 97215 0 busy 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:12.445 11:33:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97215 root 20 0 20.1t 83456 27264 R 90.9 0.7 0:00.69 reactor_0' 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@48 -- # echo 97215 root 20 0 20.1t 83456 27264 R 90.9 0.7 0:00.69 reactor_0 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=90.9 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=90 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@51 -- # [[ 90 -lt 70 ]] 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:12.704 11:33:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:12.704 11:33:30 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:12.705 11:33:30 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 97215 2 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 97215 2 busy 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:12.705 11:33:30 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:12.705 11:33:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97223 root 20 0 20.1t 83456 27264 R 99.9 0.7 0:00.44 reactor_2' 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@48 -- # echo 97223 root 20 0 20.1t 83456 27264 R 99.9 0.7 0:00.44 reactor_2 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:12.963 11:33:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:12.963 11:33:31 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:13.222 [2024-11-26 11:33:31.337787] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:13.222 [2024-11-26 11:33:31.338242] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:13.222 11:33:31 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:23:13.222 11:33:31 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 97215 2 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97215 2 idle 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:13.222 11:33:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97223 root 20 0 20.1t 83456 27264 S 0.0 0.7 0:00.69 reactor_2' 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@48 -- # echo 97223 root 20 0 20.1t 83456 27264 S 0.0 0.7 0:00.69 reactor_2 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:13.481 11:33:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:13.481 11:33:31 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:13.740 [2024-11-26 11:33:31.789695] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:23:13.740 [2024-11-26 11:33:31.790359] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:13.740 11:33:31 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:23:13.740 11:33:31 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:23:13.740 11:33:31 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:23:13.999 [2024-11-26 11:33:32.034218] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:13.999 11:33:32 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 97215 0 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97215 0 idle 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@33 -- # local pid=97215 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97215 -w 256 00:23:13.999 11:33:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97215 root 20 0 20.1t 83584 27264 S 10.0 0.7 0:01.62 reactor_0' 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@48 -- # echo 97215 root 20 0 20.1t 83584 27264 S 10.0 0.7 0:01.62 reactor_0 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=10.0 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=10 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@53 -- # [[ 10 -gt 30 ]] 00:23:14.258 11:33:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:14.258 11:33:32 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:23:14.258 11:33:32 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:23:14.258 11:33:32 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:23:14.258 11:33:32 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 97215 00:23:14.258 11:33:32 -- 
common/autotest_common.sh@936 -- # '[' -z 97215 ']' 00:23:14.258 11:33:32 -- common/autotest_common.sh@940 -- # kill -0 97215 00:23:14.258 11:33:32 -- common/autotest_common.sh@941 -- # uname 00:23:14.258 11:33:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:14.258 11:33:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97215 00:23:14.258 11:33:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:14.258 11:33:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:14.258 killing process with pid 97215 00:23:14.258 11:33:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97215' 00:23:14.258 11:33:32 -- common/autotest_common.sh@955 -- # kill 97215 00:23:14.258 11:33:32 -- common/autotest_common.sh@960 -- # wait 97215 00:23:14.517 11:33:32 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:23:14.517 11:33:32 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=97351 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.517 11:33:32 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 97351 /var/tmp/spdk.sock 00:23:14.517 11:33:32 -- common/autotest_common.sh@829 -- # '[' -z 97351 ']' 00:23:14.517 11:33:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.517 11:33:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.517 11:33:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.517 11:33:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.517 11:33:32 -- common/autotest_common.sh@10 -- # set +x 00:23:14.517 [2024-11-26 11:33:32.570427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.517 [2024-11-26 11:33:32.570782] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97351 ] 00:23:14.518 [2024-11-26 11:33:32.735659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:14.778 [2024-11-26 11:33:32.770898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.778 [2024-11-26 11:33:32.770997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.778 [2024-11-26 11:33:32.771058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.778 [2024-11-26 11:33:32.813013] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
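The busy/idle assertions in both halves of this test reduce to one batch sample of top and a threshold on the %CPU column (field 9). A condensed sketch of reactor_is_busy_or_idle as reconstructed from this trace; the real helper additionally retries the sample, per the j = 10 countdown visible in the xtrace:

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local top_reactor cpu_rate
        # one batch (-b) snapshot of per-thread (-H) usage for this pid
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                # 99.9 -> 99, 10.0 -> 10
        if [[ $state == busy ]]; then
            [[ $cpu_rate -lt 70 ]] && return 1 # polling reactor: expect ~100%
        else
            [[ $cpu_rate -gt 30 ]] && return 1 # interrupt mode: expect ~0%
        fi
        return 0
    }

The asymmetric thresholds (>= 70 counts as busy, <= 30 as idle) leave headroom for scheduler noise, e.g. the 90.9% busy and 10.0% idle samples seen in this run.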
00:23:15.346 11:33:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.346 11:33:33 -- common/autotest_common.sh@862 -- # return 0 00:23:15.346 11:33:33 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:23:15.346 11:33:33 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.606 Malloc0 00:23:15.606 Malloc1 00:23:15.606 Malloc2 00:23:15.606 11:33:33 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:23:15.606 11:33:33 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:15.606 11:33:33 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:15.606 11:33:33 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:15.606 5000+0 records in 00:23:15.606 5000+0 records out 00:23:15.606 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0214674 s, 477 MB/s 00:23:15.606 11:33:33 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:15.865 AIO0 00:23:15.865 11:33:34 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 97351 00:23:15.865 11:33:34 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 97351 00:23:15.865 11:33:34 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=97351 00:23:15.865 11:33:34 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:23:15.865 11:33:34 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:15.865 11:33:34 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:15.865 11:33:34 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:23:15.865 11:33:34 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:15.865 11:33:34 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:23:15.865 11:33:34 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:15.865 11:33:34 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:15.865 11:33:34 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:23:16.125 11:33:34 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:16.125 11:33:34 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:16.125 11:33:34 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:23:16.385 spdk_thread ids are 1 on reactor0. 
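As in the first half of the test, the setup_bdev_aio step traced just above backs an AIO bdev with a 10 MB file of zeroes. A sketch reconstructed from the dd and bdev_aio_create calls in this run, with $testdir standing for test/interrupt; the uname guard skips this on FreeBSD, presumably because the AIO bdev depends on Linux libaio:

    setup_bdev_aio() {
        if [[ $(uname -s) != "FreeBSD" ]]; then
            # 5000 blocks of 2048 bytes = 10240000 bytes, matching the
            # dd summary above
            dd if=/dev/zero of="$testdir/aiofile" bs=2048 count=5000
            "$rpc_py" bdev_aio_create "$testdir/aiofile" AIO0 2048
        fi
    }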
00:23:16.385 11:33:34 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:16.385 11:33:34 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:16.385 11:33:34 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:16.385 11:33:34 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97351 0 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97351 0 idle 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:16.385 11:33:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97351 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.25 reactor_0' 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@48 -- # echo 97351 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.25 reactor_0 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:16.644 11:33:34 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:16.644 11:33:34 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97351 1 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97351 1 idle 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:23:16.644 11:33:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97361 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_1' 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@48 -- # echo 97361 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_1 00:23:16.904 11:33:34 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:16.904 11:33:34 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:16.904 11:33:34 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 97351 2 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97351 2 idle 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:16.904 11:33:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97362 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_2' 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@48 -- # echo 97362 root 20 0 20.1t 79872 27264 S 0.0 0.7 0:00.00 reactor_2 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:17.163 11:33:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:17.163 11:33:35 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:23:17.163 11:33:35 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:17.422 [2024-11-26 11:33:35.421330] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:17.422 [2024-11-26 11:33:35.421582] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
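The mode switches themselves are issued over the Unix-socket RPC: the interrupt_tgt example app registers reactor_set_interrupt_mode through an rpc.py plugin (examples/interrupt_tgt is on PYTHONPATH, exported near the top of this test). Usage exactly as exercised in this run:

    # disable interrupt mode on reactor 2: it switches to polling and
    # shows up near 100% CPU in the next top sample
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d

    # re-enable interrupt mode on reactor 2: CPU falls back toward 0%
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2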
00:23:17.422 [2024-11-26 11:33:35.421941] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:17.422 11:33:35 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:17.681 [2024-11-26 11:33:35.665173] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:23:17.681 [2024-11-26 11:33:35.665568] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:17.681 11:33:35 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:17.681 11:33:35 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 97351 0 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 97351 0 busy 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97351 root 20 0 20.1t 83456 27264 R 99.9 0.7 0:00.74 reactor_0' 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@48 -- # echo 97351 root 20 0 20.1t 83456 27264 R 99.9 0.7 0:00.74 reactor_0 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:17.681 11:33:35 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:17.681 11:33:35 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 97351 2 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 97351 2 busy 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:17.681 11:33:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97362 root 20 
0 20.1t 83456 27264 R 99.9 0.7 0:00.44 reactor_2' 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@48 -- # echo 97362 root 20 0 20.1t 83456 27264 R 99.9 0.7 0:00.44 reactor_2 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:17.939 11:33:36 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:17.940 11:33:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:17.940 11:33:36 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:18.198 [2024-11-26 11:33:36.369459] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:18.198 [2024-11-26 11:33:36.369723] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:18.198 11:33:36 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:23:18.198 11:33:36 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 97351 2 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97351 2 idle 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:18.198 11:33:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97362 root 20 0 20.1t 83456 27264 S 0.0 0.7 0:00.69 reactor_2' 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@48 -- # echo 97362 root 20 0 20.1t 83456 27264 S 0.0 0.7 0:00.69 reactor_2 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:18.457 11:33:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:18.457 11:33:36 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:18.716 [2024-11-26 11:33:36.777504] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on 
reactor 0. 00:23:18.716 [2024-11-26 11:33:36.777853] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:23:18.716 [2024-11-26 11:33:36.777883] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:18.716 11:33:36 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:23:18.716 11:33:36 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 97351 0 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 97351 0 idle 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@33 -- # local pid=97351 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 97351 -w 256 00:23:18.716 11:33:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 97351 root 20 0 20.1t 83584 27264 S 0.0 0.7 0:01.62 reactor_0' 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@48 -- # echo 97351 root 20 0 20.1t 83584 27264 S 0.0 0.7 0:01.62 reactor_0 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:18.976 11:33:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:18.976 11:33:37 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:23:18.976 11:33:37 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:23:18.976 11:33:37 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:18.976 11:33:37 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 97351 00:23:18.976 11:33:37 -- common/autotest_common.sh@936 -- # '[' -z 97351 ']' 00:23:18.976 11:33:37 -- common/autotest_common.sh@940 -- # kill -0 97351 00:23:18.976 11:33:37 -- common/autotest_common.sh@941 -- # uname 00:23:18.976 11:33:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.976 11:33:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97351 00:23:18.976 11:33:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:18.976 11:33:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:18.976 killing process with pid 97351 00:23:18.976 11:33:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97351' 00:23:18.976 11:33:37 -- common/autotest_common.sh@955 -- # kill 97351 00:23:18.976 11:33:37 -- common/autotest_common.sh@960 -- # wait 97351 00:23:19.235 11:33:37 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:23:19.235 11:33:37 
-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:23:19.235 ************************************ 00:23:19.235 END TEST reactor_set_interrupt 00:23:19.235 ************************************ 00:23:19.235 00:23:19.235 real 0m10.377s 00:23:19.235 user 0m9.658s 00:23:19.235 sys 0m1.607s 00:23:19.235 11:33:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:19.235 11:33:37 -- common/autotest_common.sh@10 -- # set +x 00:23:19.235 11:33:37 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:23:19.235 11:33:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:19.235 11:33:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.235 11:33:37 -- common/autotest_common.sh@10 -- # set +x 00:23:19.235 ************************************ 00:23:19.235 START TEST reap_unregistered_poller 00:23:19.235 ************************************ 00:23:19.235 11:33:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:23:19.235 * Looking for test storage... 00:23:19.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.235 11:33:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:19.235 11:33:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:19.235 11:33:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:19.497 11:33:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:19.497 11:33:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:19.497 11:33:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:19.497 11:33:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:19.497 11:33:37 -- scripts/common.sh@335 -- # IFS=.-: 00:23:19.497 11:33:37 -- scripts/common.sh@335 -- # read -ra ver1 00:23:19.497 11:33:37 -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.497 11:33:37 -- scripts/common.sh@336 -- # read -ra ver2 00:23:19.497 11:33:37 -- scripts/common.sh@337 -- # local 'op=<' 00:23:19.497 11:33:37 -- scripts/common.sh@339 -- # ver1_l=2 00:23:19.497 11:33:37 -- scripts/common.sh@340 -- # ver2_l=1 00:23:19.497 11:33:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:19.497 11:33:37 -- scripts/common.sh@343 -- # case "$op" in 00:23:19.497 11:33:37 -- scripts/common.sh@344 -- # : 1 00:23:19.497 11:33:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:19.497 11:33:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.497 11:33:37 -- scripts/common.sh@364 -- # decimal 1 00:23:19.497 11:33:37 -- scripts/common.sh@352 -- # local d=1 00:23:19.497 11:33:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.497 11:33:37 -- scripts/common.sh@354 -- # echo 1 00:23:19.497 11:33:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:19.497 11:33:37 -- scripts/common.sh@365 -- # decimal 2 00:23:19.497 11:33:37 -- scripts/common.sh@352 -- # local d=2 00:23:19.497 11:33:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.497 11:33:37 -- scripts/common.sh@354 -- # echo 2 00:23:19.497 11:33:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:19.497 11:33:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:19.497 11:33:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:19.497 11:33:37 -- scripts/common.sh@367 -- # return 0 00:23:19.497 11:33:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.497 11:33:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:19.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.497 --rc genhtml_branch_coverage=1 00:23:19.497 --rc genhtml_function_coverage=1 00:23:19.497 --rc genhtml_legend=1 00:23:19.497 --rc geninfo_all_blocks=1 00:23:19.497 --rc geninfo_unexecuted_blocks=1 00:23:19.497 00:23:19.497 ' 00:23:19.497 11:33:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:19.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.497 --rc genhtml_branch_coverage=1 00:23:19.497 --rc genhtml_function_coverage=1 00:23:19.497 --rc genhtml_legend=1 00:23:19.497 --rc geninfo_all_blocks=1 00:23:19.497 --rc geninfo_unexecuted_blocks=1 00:23:19.497 00:23:19.497 ' 00:23:19.497 11:33:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:19.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.498 --rc genhtml_branch_coverage=1 00:23:19.498 --rc genhtml_function_coverage=1 00:23:19.498 --rc genhtml_legend=1 00:23:19.498 --rc geninfo_all_blocks=1 00:23:19.498 --rc geninfo_unexecuted_blocks=1 00:23:19.498 00:23:19.498 ' 00:23:19.498 11:33:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:19.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.498 --rc genhtml_branch_coverage=1 00:23:19.498 --rc genhtml_function_coverage=1 00:23:19.498 --rc genhtml_legend=1 00:23:19.498 --rc geninfo_all_blocks=1 00:23:19.498 --rc geninfo_unexecuted_blocks=1 00:23:19.498 00:23:19.498 ' 00:23:19.498 11:33:37 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:23:19.498 11:33:37 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:23:19.498 11:33:37 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.498 11:33:37 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.498 11:33:37 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
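Both test scripts gate their lcov options on lt 1.15 2 from scripts/common.sh, i.e. "is the installed lcov (1.15 here) older than 2?". A compact re-implementation of the comparison logic traced above, assuming the same helper names; components are split on '.', '-' and ':' and compared numerically left to right, with missing or non-numeric parts treated as 0. Details of the shipped scripts/common.sh may differ:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0   # non-numeric components compare as 0
        echo "$d"
    }

    lt() {   # lt A B -> success (0) when version A < version B
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        local v a b
        for ((v = 0; v < len; v++)); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    # lt 1.15 2 -> 0 in this run, so LCOV_OPTS carries the 1.x-style
    # --rc lcov_branch_coverage / lcov_function_coverage flags seen above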
00:23:19.498 11:33:37 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:19.498 11:33:37 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:19.498 11:33:37 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:19.498 11:33:37 -- common/autotest_common.sh@34 -- # set -e 00:23:19.498 11:33:37 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:19.498 11:33:37 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:19.498 11:33:37 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:19.498 11:33:37 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:19.498 11:33:37 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:19.498 11:33:37 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:19.498 11:33:37 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:19.498 11:33:37 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:19.498 11:33:37 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:19.498 11:33:37 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:19.498 11:33:37 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:19.498 11:33:37 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:19.498 11:33:37 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:19.498 11:33:37 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:19.498 11:33:37 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:19.498 11:33:37 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:19.498 11:33:37 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:19.498 11:33:37 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:19.498 11:33:37 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:19.498 11:33:37 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:19.498 11:33:37 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:19.498 11:33:37 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:19.498 11:33:37 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:19.498 11:33:37 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:19.498 11:33:37 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:19.498 11:33:37 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:19.498 11:33:37 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:19.498 11:33:37 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:19.498 11:33:37 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:19.498 11:33:37 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:19.498 11:33:37 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:19.498 11:33:37 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:23:19.498 11:33:37 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:19.498 11:33:37 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:19.498 11:33:37 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:19.498 11:33:37 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:19.498 11:33:37 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:19.498 11:33:37 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:19.498 11:33:37 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:19.498 11:33:37 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:23:19.498 11:33:37 
-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:19.498 11:33:37 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:19.498 11:33:37 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:19.498 11:33:37 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:19.498 11:33:37 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:23:19.498 11:33:37 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:19.498 11:33:37 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:23:19.498 11:33:37 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:19.498 11:33:37 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:19.498 11:33:37 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:23:19.498 11:33:37 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:23:19.498 11:33:37 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:19.498 11:33:37 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:23:19.498 11:33:37 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:23:19.498 11:33:37 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:23:19.498 11:33:37 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:23:19.498 11:33:37 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:23:19.498 11:33:37 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:23:19.498 11:33:37 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:23:19.498 11:33:37 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:23:19.498 11:33:37 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:23:19.498 11:33:37 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:23:19.498 11:33:37 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:23:19.498 11:33:37 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:23:19.498 11:33:37 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.498 11:33:37 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:23:19.498 11:33:37 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:23:19.498 11:33:37 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:23:19.498 11:33:37 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:23:19.498 11:33:37 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:19.498 11:33:37 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:23:19.498 11:33:37 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:23:19.498 11:33:37 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:23:19.498 11:33:37 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:23:19.498 11:33:37 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:23:19.498 11:33:37 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:23:19.498 11:33:37 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:23:19.498 11:33:37 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:23:19.498 11:33:37 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:23:19.498 11:33:37 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:23:19.498 11:33:37 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:19.498 11:33:37 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:23:19.498 11:33:37 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:23:19.498 11:33:37 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:19.498 11:33:37 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:19.498 11:33:37 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:19.498 11:33:37 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:19.498 11:33:37 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:19.498 11:33:37 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:19.498 11:33:37 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:19.498 11:33:37 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:19.498 11:33:37 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:19.498 11:33:37 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:19.498 11:33:37 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:19.498 11:33:37 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:19.499 11:33:37 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:19.499 11:33:37 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:19.499 11:33:37 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:19.499 11:33:37 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:19.499 #define SPDK_CONFIG_H 00:23:19.499 #define SPDK_CONFIG_APPS 1 00:23:19.499 #define SPDK_CONFIG_ARCH native 00:23:19.499 #define SPDK_CONFIG_ASAN 1 00:23:19.499 #undef SPDK_CONFIG_AVAHI 00:23:19.499 #undef SPDK_CONFIG_CET 00:23:19.499 #define SPDK_CONFIG_COVERAGE 1 00:23:19.499 #define SPDK_CONFIG_CROSS_PREFIX 00:23:19.499 #undef SPDK_CONFIG_CRYPTO 00:23:19.499 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:19.499 #undef SPDK_CONFIG_CUSTOMOCF 00:23:19.499 #undef SPDK_CONFIG_DAOS 00:23:19.499 #define SPDK_CONFIG_DAOS_DIR 00:23:19.499 #define SPDK_CONFIG_DEBUG 1 00:23:19.499 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:19.499 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:23:19.499 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:23:19.499 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.499 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:19.499 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:19.499 #define SPDK_CONFIG_EXAMPLES 1 00:23:19.499 #undef SPDK_CONFIG_FC 00:23:19.499 #define SPDK_CONFIG_FC_PATH 00:23:19.499 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:19.499 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:19.499 #undef SPDK_CONFIG_FUSE 00:23:19.499 #undef SPDK_CONFIG_FUZZER 00:23:19.499 #define SPDK_CONFIG_FUZZER_LIB 00:23:19.499 #undef SPDK_CONFIG_GOLANG 00:23:19.499 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:19.499 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:19.499 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:19.499 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:19.499 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:19.499 #define SPDK_CONFIG_IDXD 1 00:23:19.499 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:19.499 #undef SPDK_CONFIG_IPSEC_MB 00:23:19.499 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:19.499 #define SPDK_CONFIG_ISAL 1 00:23:19.499 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:19.499 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:19.499 #define SPDK_CONFIG_LIBDIR 00:23:19.499 #undef SPDK_CONFIG_LTO 00:23:19.499 #define SPDK_CONFIG_MAX_LCORES 00:23:19.499 #define SPDK_CONFIG_NVME_CUSE 1 00:23:19.499 #undef SPDK_CONFIG_OCF 00:23:19.499 #define SPDK_CONFIG_OCF_PATH 00:23:19.499 #define 
SPDK_CONFIG_OPENSSL_PATH 00:23:19.499 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:19.499 #undef SPDK_CONFIG_PGO_USE 00:23:19.499 #define SPDK_CONFIG_PREFIX /usr/local 00:23:19.499 #define SPDK_CONFIG_RAID5F 1 00:23:19.499 #undef SPDK_CONFIG_RBD 00:23:19.499 #define SPDK_CONFIG_RDMA 1 00:23:19.499 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:19.499 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:19.499 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:19.499 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:19.499 #undef SPDK_CONFIG_SHARED 00:23:19.499 #undef SPDK_CONFIG_SMA 00:23:19.499 #define SPDK_CONFIG_TESTS 1 00:23:19.499 #undef SPDK_CONFIG_TSAN 00:23:19.499 #define SPDK_CONFIG_UBLK 1 00:23:19.499 #define SPDK_CONFIG_UBSAN 1 00:23:19.499 #define SPDK_CONFIG_UNIT_TESTS 1 00:23:19.499 #undef SPDK_CONFIG_URING 00:23:19.499 #define SPDK_CONFIG_URING_PATH 00:23:19.499 #undef SPDK_CONFIG_URING_ZNS 00:23:19.499 #undef SPDK_CONFIG_USDT 00:23:19.499 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:19.499 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:19.499 #undef SPDK_CONFIG_VFIO_USER 00:23:19.499 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:19.499 #define SPDK_CONFIG_VHOST 1 00:23:19.499 #define SPDK_CONFIG_VIRTIO 1 00:23:19.499 #undef SPDK_CONFIG_VTUNE 00:23:19.499 #define SPDK_CONFIG_VTUNE_DIR 00:23:19.499 #define SPDK_CONFIG_WERROR 1 00:23:19.499 #define SPDK_CONFIG_WPDK_DIR 00:23:19.499 #undef SPDK_CONFIG_XNVME 00:23:19.499 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:19.499 11:33:37 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:19.499 11:33:37 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.499 11:33:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.499 11:33:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.499 11:33:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.499 11:33:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:19.499 11:33:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:19.499 11:33:37 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:19.499 
11:33:37 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:19.499 11:33:37 -- paths/export.sh@6 -- # export PATH 00:23:19.499 11:33:37 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:19.499 11:33:37 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:19.499 11:33:37 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:19.499 11:33:37 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:19.499 11:33:37 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:19.499 11:33:37 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:19.499 11:33:37 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:19.499 11:33:37 -- pm/common@16 -- # TEST_TAG=N/A 00:23:19.499 11:33:37 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:19.499 11:33:37 -- common/autotest_common.sh@52 -- # : 1 00:23:19.499 11:33:37 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:23:19.499 11:33:37 -- common/autotest_common.sh@56 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:19.499 11:33:37 -- common/autotest_common.sh@58 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:23:19.499 11:33:37 -- common/autotest_common.sh@60 -- # : 1 00:23:19.499 11:33:37 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:19.499 11:33:37 -- common/autotest_common.sh@62 -- # : 1 00:23:19.499 11:33:37 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:23:19.499 11:33:37 -- common/autotest_common.sh@64 -- # : 00:23:19.499 11:33:37 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:23:19.499 11:33:37 -- common/autotest_common.sh@66 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:23:19.499 11:33:37 -- common/autotest_common.sh@68 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:23:19.499 11:33:37 -- common/autotest_common.sh@70 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:23:19.499 11:33:37 -- common/autotest_common.sh@72 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:19.499 11:33:37 -- common/autotest_common.sh@74 -- # : 1 00:23:19.499 11:33:37 -- common/autotest_common.sh@75 
-- # export SPDK_TEST_NVME 00:23:19.499 11:33:37 -- common/autotest_common.sh@76 -- # : 0 00:23:19.499 11:33:37 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:23:19.499 11:33:37 -- common/autotest_common.sh@78 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:23:19.500 11:33:37 -- common/autotest_common.sh@80 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:23:19.500 11:33:37 -- common/autotest_common.sh@82 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:23:19.500 11:33:37 -- common/autotest_common.sh@84 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:23:19.500 11:33:37 -- common/autotest_common.sh@86 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:23:19.500 11:33:37 -- common/autotest_common.sh@88 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:23:19.500 11:33:37 -- common/autotest_common.sh@90 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:19.500 11:33:37 -- common/autotest_common.sh@92 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:23:19.500 11:33:37 -- common/autotest_common.sh@94 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:23:19.500 11:33:37 -- common/autotest_common.sh@96 -- # : rdma 00:23:19.500 11:33:37 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:19.500 11:33:37 -- common/autotest_common.sh@98 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:23:19.500 11:33:37 -- common/autotest_common.sh@100 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:23:19.500 11:33:37 -- common/autotest_common.sh@102 -- # : 1 00:23:19.500 11:33:37 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:23:19.500 11:33:37 -- common/autotest_common.sh@104 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:23:19.500 11:33:37 -- common/autotest_common.sh@106 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:23:19.500 11:33:37 -- common/autotest_common.sh@108 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:23:19.500 11:33:37 -- common/autotest_common.sh@110 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:23:19.500 11:33:37 -- common/autotest_common.sh@112 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:19.500 11:33:37 -- common/autotest_common.sh@114 -- # : 1 00:23:19.500 11:33:37 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:23:19.500 11:33:37 -- common/autotest_common.sh@116 -- # : 1 00:23:19.500 11:33:37 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:23:19.500 11:33:37 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:23:19.500 11:33:37 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:19.500 11:33:37 -- common/autotest_common.sh@120 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:23:19.500 11:33:37 -- common/autotest_common.sh@122 -- # : 0 
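[annotation] The long run of `: 0` / `export SPDK_TEST_*` pairs through this stretch is the xtrace signature of bash's null-command defaulting idiom: `: "${VAR:=default}"` assigns only when VAR is unset or empty, and under `set -x` it traces as `: <value>` followed by the matching export. A hedged sketch of the pattern (the pairing with `: ${VAR:=...}` is inferred from the trace shape; the defaults shown are the values visible above and below this point):

    # Give each CI flag a default only when the environment hasn't set it,
    # then export it so child test scripts inherit the decision.
    : "${RUN_NIGHTLY:=1}";                 export RUN_NIGHTLY
    : "${SPDK_TEST_UNITTEST:=1}";          export SPDK_TEST_UNITTEST
    : "${SPDK_TEST_NVME:=1}";              export SPDK_TEST_NVME
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT

    # Under `set -x` each pair traces as, e.g., `: 1` then `export RUN_NIGHTLY`,
    # which is exactly the shape of the surrounding log lines.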
00:23:19.500 11:33:37 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:23:19.500 11:33:37 -- common/autotest_common.sh@124 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:23:19.500 11:33:37 -- common/autotest_common.sh@126 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:23:19.500 11:33:37 -- common/autotest_common.sh@128 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:23:19.500 11:33:37 -- common/autotest_common.sh@130 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:23:19.500 11:33:37 -- common/autotest_common.sh@132 -- # : v23.11 00:23:19.500 11:33:37 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:23:19.500 11:33:37 -- common/autotest_common.sh@134 -- # : true 00:23:19.500 11:33:37 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:23:19.500 11:33:37 -- common/autotest_common.sh@136 -- # : 1 00:23:19.500 11:33:37 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:23:19.500 11:33:37 -- common/autotest_common.sh@138 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:23:19.500 11:33:37 -- common/autotest_common.sh@140 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:23:19.500 11:33:37 -- common/autotest_common.sh@142 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:23:19.500 11:33:37 -- common/autotest_common.sh@144 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:23:19.500 11:33:37 -- common/autotest_common.sh@146 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:23:19.500 11:33:37 -- common/autotest_common.sh@148 -- # : 00:23:19.500 11:33:37 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:23:19.500 11:33:37 -- common/autotest_common.sh@150 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:23:19.500 11:33:37 -- common/autotest_common.sh@152 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:23:19.500 11:33:37 -- common/autotest_common.sh@154 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:23:19.500 11:33:37 -- common/autotest_common.sh@156 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:23:19.500 11:33:37 -- common/autotest_common.sh@158 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:23:19.500 11:33:37 -- common/autotest_common.sh@160 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:23:19.500 11:33:37 -- common/autotest_common.sh@163 -- # : 00:23:19.500 11:33:37 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:23:19.500 11:33:37 -- common/autotest_common.sh@165 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:23:19.500 11:33:37 -- common/autotest_common.sh@167 -- # : 0 00:23:19.500 11:33:37 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:19.500 11:33:37 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@171 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.500 11:33:37 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:19.500 11:33:37 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:19.500 11:33:37 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:19.500 11:33:37 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:19.500 11:33:37 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:19.500 11:33:37 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:23:19.500 11:33:37 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:19.500 11:33:37 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:19.500 11:33:37 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:19.500 11:33:37 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:19.500 11:33:37 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:19.500 11:33:37 -- 
common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:23:19.500 11:33:37 -- common/autotest_common.sh@196 -- # cat 00:23:19.500 11:33:37 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:23:19.500 11:33:37 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:19.500 11:33:37 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:19.501 11:33:37 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:19.501 11:33:37 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:19.501 11:33:37 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:23:19.501 11:33:37 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:23:19.501 11:33:37 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:19.501 11:33:37 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:19.501 11:33:37 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:19.501 11:33:37 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:19.501 11:33:37 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:23:19.501 11:33:37 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:23:19.501 11:33:37 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:19.501 11:33:37 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:19.501 11:33:37 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:19.501 11:33:37 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:19.501 11:33:37 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:19.501 11:33:37 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:19.501 11:33:37 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:23:19.501 11:33:37 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:23:19.501 11:33:37 -- common/autotest_common.sh@249 -- # _LCOV= 00:23:19.501 11:33:37 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:23:19.501 11:33:37 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:23:19.501 11:33:37 -- common/autotest_common.sh@255 -- # lcov_opt= 00:23:19.501 11:33:37 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:23:19.501 11:33:37 -- common/autotest_common.sh@259 -- # export valgrind= 00:23:19.501 11:33:37 -- common/autotest_common.sh@259 -- # valgrind= 00:23:19.501 11:33:37 -- common/autotest_common.sh@265 -- # uname -s 00:23:19.501 11:33:37 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:23:19.501 11:33:37 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:23:19.501 11:33:37 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:23:19.501 11:33:37 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:23:19.501 11:33:37 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:19.501 11:33:37 -- 
common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@275 -- # MAKE=make 00:23:19.501 11:33:37 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:23:19.501 11:33:37 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:23:19.501 11:33:37 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:23:19.501 11:33:37 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:19.501 11:33:37 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:23:19.501 11:33:37 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:23:19.501 11:33:37 -- common/autotest_common.sh@319 -- # [[ -z 97511 ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@319 -- # kill -0 97511 00:23:19.501 11:33:37 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:23:19.501 11:33:37 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:23:19.501 11:33:37 -- common/autotest_common.sh@332 -- # local mount target_dir 00:23:19.501 11:33:37 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:23:19.501 11:33:37 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:23:19.501 11:33:37 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:23:19.501 11:33:37 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:23:19.501 11:33:37 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.OLeK6Z 00:23:19.501 11:33:37 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:19.501 11:33:37 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:23:19.501 11:33:37 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.OLeK6Z/tests/interrupt /tmp/spdk.OLeK6Z 00:23:19.501 11:33:37 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@328 -- # df -T 00:23:19.501 11:33:37 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=1249312768 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254027264 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=4714496 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=9056169984 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19681529856 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=10608582656 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # 
fss["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=6268858368 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6270115840 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda16 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=777306112 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=923156480 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=81207296 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=103000064 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=6395904 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=1254010880 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254023168 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:23:19.501 11:33:37 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # avails["$mount"]=98302382080 00:23:19.501 11:33:37 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:23:19.501 11:33:37 -- common/autotest_common.sh@364 -- # uses["$mount"]=1400397824 00:23:19.501 11:33:37 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:19.501 11:33:37 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:23:19.501 * Looking for test storage... 
00:23:19.501 11:33:37 -- common/autotest_common.sh@369 -- # local target_space new_size 00:23:19.501 11:33:37 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:23:19.501 11:33:37 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.501 11:33:37 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:19.501 11:33:37 -- common/autotest_common.sh@373 -- # mount=/ 00:23:19.501 11:33:37 -- common/autotest_common.sh@375 -- # target_space=9056169984 00:23:19.501 11:33:37 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:23:19.502 11:33:37 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:23:19.502 11:33:37 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:23:19.502 11:33:37 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:23:19.502 11:33:37 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:23:19.502 11:33:37 -- common/autotest_common.sh@382 -- # new_size=12823175168 00:23:19.502 11:33:37 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:19.502 11:33:37 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.502 11:33:37 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.502 11:33:37 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:19.502 11:33:37 -- common/autotest_common.sh@390 -- # return 0 00:23:19.502 11:33:37 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:23:19.502 11:33:37 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:23:19.502 11:33:37 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:19.502 11:33:37 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:19.502 11:33:37 -- common/autotest_common.sh@1682 -- # true 00:23:19.502 11:33:37 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:23:19.502 11:33:37 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:19.502 11:33:37 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:19.502 11:33:37 -- common/autotest_common.sh@27 -- # exec 00:23:19.502 11:33:37 -- common/autotest_common.sh@29 -- # exec 00:23:19.502 11:33:37 -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:19.502 11:33:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:19.502 11:33:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:19.502 11:33:37 -- common/autotest_common.sh@18 -- # set -x 00:23:19.502 11:33:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:19.502 11:33:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:19.502 11:33:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:19.762 11:33:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:19.762 11:33:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:19.762 11:33:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:19.762 11:33:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:19.762 11:33:37 -- scripts/common.sh@335 -- # IFS=.-: 00:23:19.762 11:33:37 -- scripts/common.sh@335 -- # read -ra ver1 00:23:19.762 11:33:37 -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.762 11:33:37 -- scripts/common.sh@336 -- # read -ra ver2 00:23:19.762 11:33:37 -- scripts/common.sh@337 -- # local 'op=<' 00:23:19.762 11:33:37 -- scripts/common.sh@339 -- # ver1_l=2 00:23:19.762 11:33:37 -- scripts/common.sh@340 -- # ver2_l=1 00:23:19.762 11:33:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:19.762 11:33:37 -- scripts/common.sh@343 -- # case "$op" in 00:23:19.762 11:33:37 -- scripts/common.sh@344 -- # : 1 00:23:19.762 11:33:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:19.762 11:33:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:19.762 11:33:37 -- scripts/common.sh@364 -- # decimal 1 00:23:19.762 11:33:37 -- scripts/common.sh@352 -- # local d=1 00:23:19.762 11:33:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.762 11:33:37 -- scripts/common.sh@354 -- # echo 1 00:23:19.762 11:33:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:19.762 11:33:37 -- scripts/common.sh@365 -- # decimal 2 00:23:19.762 11:33:37 -- scripts/common.sh@352 -- # local d=2 00:23:19.762 11:33:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.762 11:33:37 -- scripts/common.sh@354 -- # echo 2 00:23:19.762 11:33:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:19.762 11:33:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:19.762 11:33:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:19.762 11:33:37 -- scripts/common.sh@367 -- # return 0 00:23:19.762 11:33:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.762 11:33:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:19.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.762 --rc genhtml_branch_coverage=1 00:23:19.762 --rc genhtml_function_coverage=1 00:23:19.762 --rc genhtml_legend=1 00:23:19.762 --rc geninfo_all_blocks=1 00:23:19.762 --rc geninfo_unexecuted_blocks=1 00:23:19.762 00:23:19.762 ' 00:23:19.762 11:33:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:19.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.762 --rc genhtml_branch_coverage=1 00:23:19.762 --rc genhtml_function_coverage=1 00:23:19.762 --rc genhtml_legend=1 00:23:19.762 --rc geninfo_all_blocks=1 00:23:19.762 --rc geninfo_unexecuted_blocks=1 00:23:19.762 00:23:19.762 ' 00:23:19.762 11:33:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:19.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.762 --rc genhtml_branch_coverage=1 00:23:19.762 --rc genhtml_function_coverage=1 00:23:19.762 --rc genhtml_legend=1 00:23:19.762 --rc geninfo_all_blocks=1 00:23:19.762 --rc 
geninfo_unexecuted_blocks=1 00:23:19.762 00:23:19.762 ' 00:23:19.762 11:33:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:19.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.762 --rc genhtml_branch_coverage=1 00:23:19.762 --rc genhtml_function_coverage=1 00:23:19.762 --rc genhtml_legend=1 00:23:19.762 --rc geninfo_all_blocks=1 00:23:19.762 --rc geninfo_unexecuted_blocks=1 00:23:19.762 00:23:19.762 ' 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:19.762 11:33:37 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:19.762 11:33:37 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:19.762 11:33:37 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=97573 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.762 11:33:37 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 97573 /var/tmp/spdk.sock 00:23:19.762 11:33:37 -- common/autotest_common.sh@829 -- # '[' -z 97573 ']' 00:23:19.762 11:33:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.762 11:33:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.762 11:33:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.762 11:33:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.762 11:33:37 -- common/autotest_common.sh@10 -- # set +x 00:23:19.762 [2024-11-26 11:33:37.827589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
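[annotation] start_intr_tgt above backgrounds the interrupt_tgt example app on core mask 0x07 and then sits in waitforlisten (max_retries=100) until the JSON-RPC UNIX socket at /var/tmp/spdk.sock answers. A simplified sketch of that launch-and-poll pattern; probing readiness with `rpc.py rpc_get_methods` is one plausible check, and the real helper's probe may differ:

    #!/usr/bin/env bash
    # Launch an SPDK app, then poll until its RPC socket is serviceable.
    rpc_addr=/var/tmp/spdk.sock
    rootdir=/home/vagrant/spdk_repo/spdk

    "$rootdir/build/examples/interrupt_tgt" -m 0x07 -r "$rpc_addr" -E -g &
    pid=$!
    trap 'kill "$pid"' SIGINT SIGTERM EXIT

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do           # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || exit 1  # target died: stop waiting
        if [[ -S $rpc_addr ]] &&
           "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break                             # socket up and RPC answering
        fi
        sleep 0.1
    done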
00:23:19.762 [2024-11-26 11:33:37.828009] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97573 ] 00:23:19.762 [2024-11-26 11:33:37.992609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:20.046 [2024-11-26 11:33:38.031201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.046 [2024-11-26 11:33:38.031228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.046 [2024-11-26 11:33:38.031273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.046 [2024-11-26 11:33:38.073270] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:20.645 11:33:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.645 11:33:38 -- common/autotest_common.sh@862 -- # return 0 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:23:20.645 11:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.645 11:33:38 -- common/autotest_common.sh@10 -- # set +x 00:23:20.645 11:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:23:20.645 "name": "app_thread", 00:23:20.645 "id": 1, 00:23:20.645 "active_pollers": [], 00:23:20.645 "timed_pollers": [ 00:23:20.645 { 00:23:20.645 "name": "rpc_subsystem_poll", 00:23:20.645 "id": 1, 00:23:20.645 "state": "waiting", 00:23:20.645 "run_count": 0, 00:23:20.645 "busy_count": 0, 00:23:20.645 "period_ticks": 8800000 00:23:20.645 } 00:23:20.645 ], 00:23:20.645 "paused_pollers": [] 00:23:20.645 }' 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:23:20.645 11:33:38 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:23:20.645 11:33:38 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:20.645 11:33:38 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:20.645 11:33:38 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:20.645 5000+0 records in 00:23:20.645 5000+0 records out 00:23:20.645 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0213451 s, 480 MB/s 00:23:20.645 11:33:38 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:20.905 AIO0 00:23:20.905 11:33:39 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:23:21.163 11:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 
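[annotation] Everything the test asserts comes over JSON-RPC: thread_get_pollers dumps app_thread's poller lists, jq extracts the names for the "native" snapshot, and only then is a new poller registered by building an AIO bdev over a freshly zeroed 10 MB file. Roughly the same sequence, condensed (the scratch-file path here is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Snapshot the first thread's pollers as JSON.
    app_thread=$("$rpc" thread_get_pollers | jq -r '.threads[0]')

    # Names only, matching the jq filters in the trace.
    native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    native_pollers+=" $(jq -r '.timed_pollers[].name' <<< "$app_thread")"
    echo "before: $native_pollers"

    # Register a new poller by attaching an AIO bdev to a zeroed scratch
    # file, then re-query thread_get_pollers and compare the snapshots.
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
    "$rpc" bdev_aio_create /tmp/aiofile AIO0 2048
    "$rpc" bdev_wait_for_examine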
00:23:21.163 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:23:21.163 11:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:23:21.163 "name": "app_thread", 00:23:21.163 "id": 1, 00:23:21.163 "active_pollers": [], 00:23:21.163 "timed_pollers": [ 00:23:21.163 { 00:23:21.163 "name": "rpc_subsystem_poll", 00:23:21.163 "id": 1, 00:23:21.163 "state": "waiting", 00:23:21.163 "run_count": 0, 00:23:21.163 "busy_count": 0, 00:23:21.163 "period_ticks": 8800000 00:23:21.163 } 00:23:21.163 ], 00:23:21.163 "paused_pollers": [] 00:23:21.163 }' 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:23:21.163 11:33:39 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 97573 00:23:21.163 11:33:39 -- common/autotest_common.sh@936 -- # '[' -z 97573 ']' 00:23:21.164 11:33:39 -- common/autotest_common.sh@940 -- # kill -0 97573 00:23:21.164 11:33:39 -- common/autotest_common.sh@941 -- # uname 00:23:21.164 11:33:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:21.164 11:33:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97573 00:23:21.422 11:33:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:21.422 11:33:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:21.422 killing process with pid 97573 00:23:21.422 11:33:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97573' 00:23:21.422 11:33:39 -- common/autotest_common.sh@955 -- # kill 97573 00:23:21.422 11:33:39 -- common/autotest_common.sh@960 -- # wait 97573 00:23:21.422 11:33:39 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:23:21.422 11:33:39 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:23:21.422 ************************************ 00:23:21.422 END TEST reap_unregistered_poller 00:23:21.422 ************************************ 00:23:21.422 00:23:21.422 real 0m2.301s 00:23:21.422 user 0m1.366s 00:23:21.422 sys 0m0.534s 00:23:21.422 11:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:21.422 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:23:21.682 11:33:39 -- spdk/autotest.sh@191 -- # uname -s 00:23:21.682 11:33:39 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:23:21.682 11:33:39 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:23:21.682 11:33:39 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:23:21.682 11:33:39 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:23:21.682 11:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:21.682 11:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.682 11:33:39 -- common/autotest_common.sh@10 
-- # set +x 00:23:21.682 ************************************ 00:23:21.682 START TEST spdk_dd 00:23:21.682 ************************************ 00:23:21.682 11:33:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:23:21.682 * Looking for test storage... 00:23:21.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:21.682 11:33:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:21.682 11:33:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:21.682 11:33:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:21.682 11:33:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:21.682 11:33:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:21.682 11:33:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:21.682 11:33:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:21.682 11:33:39 -- scripts/common.sh@335 -- # IFS=.-: 00:23:21.682 11:33:39 -- scripts/common.sh@335 -- # read -ra ver1 00:23:21.682 11:33:39 -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.682 11:33:39 -- scripts/common.sh@336 -- # read -ra ver2 00:23:21.682 11:33:39 -- scripts/common.sh@337 -- # local 'op=<' 00:23:21.682 11:33:39 -- scripts/common.sh@339 -- # ver1_l=2 00:23:21.682 11:33:39 -- scripts/common.sh@340 -- # ver2_l=1 00:23:21.682 11:33:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:21.682 11:33:39 -- scripts/common.sh@343 -- # case "$op" in 00:23:21.682 11:33:39 -- scripts/common.sh@344 -- # : 1 00:23:21.682 11:33:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:21.682 11:33:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.682 11:33:39 -- scripts/common.sh@364 -- # decimal 1 00:23:21.682 11:33:39 -- scripts/common.sh@352 -- # local d=1 00:23:21.682 11:33:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.682 11:33:39 -- scripts/common.sh@354 -- # echo 1 00:23:21.682 11:33:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:21.682 11:33:39 -- scripts/common.sh@365 -- # decimal 2 00:23:21.682 11:33:39 -- scripts/common.sh@352 -- # local d=2 00:23:21.682 11:33:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.682 11:33:39 -- scripts/common.sh@354 -- # echo 2 00:23:21.682 11:33:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:21.682 11:33:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:21.682 11:33:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:21.682 11:33:39 -- scripts/common.sh@367 -- # return 0 00:23:21.682 11:33:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.682 11:33:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.682 --rc genhtml_branch_coverage=1 00:23:21.682 --rc genhtml_function_coverage=1 00:23:21.682 --rc genhtml_legend=1 00:23:21.682 --rc geninfo_all_blocks=1 00:23:21.682 --rc geninfo_unexecuted_blocks=1 00:23:21.682 00:23:21.682 ' 00:23:21.682 11:33:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.682 --rc genhtml_branch_coverage=1 00:23:21.682 --rc genhtml_function_coverage=1 00:23:21.682 --rc genhtml_legend=1 00:23:21.682 --rc geninfo_all_blocks=1 00:23:21.682 --rc geninfo_unexecuted_blocks=1 00:23:21.682 00:23:21.682 ' 00:23:21.682 11:33:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:21.682 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:21.682 --rc genhtml_branch_coverage=1 00:23:21.682 --rc genhtml_function_coverage=1 00:23:21.682 --rc genhtml_legend=1 00:23:21.682 --rc geninfo_all_blocks=1 00:23:21.682 --rc geninfo_unexecuted_blocks=1 00:23:21.682 00:23:21.682 ' 00:23:21.682 11:33:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.682 --rc genhtml_branch_coverage=1 00:23:21.682 --rc genhtml_function_coverage=1 00:23:21.682 --rc genhtml_legend=1 00:23:21.682 --rc geninfo_all_blocks=1 00:23:21.682 --rc geninfo_unexecuted_blocks=1 00:23:21.682 00:23:21.682 ' 00:23:21.682 11:33:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.682 11:33:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.682 11:33:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.682 11:33:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.682 11:33:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:21.682 11:33:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:21.682 11:33:39 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:21.682 11:33:39 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:21.682 11:33:39 -- paths/export.sh@6 -- # export PATH 00:23:21.683 11:33:39 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:21.683 11:33:39 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:21.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:22.201 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:22.769 11:33:40 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:23:22.769 11:33:40 -- dd/dd.sh@11 -- # nvme_in_userspace 00:23:22.769 11:33:40 -- scripts/common.sh@311 -- # local bdf bdfs 00:23:22.769 11:33:40 -- scripts/common.sh@312 -- # local nvmes 00:23:22.769 11:33:40 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:23:22.769 11:33:40 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:22.770 11:33:40 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:23:22.770 11:33:40 -- scripts/common.sh@297 -- # local bdf= 00:23:22.770 11:33:40 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:23:22.770 11:33:40 -- scripts/common.sh@232 -- # local class 00:23:22.770 11:33:40 -- scripts/common.sh@233 -- # local subclass 00:23:22.770 11:33:40 -- scripts/common.sh@234 -- # local progif 00:23:22.770 11:33:40 -- scripts/common.sh@235 -- # printf %02x 1 00:23:22.770 11:33:40 -- scripts/common.sh@235 -- # class=01 00:23:22.770 11:33:40 -- scripts/common.sh@236 -- # printf %02x 8 00:23:22.770 11:33:40 -- scripts/common.sh@236 -- # subclass=08 00:23:22.770 11:33:40 -- scripts/common.sh@237 -- # printf %02x 2 00:23:22.770 11:33:40 -- scripts/common.sh@237 -- # progif=02 00:23:22.770 11:33:40 -- scripts/common.sh@239 -- # hash lspci 00:23:22.770 11:33:40 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:23:22.770 11:33:40 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:23:22.770 11:33:40 -- scripts/common.sh@242 -- # grep -i -- -p02 00:23:22.770 11:33:40 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:22.770 11:33:40 -- scripts/common.sh@244 -- # tr -d '"' 00:23:22.770 11:33:40 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:22.770 11:33:40 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:23:22.770 11:33:40 -- scripts/common.sh@15 -- # local i 00:23:22.770 11:33:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:23:22.770 11:33:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:22.770 11:33:40 -- scripts/common.sh@24 -- # return 0 00:23:22.770 11:33:40 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:23:22.770 11:33:40 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:23:22.770 11:33:40 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:23:22.770 11:33:40 -- scripts/common.sh@322 -- # uname -s 00:23:22.770 11:33:40 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:23:22.770 11:33:40 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:23:22.770 11:33:40 -- scripts/common.sh@327 -- # (( 1 )) 00:23:22.770 11:33:40 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:23:22.770 11:33:40 -- dd/dd.sh@13 -- # check_liburing 
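The nvme_in_userspace scan traced above reduces to a single shell pipeline: list PCI devices with lspci, keep entries whose programming interface is 02, match class/subclass 0108 (mass storage / NVM Express), and print the bare BDF. A condensed sketch, reassembled from the exact commands in the trace:

    # Find NVMe controller BDFs the way iter_pci_class_code does:
    # class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe).
    lspci -mm -n -D \
      | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'

On this runner the scan yields a single controller, 0000:00:06.0, which the rest of the dd tests target.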
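The check_liburing step the trace enters next does not inspect build files; it asks the dynamic linker for spdk_dd's dependency list and matches liburing in it. A minimal sketch of the same check, using the binary path from this run:

    # LD_TRACE_LOADED_OBJECTS=1 makes ld.so print the dependency list
    # (what ldd does) instead of executing the program.
    LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      | while read -r lib _ so _; do
          [[ $lib == liburing.so.* ]] && echo '* spdk_dd linked to liburing'
        done

Because liburing.so.2 shows up in that list while the build config dumped below has CONFIG_URING=n, the suite prints its warning and forces liburing_in_use=1 anyway.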
00:23:22.770 11:33:40 -- dd/common.sh@139 -- # local lib so 00:23:22.770 11:33:40 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:23:22.770 11:33:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:23:22.770 11:33:40 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:23:22.770 11:33:40 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.770 11:33:40 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:23:22.770 11:33:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:23:22.770 11:33:40 -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:23:22.770 11:33:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:23:22.770 11:33:40 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:23:22.770 11:33:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:23:22.770 11:33:40 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:23:22.770 11:33:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:23:22.770 11:33:40 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:23:22.770 11:33:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:23:22.770 11:33:40 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:23:22.770 11:33:40 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:23:22.770 * spdk_dd linked to liburing 00:23:22.770 11:33:40 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:22.770 11:33:40 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:22.770 11:33:40 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:22.770 11:33:40 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:22.770 11:33:40 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:22.770 11:33:40 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:22.770 11:33:40 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:22.770 11:33:40 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:22.770 11:33:40 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:22.770 11:33:40 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:22.770 11:33:40 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:22.770 11:33:40 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:22.770 11:33:40 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:22.770 11:33:40 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:22.770 11:33:40 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:22.770 11:33:40 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:22.770 11:33:40 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:22.770 11:33:40 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:22.770 11:33:40 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:22.770 11:33:40 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:22.770 11:33:40 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:22.770 11:33:40 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:22.770 11:33:40 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:22.770 11:33:40 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:22.770 11:33:40 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:22.770 11:33:40 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:22.770 11:33:40 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:22.770 11:33:40 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 
00:23:22.770 11:33:40 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:22.770 11:33:40 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:23:22.770 11:33:40 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:22.770 11:33:40 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:22.770 11:33:40 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:22.770 11:33:40 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:22.770 11:33:40 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:22.770 11:33:40 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:22.770 11:33:40 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:22.770 11:33:40 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:23:22.770 11:33:40 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:22.770 11:33:40 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:22.770 11:33:40 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:22.770 11:33:40 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:22.770 11:33:40 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:23:22.770 11:33:40 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:22.770 11:33:40 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:23:22.770 11:33:40 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:22.770 11:33:40 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:22.770 11:33:40 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:23:22.770 11:33:40 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:23:22.770 11:33:40 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:22.771 11:33:40 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:23:22.771 11:33:40 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:23:22.771 11:33:40 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:23:22.771 11:33:40 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:23:22.771 11:33:40 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:23:22.771 11:33:40 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:23:22.771 11:33:40 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:23:22.771 11:33:40 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:23:22.771 11:33:40 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:23:22.771 11:33:40 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:23:22.771 11:33:40 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:23:22.771 11:33:40 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:23:22.771 11:33:40 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:22.771 11:33:40 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:23:22.771 11:33:40 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:23:22.771 11:33:40 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:23:22.771 11:33:40 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:23:22.771 11:33:40 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:22.771 11:33:40 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:23:22.771 11:33:40 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:23:22.771 11:33:40 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:23:22.771 11:33:40 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:23:22.771 11:33:40 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:23:22.771 11:33:40 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:23:22.771 11:33:40 -- 
common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:23:22.771 11:33:40 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:23:22.771 11:33:40 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:23:22.771 11:33:40 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:23:22.771 11:33:40 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:22.771 11:33:40 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:23:22.771 11:33:40 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:23:22.771 11:33:40 -- dd/common.sh@149 -- # [[ n != y ]] 00:23:22.771 11:33:40 -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n' 00:23:22.771 * spdk_dd built with liburing, but no liburing support requested? 00:23:22.771 11:33:40 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:23:22.771 11:33:40 -- dd/common.sh@156 -- # export liburing_in_use=1 00:23:22.771 11:33:40 -- dd/common.sh@156 -- # liburing_in_use=1 00:23:22.771 11:33:40 -- dd/common.sh@157 -- # return 0 00:23:22.771 11:33:40 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:23:22.771 11:33:40 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:23:22.771 11:33:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:22.771 11:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:22.771 11:33:40 -- common/autotest_common.sh@10 -- # set +x 00:23:22.771 ************************************ 00:23:22.771 START TEST spdk_dd_basic_rw 00:23:22.771 ************************************ 00:23:22.771 11:33:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:23:22.771 * Looking for test storage... 00:23:22.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:22.771 11:33:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:22.771 11:33:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:22.771 11:33:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:23.031 11:33:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:23.031 11:33:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:23.031 11:33:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:23.031 11:33:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:23.031 11:33:41 -- scripts/common.sh@335 -- # IFS=.-: 00:23:23.031 11:33:41 -- scripts/common.sh@335 -- # read -ra ver1 00:23:23.031 11:33:41 -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.031 11:33:41 -- scripts/common.sh@336 -- # read -ra ver2 00:23:23.031 11:33:41 -- scripts/common.sh@337 -- # local 'op=<' 00:23:23.031 11:33:41 -- scripts/common.sh@339 -- # ver1_l=2 00:23:23.031 11:33:41 -- scripts/common.sh@340 -- # ver2_l=1 00:23:23.031 11:33:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:23.031 11:33:41 -- scripts/common.sh@343 -- # case "$op" in 00:23:23.031 11:33:41 -- scripts/common.sh@344 -- # : 1 00:23:23.031 11:33:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:23.031 11:33:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.031 11:33:41 -- scripts/common.sh@364 -- # decimal 1 00:23:23.031 11:33:41 -- scripts/common.sh@352 -- # local d=1 00:23:23.031 11:33:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.031 11:33:41 -- scripts/common.sh@354 -- # echo 1 00:23:23.031 11:33:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:23.031 11:33:41 -- scripts/common.sh@365 -- # decimal 2 00:23:23.031 11:33:41 -- scripts/common.sh@352 -- # local d=2 00:23:23.031 11:33:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.031 11:33:41 -- scripts/common.sh@354 -- # echo 2 00:23:23.031 11:33:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:23.031 11:33:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:23.031 11:33:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:23.031 11:33:41 -- scripts/common.sh@367 -- # return 0 00:23:23.031 11:33:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.031 11:33:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.031 --rc genhtml_branch_coverage=1 00:23:23.031 --rc genhtml_function_coverage=1 00:23:23.031 --rc genhtml_legend=1 00:23:23.031 --rc geninfo_all_blocks=1 00:23:23.031 --rc geninfo_unexecuted_blocks=1 00:23:23.031 00:23:23.031 ' 00:23:23.031 11:33:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.031 --rc genhtml_branch_coverage=1 00:23:23.031 --rc genhtml_function_coverage=1 00:23:23.031 --rc genhtml_legend=1 00:23:23.031 --rc geninfo_all_blocks=1 00:23:23.031 --rc geninfo_unexecuted_blocks=1 00:23:23.031 00:23:23.031 ' 00:23:23.031 11:33:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.031 --rc genhtml_branch_coverage=1 00:23:23.031 --rc genhtml_function_coverage=1 00:23:23.031 --rc genhtml_legend=1 00:23:23.031 --rc geninfo_all_blocks=1 00:23:23.031 --rc geninfo_unexecuted_blocks=1 00:23:23.031 00:23:23.031 ' 00:23:23.031 11:33:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.031 --rc genhtml_branch_coverage=1 00:23:23.031 --rc genhtml_function_coverage=1 00:23:23.031 --rc genhtml_legend=1 00:23:23.031 --rc geninfo_all_blocks=1 00:23:23.031 --rc geninfo_unexecuted_blocks=1 00:23:23.031 00:23:23.031 ' 00:23:23.031 11:33:41 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.031 11:33:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.031 11:33:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.031 11:33:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.031 11:33:41 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:23.031 11:33:41 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:23.031 11:33:41 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:23.031 11:33:41 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:23.031 11:33:41 -- paths/export.sh@6 -- # export PATH 00:23:23.031 11:33:41 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:23.031 11:33:41 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:23:23.031 11:33:41 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:23:23.031 11:33:41 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:23:23.031 11:33:41 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:23:23.031 11:33:41 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:23:23.031 11:33:41 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:23:23.031 11:33:41 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:23:23.031 11:33:41 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:23.031 11:33:41 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:23.031 11:33:41 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:23:23.031 11:33:41 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:23:23.031 11:33:41 -- dd/common.sh@126 -- # mapfile -t id 00:23:23.031 11:33:41 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:23:23.293 11:33:41 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported 
NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization 
Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2338 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:23:23.293 11:33:41 -- dd/common.sh@130 -- # lbaf=04 00:23:23.293 11:33:41 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple 
controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported 
Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2338 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range 
Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:23:23.293 11:33:41 -- dd/common.sh@132 -- # lbaf=4096 00:23:23.293 11:33:41 -- dd/common.sh@134 -- # echo 4096 00:23:23.293 11:33:41 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:23:23.293 11:33:41 -- dd/basic_rw.sh@96 -- # : 00:23:23.293 11:33:41 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:23:23.293 11:33:41 -- dd/basic_rw.sh@96 -- # gen_conf 00:23:23.293 11:33:41 -- dd/common.sh@31 -- # xtrace_disable 00:23:23.293 11:33:41 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:23:23.293 11:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:23.293 11:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:23.293 11:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:23.293 ************************************ 00:23:23.293 START TEST dd_bs_lt_native_bs 00:23:23.293 ************************************ 00:23:23.293 11:33:41 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:23:23.294 11:33:41 -- common/autotest_common.sh@650 -- # local es=0 00:23:23.294 11:33:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:23:23.294 { 00:23:23.294 "subsystems": [ 00:23:23.294 { 00:23:23.294 "subsystem": "bdev", 00:23:23.294 "config": [ 00:23:23.294 { 00:23:23.294 "params": { 00:23:23.294 "trtype": "pcie", 00:23:23.294 "traddr": "0000:00:06.0", 00:23:23.294 "name": "Nvme0" 00:23:23.294 }, 00:23:23.294 "method": "bdev_nvme_attach_controller" 00:23:23.294 }, 00:23:23.294 { 00:23:23.294 "method": "bdev_wait_for_examine" 00:23:23.294 } 00:23:23.294 ] 00:23:23.294 } 00:23:23.294 ] 00:23:23.294 } 00:23:23.294 11:33:41 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.294 11:33:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.294 11:33:41 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.294 11:33:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.294 11:33:41 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.294 11:33:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.294 11:33:41 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.294 11:33:41 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:23.294 11:33:41 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:23:23.294 [2024-11-26 11:33:41.382957] Starting SPDK v24.01.1-pre git 
sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:23.294 [2024-11-26 11:33:41.383129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97852 ] 00:23:23.553 [2024-11-26 11:33:41.553445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.553 [2024-11-26 11:33:41.601280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.553 [2024-11-26 11:33:41.734413] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:23:23.553 [2024-11-26 11:33:41.734521] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:23.813 [2024-11-26 11:33:41.819145] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:23.813 11:33:41 -- common/autotest_common.sh@653 -- # es=234 00:23:23.813 11:33:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.813 11:33:41 -- common/autotest_common.sh@662 -- # es=106 00:23:23.813 11:33:41 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:23.813 11:33:41 -- common/autotest_common.sh@670 -- # es=1 00:23:23.813 11:33:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.813 00:23:23.813 real 0m0.600s 00:23:23.813 user 0m0.330s 00:23:23.813 sys 0m0.188s 00:23:23.813 11:33:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:23.813 ************************************ 00:23:23.813 END TEST dd_bs_lt_native_bs 00:23:23.813 ************************************ 00:23:23.813 11:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 11:33:41 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:23:23.813 11:33:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:23.813 11:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:23.813 11:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 ************************************ 00:23:23.813 START TEST dd_rw 00:23:23.813 ************************************ 00:23:23.813 11:33:41 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:23:23.813 11:33:41 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:23:23.813 11:33:41 -- dd/basic_rw.sh@12 -- # local count size 00:23:23.813 11:33:41 -- dd/basic_rw.sh@13 -- # local qds bss 00:23:23.813 11:33:41 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:23:23.813 11:33:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:23:23.813 11:33:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:23:23.813 11:33:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:23:23.813 11:33:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:23:23.813 11:33:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:23:23.813 11:33:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:23:23.813 11:33:41 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:23:23.813 11:33:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:23.813 11:33:41 -- dd/basic_rw.sh@23 -- # count=15 00:23:23.813 11:33:41 -- dd/basic_rw.sh@24 -- # count=15 00:23:23.813 11:33:41 -- dd/basic_rw.sh@25 -- # size=61440 00:23:23.813 11:33:41 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:23:23.813 11:33:41 -- dd/common.sh@98 -- # xtrace_disable 00:23:23.813 11:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:24.381 11:33:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:23:24.381 11:33:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:24.381 11:33:42 -- dd/common.sh@31 -- # xtrace_disable 00:23:24.381 11:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:24.381 { 00:23:24.381 "subsystems": [ 00:23:24.381 { 00:23:24.381 "subsystem": "bdev", 00:23:24.381 "config": [ 00:23:24.381 { 00:23:24.381 "params": { 00:23:24.381 "trtype": "pcie", 00:23:24.381 "traddr": "0000:00:06.0", 00:23:24.381 "name": "Nvme0" 00:23:24.381 }, 00:23:24.381 "method": "bdev_nvme_attach_controller" 00:23:24.381 }, 00:23:24.381 { 00:23:24.381 "method": "bdev_wait_for_examine" 00:23:24.381 } 00:23:24.381 ] 00:23:24.381 } 00:23:24.381 ] 00:23:24.381 } 00:23:24.381 [2024-11-26 11:33:42.583238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:24.381 [2024-11-26 11:33:42.583413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97884 ] 00:23:24.641 [2024-11-26 11:33:42.747142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.641 [2024-11-26 11:33:42.784999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.900  [2024-11-26T11:33:43.130Z] Copying: 60/60 [kB] (average 19 MBps) 00:23:24.900 00:23:24.900 11:33:43 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:23:24.900 11:33:43 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:24.900 11:33:43 -- dd/common.sh@31 -- # xtrace_disable 00:23:24.900 11:33:43 -- common/autotest_common.sh@10 -- # set +x 00:23:24.900 { 00:23:24.900 "subsystems": [ 00:23:24.900 { 00:23:24.900 "subsystem": "bdev", 00:23:24.900 "config": [ 00:23:24.900 { 00:23:24.900 "params": { 00:23:24.900 "trtype": "pcie", 00:23:24.900 "traddr": "0000:00:06.0", 00:23:24.900 "name": "Nvme0" 00:23:24.900 }, 00:23:24.900 "method": "bdev_nvme_attach_controller" 00:23:24.900 }, 00:23:24.900 { 00:23:24.900 "method": "bdev_wait_for_examine" 00:23:24.900 } 00:23:24.900 ] 00:23:24.900 } 00:23:24.900 ] 00:23:24.900 } 00:23:25.159 [2024-11-26 11:33:43.160712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:25.159 [2024-11-26 11:33:43.160938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97898 ] 00:23:25.159 [2024-11-26 11:33:43.319749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.159 [2024-11-26 11:33:43.351266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.418  [2024-11-26T11:33:43.648Z] Copying: 60/60 [kB] (average 19 MBps) 00:23:25.418 00:23:25.676 11:33:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:25.676 11:33:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:23:25.676 11:33:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:25.676 11:33:43 -- dd/common.sh@11 -- # local nvme_ref= 00:23:25.676 11:33:43 -- dd/common.sh@12 -- # local size=61440 00:23:25.676 11:33:43 -- dd/common.sh@14 -- # local bs=1048576 00:23:25.676 11:33:43 -- dd/common.sh@15 -- # local count=1 00:23:25.676 11:33:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:25.676 11:33:43 -- dd/common.sh@18 -- # gen_conf 00:23:25.676 11:33:43 -- dd/common.sh@31 -- # xtrace_disable 00:23:25.676 11:33:43 -- common/autotest_common.sh@10 -- # set +x 00:23:25.676 { 00:23:25.676 "subsystems": [ 00:23:25.676 { 00:23:25.676 "subsystem": "bdev", 00:23:25.676 "config": [ 00:23:25.676 { 00:23:25.676 "params": { 00:23:25.676 "trtype": "pcie", 00:23:25.676 "traddr": "0000:00:06.0", 00:23:25.676 "name": "Nvme0" 00:23:25.676 }, 00:23:25.676 "method": "bdev_nvme_attach_controller" 00:23:25.676 }, 00:23:25.676 { 00:23:25.676 "method": "bdev_wait_for_examine" 00:23:25.676 } 00:23:25.676 ] 00:23:25.676 } 00:23:25.676 ] 00:23:25.676 } 00:23:25.676 [2024-11-26 11:33:43.720521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:25.676 [2024-11-26 11:33:43.720689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97912 ] 00:23:25.676 [2024-11-26 11:33:43.881505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.934 [2024-11-26 11:33:43.920920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.934  [2024-11-26T11:33:44.424Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:23:26.194 00:23:26.194 11:33:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:26.194 11:33:44 -- dd/basic_rw.sh@23 -- # count=15 00:23:26.194 11:33:44 -- dd/basic_rw.sh@24 -- # count=15 00:23:26.194 11:33:44 -- dd/basic_rw.sh@25 -- # size=61440 00:23:26.194 11:33:44 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:23:26.194 11:33:44 -- dd/common.sh@98 -- # xtrace_disable 00:23:26.194 11:33:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 11:33:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:23:26.763 11:33:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:26.763 11:33:44 -- dd/common.sh@31 -- # xtrace_disable 00:23:26.763 11:33:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 { 00:23:26.763 "subsystems": [ 00:23:26.763 { 00:23:26.763 "subsystem": "bdev", 00:23:26.763 "config": [ 00:23:26.763 { 00:23:26.763 "params": { 00:23:26.763 "trtype": "pcie", 00:23:26.763 "traddr": "0000:00:06.0", 00:23:26.763 "name": "Nvme0" 00:23:26.763 }, 00:23:26.763 "method": "bdev_nvme_attach_controller" 00:23:26.763 }, 00:23:26.763 { 00:23:26.763 "method": "bdev_wait_for_examine" 00:23:26.763 } 00:23:26.763 ] 00:23:26.763 } 00:23:26.763 ] 00:23:26.763 } 00:23:26.763 [2024-11-26 11:33:44.837299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:26.763 [2024-11-26 11:33:44.837459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97931 ] 00:23:27.023 [2024-11-26 11:33:45.003220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.023 [2024-11-26 11:33:45.040152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.023  [2024-11-26T11:33:45.513Z] Copying: 60/60 [kB] (average 58 MBps) 00:23:27.283 00:23:27.283 11:33:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:23:27.283 11:33:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:27.283 11:33:45 -- dd/common.sh@31 -- # xtrace_disable 00:23:27.283 11:33:45 -- common/autotest_common.sh@10 -- # set +x 00:23:27.283 { 00:23:27.283 "subsystems": [ 00:23:27.283 { 00:23:27.283 "subsystem": "bdev", 00:23:27.283 "config": [ 00:23:27.283 { 00:23:27.283 "params": { 00:23:27.283 "trtype": "pcie", 00:23:27.283 "traddr": "0000:00:06.0", 00:23:27.283 "name": "Nvme0" 00:23:27.283 }, 00:23:27.283 "method": "bdev_nvme_attach_controller" 00:23:27.283 }, 00:23:27.283 { 00:23:27.283 "method": "bdev_wait_for_examine" 00:23:27.283 } 00:23:27.283 ] 00:23:27.283 } 00:23:27.283 ] 00:23:27.283 } 00:23:27.283 [2024-11-26 11:33:45.412958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:27.283 [2024-11-26 11:33:45.413137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97945 ] 00:23:27.542 [2024-11-26 11:33:45.578671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.542 [2024-11-26 11:33:45.616554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.542  [2024-11-26T11:33:46.030Z] Copying: 60/60 [kB] (average 58 MBps) 00:23:27.800 00:23:27.800 11:33:45 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:27.800 11:33:45 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:23:27.800 11:33:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:27.800 11:33:45 -- dd/common.sh@11 -- # local nvme_ref= 00:23:27.800 11:33:45 -- dd/common.sh@12 -- # local size=61440 00:23:27.800 11:33:45 -- dd/common.sh@14 -- # local bs=1048576 00:23:27.800 11:33:45 -- dd/common.sh@15 -- # local count=1 00:23:27.800 11:33:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:27.800 11:33:45 -- dd/common.sh@18 -- # gen_conf 00:23:27.800 11:33:45 -- dd/common.sh@31 -- # xtrace_disable 00:23:27.800 11:33:45 -- common/autotest_common.sh@10 -- # set +x 00:23:27.800 { 00:23:27.800 "subsystems": [ 00:23:27.800 { 00:23:27.800 "subsystem": "bdev", 00:23:27.800 "config": [ 00:23:27.800 { 00:23:27.800 "params": { 00:23:27.800 "trtype": "pcie", 00:23:27.800 "traddr": "0000:00:06.0", 00:23:27.800 "name": "Nvme0" 00:23:27.800 }, 00:23:27.800 "method": "bdev_nvme_attach_controller" 00:23:27.800 }, 00:23:27.800 { 00:23:27.800 "method": "bdev_wait_for_examine" 00:23:27.800 } 00:23:27.800 ] 00:23:27.800 } 00:23:27.800 ] 00:23:27.800 } 00:23:27.800 [2024-11-26 
11:33:46.007745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:27.800 [2024-11-26 11:33:46.007939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97965 ] 00:23:28.058 [2024-11-26 11:33:46.168802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.058 [2024-11-26 11:33:46.206401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.316  [2024-11-26T11:33:46.546Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:28.316 00:23:28.316 11:33:46 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:23:28.316 11:33:46 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:28.316 11:33:46 -- dd/basic_rw.sh@23 -- # count=7 00:23:28.316 11:33:46 -- dd/basic_rw.sh@24 -- # count=7 00:23:28.316 11:33:46 -- dd/basic_rw.sh@25 -- # size=57344 00:23:28.316 11:33:46 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:23:28.316 11:33:46 -- dd/common.sh@98 -- # xtrace_disable 00:23:28.317 11:33:46 -- common/autotest_common.sh@10 -- # set +x 00:23:28.885 11:33:47 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:23:28.885 11:33:47 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:28.885 11:33:47 -- dd/common.sh@31 -- # xtrace_disable 00:23:28.885 11:33:47 -- common/autotest_common.sh@10 -- # set +x 00:23:28.885 { 00:23:28.885 "subsystems": [ 00:23:28.885 { 00:23:28.885 "subsystem": "bdev", 00:23:28.885 "config": [ 00:23:28.885 { 00:23:28.885 "params": { 00:23:28.885 "trtype": "pcie", 00:23:28.885 "traddr": "0000:00:06.0", 00:23:28.885 "name": "Nvme0" 00:23:28.885 }, 00:23:28.885 "method": "bdev_nvme_attach_controller" 00:23:28.885 }, 00:23:28.885 { 00:23:28.885 "method": "bdev_wait_for_examine" 00:23:28.885 } 00:23:28.885 ] 00:23:28.885 } 00:23:28.885 ] 00:23:28.885 } 00:23:28.885 [2024-11-26 11:33:47.082863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:28.885 [2024-11-26 11:33:47.083054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97983 ] 00:23:29.144 [2024-11-26 11:33:47.247066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.144 [2024-11-26 11:33:47.282795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.403  [2024-11-26T11:33:47.633Z] Copying: 56/56 [kB] (average 27 MBps) 00:23:29.403 00:23:29.403 11:33:47 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:23:29.403 11:33:47 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:29.403 11:33:47 -- dd/common.sh@31 -- # xtrace_disable 00:23:29.403 11:33:47 -- common/autotest_common.sh@10 -- # set +x 00:23:29.403 { 00:23:29.403 "subsystems": [ 00:23:29.403 { 00:23:29.403 "subsystem": "bdev", 00:23:29.403 "config": [ 00:23:29.403 { 00:23:29.403 "params": { 00:23:29.403 "trtype": "pcie", 00:23:29.403 "traddr": "0000:00:06.0", 00:23:29.403 "name": "Nvme0" 00:23:29.403 }, 00:23:29.403 "method": "bdev_nvme_attach_controller" 00:23:29.403 }, 00:23:29.403 { 00:23:29.403 "method": "bdev_wait_for_examine" 00:23:29.403 } 00:23:29.403 ] 00:23:29.403 } 00:23:29.403 ] 00:23:29.403 } 00:23:29.662 [2024-11-26 11:33:47.648398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:29.663 [2024-11-26 11:33:47.648549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97992 ] 00:23:29.663 [2024-11-26 11:33:47.811270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.663 [2024-11-26 11:33:47.846938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.921  [2024-11-26T11:33:48.151Z] Copying: 56/56 [kB] (average 54 MBps) 00:23:29.921 00:23:30.180 11:33:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:30.180 11:33:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:23:30.180 11:33:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:30.180 11:33:48 -- dd/common.sh@11 -- # local nvme_ref= 00:23:30.180 11:33:48 -- dd/common.sh@12 -- # local size=57344 00:23:30.180 11:33:48 -- dd/common.sh@14 -- # local bs=1048576 00:23:30.180 11:33:48 -- dd/common.sh@15 -- # local count=1 00:23:30.180 11:33:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:30.180 11:33:48 -- dd/common.sh@18 -- # gen_conf 00:23:30.180 11:33:48 -- dd/common.sh@31 -- # xtrace_disable 00:23:30.180 11:33:48 -- common/autotest_common.sh@10 -- # set +x 00:23:30.180 { 00:23:30.180 "subsystems": [ 00:23:30.180 { 00:23:30.180 "subsystem": "bdev", 00:23:30.180 "config": [ 00:23:30.180 { 00:23:30.180 "params": { 00:23:30.180 "trtype": "pcie", 00:23:30.180 "traddr": "0000:00:06.0", 00:23:30.180 "name": "Nvme0" 00:23:30.180 }, 00:23:30.180 "method": "bdev_nvme_attach_controller" 00:23:30.180 }, 00:23:30.180 { 00:23:30.180 "method": "bdev_wait_for_examine" 00:23:30.180 } 00:23:30.180 ] 00:23:30.180 } 00:23:30.180 ] 00:23:30.180 } 00:23:30.180 [2024-11-26 
11:33:48.226799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:30.180 [2024-11-26 11:33:48.226990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98011 ] 00:23:30.180 [2024-11-26 11:33:48.391697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.439 [2024-11-26 11:33:48.428821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.439  [2024-11-26T11:33:48.928Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:30.698 00:23:30.698 11:33:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:30.698 11:33:48 -- dd/basic_rw.sh@23 -- # count=7 00:23:30.698 11:33:48 -- dd/basic_rw.sh@24 -- # count=7 00:23:30.698 11:33:48 -- dd/basic_rw.sh@25 -- # size=57344 00:23:30.698 11:33:48 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:23:30.698 11:33:48 -- dd/common.sh@98 -- # xtrace_disable 00:23:30.698 11:33:48 -- common/autotest_common.sh@10 -- # set +x 00:23:31.265 11:33:49 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:23:31.265 11:33:49 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:31.265 11:33:49 -- dd/common.sh@31 -- # xtrace_disable 00:23:31.265 11:33:49 -- common/autotest_common.sh@10 -- # set +x 00:23:31.265 { 00:23:31.265 "subsystems": [ 00:23:31.265 { 00:23:31.265 "subsystem": "bdev", 00:23:31.265 "config": [ 00:23:31.265 { 00:23:31.265 "params": { 00:23:31.265 "trtype": "pcie", 00:23:31.265 "traddr": "0000:00:06.0", 00:23:31.265 "name": "Nvme0" 00:23:31.265 }, 00:23:31.265 "method": "bdev_nvme_attach_controller" 00:23:31.265 }, 00:23:31.265 { 00:23:31.265 "method": "bdev_wait_for_examine" 00:23:31.265 } 00:23:31.265 ] 00:23:31.265 } 00:23:31.265 ] 00:23:31.265 } 00:23:31.265 [2024-11-26 11:33:49.303221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
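What repeats here for qd=64 is the same three-step cycle used for every bs/qd combination: generate random bytes, write them through the bdev, read the same block count back, and byte-compare input and output; the only knob that changes from the previous pass is the queue depth (--qd=64 instead of --qd=1). A condensed sketch of the loop, with gen_bytes assumed to emit the requested number of random bytes on stdout (the real basic_rw.sh helpers may differ in detail):

bss=(8192 16384)   # block sizes under test
qds=(1 64)         # queue depths under test

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=7                        # 3 for the 16 KiB block size
        size=$(( bs * count ))
        gen_bytes "$size" > dd.dump0   # fresh random payload
        # Write the payload, then read the same block count back out.
        spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q dd.dump0 dd.dump1      # any byte mismatch fails the test
        clear_nvme Nvme0n1 '' "$size"  # zero the bdev before the next pass
    done
done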
00:23:31.265 [2024-11-26 11:33:49.303397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98026 ] 00:23:31.265 [2024-11-26 11:33:49.466471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.265 [2024-11-26 11:33:49.499951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.524  [2024-11-26T11:33:50.014Z] Copying: 56/56 [kB] (average 54 MBps) 00:23:31.784 00:23:31.784 11:33:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:23:31.784 11:33:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:31.784 11:33:49 -- dd/common.sh@31 -- # xtrace_disable 00:23:31.784 11:33:49 -- common/autotest_common.sh@10 -- # set +x 00:23:31.784 { 00:23:31.784 "subsystems": [ 00:23:31.784 { 00:23:31.784 "subsystem": "bdev", 00:23:31.784 "config": [ 00:23:31.784 { 00:23:31.784 "params": { 00:23:31.784 "trtype": "pcie", 00:23:31.784 "traddr": "0000:00:06.0", 00:23:31.784 "name": "Nvme0" 00:23:31.784 }, 00:23:31.784 "method": "bdev_nvme_attach_controller" 00:23:31.784 }, 00:23:31.784 { 00:23:31.784 "method": "bdev_wait_for_examine" 00:23:31.784 } 00:23:31.784 ] 00:23:31.784 } 00:23:31.784 ] 00:23:31.784 } 00:23:31.784 [2024-11-26 11:33:49.873256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:31.784 [2024-11-26 11:33:49.873429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98039 ] 00:23:32.043 [2024-11-26 11:33:50.039851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.043 [2024-11-26 11:33:50.079306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.043  [2024-11-26T11:33:50.532Z] Copying: 56/56 [kB] (average 54 MBps) 00:23:32.302 00:23:32.302 11:33:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:32.302 11:33:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:23:32.302 11:33:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:32.302 11:33:50 -- dd/common.sh@11 -- # local nvme_ref= 00:23:32.302 11:33:50 -- dd/common.sh@12 -- # local size=57344 00:23:32.302 11:33:50 -- dd/common.sh@14 -- # local bs=1048576 00:23:32.302 11:33:50 -- dd/common.sh@15 -- # local count=1 00:23:32.302 11:33:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:32.302 11:33:50 -- dd/common.sh@18 -- # gen_conf 00:23:32.302 11:33:50 -- dd/common.sh@31 -- # xtrace_disable 00:23:32.302 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:23:32.302 { 00:23:32.302 "subsystems": [ 00:23:32.302 { 00:23:32.302 "subsystem": "bdev", 00:23:32.302 "config": [ 00:23:32.302 { 00:23:32.302 "params": { 00:23:32.302 "trtype": "pcie", 00:23:32.303 "traddr": "0000:00:06.0", 00:23:32.303 "name": "Nvme0" 00:23:32.303 }, 00:23:32.303 "method": "bdev_nvme_attach_controller" 00:23:32.303 }, 00:23:32.303 { 00:23:32.303 "method": "bdev_wait_for_examine" 00:23:32.303 } 00:23:32.303 ] 00:23:32.303 } 00:23:32.303 ] 00:23:32.303 } 00:23:32.303 [2024-11-26 
11:33:50.446011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:32.303 [2024-11-26 11:33:50.446174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98054 ] 00:23:32.562 [2024-11-26 11:33:50.595041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.562 [2024-11-26 11:33:50.628107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.562  [2024-11-26T11:33:51.052Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:32.822 00:23:32.822 11:33:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:23:32.822 11:33:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:32.822 11:33:50 -- dd/basic_rw.sh@23 -- # count=3 00:23:32.822 11:33:50 -- dd/basic_rw.sh@24 -- # count=3 00:23:32.822 11:33:50 -- dd/basic_rw.sh@25 -- # size=49152 00:23:32.822 11:33:50 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:23:32.822 11:33:50 -- dd/common.sh@98 -- # xtrace_disable 00:23:32.822 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:23:33.391 11:33:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:23:33.391 11:33:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:33.391 11:33:51 -- dd/common.sh@31 -- # xtrace_disable 00:23:33.391 11:33:51 -- common/autotest_common.sh@10 -- # set +x 00:23:33.391 { 00:23:33.391 "subsystems": [ 00:23:33.391 { 00:23:33.391 "subsystem": "bdev", 00:23:33.391 "config": [ 00:23:33.391 { 00:23:33.391 "params": { 00:23:33.391 "trtype": "pcie", 00:23:33.391 "traddr": "0000:00:06.0", 00:23:33.391 "name": "Nvme0" 00:23:33.391 }, 00:23:33.391 "method": "bdev_nvme_attach_controller" 00:23:33.391 }, 00:23:33.391 { 00:23:33.391 "method": "bdev_wait_for_examine" 00:23:33.391 } 00:23:33.391 ] 00:23:33.391 } 00:23:33.391 ] 00:23:33.391 } 00:23:33.391 [2024-11-26 11:33:51.444645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
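The numbers for this second block size follow the same arithmetic as the first sweep: total transfer size equals block size times block count, so 8192 x 7 = 57344 bytes (the "56/56 [kB]" copies above) and 16384 x 3 = 49152 bytes (the "48/48 [kB]" copies that follow). A one-liner to confirm:

echo $(( 16384 * 3 ))   # 49152 bytes = 48 KiB, the size passed to gen_bytes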
00:23:33.391 [2024-11-26 11:33:51.444829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98073 ] 00:23:33.391 [2024-11-26 11:33:51.610486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.650 [2024-11-26 11:33:51.649798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.650  [2024-11-26T11:33:52.140Z] Copying: 48/48 [kB] (average 46 MBps) 00:23:33.910 00:23:33.910 11:33:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:23:33.910 11:33:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:33.910 11:33:51 -- dd/common.sh@31 -- # xtrace_disable 00:23:33.910 11:33:51 -- common/autotest_common.sh@10 -- # set +x 00:23:33.910 { 00:23:33.910 "subsystems": [ 00:23:33.910 { 00:23:33.910 "subsystem": "bdev", 00:23:33.910 "config": [ 00:23:33.910 { 00:23:33.910 "params": { 00:23:33.910 "trtype": "pcie", 00:23:33.910 "traddr": "0000:00:06.0", 00:23:33.910 "name": "Nvme0" 00:23:33.910 }, 00:23:33.910 "method": "bdev_nvme_attach_controller" 00:23:33.910 }, 00:23:33.910 { 00:23:33.910 "method": "bdev_wait_for_examine" 00:23:33.910 } 00:23:33.910 ] 00:23:33.910 } 00:23:33.910 ] 00:23:33.910 } 00:23:33.910 [2024-11-26 11:33:52.036285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:33.910 [2024-11-26 11:33:52.036467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98086 ] 00:23:34.169 [2024-11-26 11:33:52.201076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.169 [2024-11-26 11:33:52.240449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.169  [2024-11-26T11:33:52.658Z] Copying: 48/48 [kB] (average 46 MBps) 00:23:34.428 00:23:34.428 11:33:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:34.428 11:33:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:23:34.428 11:33:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:34.428 11:33:52 -- dd/common.sh@11 -- # local nvme_ref= 00:23:34.428 11:33:52 -- dd/common.sh@12 -- # local size=49152 00:23:34.428 11:33:52 -- dd/common.sh@14 -- # local bs=1048576 00:23:34.428 11:33:52 -- dd/common.sh@15 -- # local count=1 00:23:34.428 11:33:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:34.428 11:33:52 -- dd/common.sh@18 -- # gen_conf 00:23:34.428 11:33:52 -- dd/common.sh@31 -- # xtrace_disable 00:23:34.428 11:33:52 -- common/autotest_common.sh@10 -- # set +x 00:23:34.428 { 00:23:34.428 "subsystems": [ 00:23:34.428 { 00:23:34.428 "subsystem": "bdev", 00:23:34.428 "config": [ 00:23:34.428 { 00:23:34.428 "params": { 00:23:34.428 "trtype": "pcie", 00:23:34.428 "traddr": "0000:00:06.0", 00:23:34.428 "name": "Nvme0" 00:23:34.428 }, 00:23:34.428 "method": "bdev_nvme_attach_controller" 00:23:34.428 }, 00:23:34.428 { 00:23:34.428 "method": "bdev_wait_for_examine" 00:23:34.428 } 00:23:34.428 ] 00:23:34.428 } 00:23:34.428 ] 00:23:34.428 } 00:23:34.428 [2024-11-26 
11:33:52.621046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:34.428 [2024-11-26 11:33:52.621245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98101 ] 00:23:34.688 [2024-11-26 11:33:52.786392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.688 [2024-11-26 11:33:52.825216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.947  [2024-11-26T11:33:53.177Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:34.947 00:23:34.947 11:33:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:34.947 11:33:53 -- dd/basic_rw.sh@23 -- # count=3 00:23:34.947 11:33:53 -- dd/basic_rw.sh@24 -- # count=3 00:23:34.947 11:33:53 -- dd/basic_rw.sh@25 -- # size=49152 00:23:34.947 11:33:53 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:23:34.947 11:33:53 -- dd/common.sh@98 -- # xtrace_disable 00:23:34.947 11:33:53 -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 11:33:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:23:35.516 11:33:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:35.516 11:33:53 -- dd/common.sh@31 -- # xtrace_disable 00:23:35.516 11:33:53 -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 { 00:23:35.516 "subsystems": [ 00:23:35.516 { 00:23:35.516 "subsystem": "bdev", 00:23:35.516 "config": [ 00:23:35.516 { 00:23:35.516 "params": { 00:23:35.516 "trtype": "pcie", 00:23:35.516 "traddr": "0000:00:06.0", 00:23:35.516 "name": "Nvme0" 00:23:35.516 }, 00:23:35.516 "method": "bdev_nvme_attach_controller" 00:23:35.516 }, 00:23:35.516 { 00:23:35.516 "method": "bdev_wait_for_examine" 00:23:35.516 } 00:23:35.516 ] 00:23:35.516 } 00:23:35.516 ] 00:23:35.516 } 00:23:35.516 [2024-11-26 11:33:53.657056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
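Each clear_nvme call between passes is what produces the recurring "Copying: 1024/1024 [kB]" lines: it launches spdk_dd once more to overwrite the head of the bdev with a single 1 MiB block of zeros so the next pass starts from clean media. A plausible reconstruction from the locals visible in the xtrace (bdev, nvme_ref, size, bs=1048576, count=1); the real dd/common.sh helper may use size and nvme_ref in ways the log does not show:

clear_nvme() {
    local bdev=$1       # e.g. Nvme0n1
    local nvme_ref=$2   # unused in this sketch
    local size=$3       # bytes the previous pass touched
    local bs=1048576    # zero in one 1 MiB block...
    local count=1       # ...which covers every size used by these tests
    spdk_dd --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
}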
00:23:35.516 [2024-11-26 11:33:53.657226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98120 ] 00:23:35.776 [2024-11-26 11:33:53.825939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.776 [2024-11-26 11:33:53.872473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.776  [2024-11-26T11:33:54.265Z] Copying: 48/48 [kB] (average 46 MBps) 00:23:36.036 00:23:36.036 11:33:54 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:23:36.036 11:33:54 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:36.036 11:33:54 -- dd/common.sh@31 -- # xtrace_disable 00:23:36.036 11:33:54 -- common/autotest_common.sh@10 -- # set +x 00:23:36.036 { 00:23:36.036 "subsystems": [ 00:23:36.036 { 00:23:36.036 "subsystem": "bdev", 00:23:36.036 "config": [ 00:23:36.036 { 00:23:36.036 "params": { 00:23:36.036 "trtype": "pcie", 00:23:36.036 "traddr": "0000:00:06.0", 00:23:36.036 "name": "Nvme0" 00:23:36.036 }, 00:23:36.036 "method": "bdev_nvme_attach_controller" 00:23:36.036 }, 00:23:36.036 { 00:23:36.036 "method": "bdev_wait_for_examine" 00:23:36.036 } 00:23:36.036 ] 00:23:36.036 } 00:23:36.036 ] 00:23:36.036 } 00:23:36.036 [2024-11-26 11:33:54.240886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:36.036 [2024-11-26 11:33:54.241049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98138 ] 00:23:36.295 [2024-11-26 11:33:54.387959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.295 [2024-11-26 11:33:54.421162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.295  [2024-11-26T11:33:54.784Z] Copying: 48/48 [kB] (average 46 MBps) 00:23:36.554 00:23:36.554 11:33:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:36.554 11:33:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:23:36.554 11:33:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:36.554 11:33:54 -- dd/common.sh@11 -- # local nvme_ref= 00:23:36.554 11:33:54 -- dd/common.sh@12 -- # local size=49152 00:23:36.554 11:33:54 -- dd/common.sh@14 -- # local bs=1048576 00:23:36.554 11:33:54 -- dd/common.sh@15 -- # local count=1 00:23:36.554 11:33:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:36.554 11:33:54 -- dd/common.sh@18 -- # gen_conf 00:23:36.554 11:33:54 -- dd/common.sh@31 -- # xtrace_disable 00:23:36.554 11:33:54 -- common/autotest_common.sh@10 -- # set +x 00:23:36.554 { 00:23:36.554 "subsystems": [ 00:23:36.554 { 00:23:36.554 "subsystem": "bdev", 00:23:36.554 "config": [ 00:23:36.554 { 00:23:36.554 "params": { 00:23:36.554 "trtype": "pcie", 00:23:36.554 "traddr": "0000:00:06.0", 00:23:36.554 "name": "Nvme0" 00:23:36.554 }, 00:23:36.554 "method": "bdev_nvme_attach_controller" 00:23:36.554 }, 00:23:36.554 { 00:23:36.554 "method": "bdev_wait_for_examine" 00:23:36.554 } 00:23:36.554 ] 00:23:36.555 } 00:23:36.555 ] 00:23:36.555 } 00:23:36.555 [2024-11-26 
11:33:54.783476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:36.555 [2024-11-26 11:33:54.783679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98148 ] 00:23:36.814 [2024-11-26 11:33:54.949756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.814 [2024-11-26 11:33:54.987208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.073  [2024-11-26T11:33:55.303Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:37.073 00:23:37.073 00:23:37.073 real 0m13.339s 00:23:37.073 user 0m8.294s 00:23:37.073 sys 0m3.277s 00:23:37.073 ************************************ 00:23:37.073 END TEST dd_rw 00:23:37.073 ************************************ 00:23:37.073 11:33:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:37.073 11:33:55 -- common/autotest_common.sh@10 -- # set +x 00:23:37.333 11:33:55 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:23:37.333 11:33:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:37.333 11:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:37.333 11:33:55 -- common/autotest_common.sh@10 -- # set +x 00:23:37.333 ************************************ 00:23:37.333 START TEST dd_rw_offset 00:23:37.333 ************************************ 00:23:37.333 11:33:55 -- common/autotest_common.sh@1114 -- # basic_offset 00:23:37.333 11:33:55 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:23:37.333 11:33:55 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:23:37.333 11:33:55 -- dd/common.sh@98 -- # xtrace_disable 00:23:37.333 11:33:55 -- common/autotest_common.sh@10 -- # set +x 00:23:37.333 11:33:55 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:23:37.333 11:33:55 -- dd/basic_rw.sh@56 -- # 
data=531ksljl9zjigt3bl7ecrsjae4vglhqzd95mnybsnu5fn2wq8au0uw8ck21ri6s4570anneq1t25mue1xxooqsp96v4tj9573vgmd4ndariqakampwdkimmpdi5wn6ler5lbe0b4blr4se1kqpln687zlnw2912kn47b5khigfam19hi15e5bjt38hf37wry9ejbfz73soq5598iatt3cvxw9uu92mewvd4l4j9zryy28gp2ztfrt02odtgmrtxa7ped8jap068o4kh8jdljoltamzmnyd5797m3dschmvc037odfue849lk57bfxhl4jqo1phms7dj3nkkfwitpuzjlgl3cx86v4a7uou6fnuu3os21ajll8b4irspyqb2ujr96ejqe164rrvdmcmbodkgdtm2cw53fqmrkkqzaat15ad5d0020itogzuc6ice0ebvtzi56w21rkl555ye2sr83cd4oanj7ohz2em73zwwg2liu9f5363kjlb7zkv2d6gsbjuwrkel8d7j8oaztfick5ftj1ilqzxq6j4wzv6oy584cyk91f4xkk8kvmaynz9ldrzaf2iqg2613yooe835boya8e2g40b2i19djtcoca3r68qync8e84p5eo7jf27cx7ab5r0w0ea0igkrsw7advkdvr1p49l6dgok51yyyvahc46yuqcv8o8xx810dw05ishbgwig1b09002ewd1p7k6mr59fww2qxm6e7y8iujn8po7f9uimt1uxcpvb3lokp2ate370ykzr7izsm0hmynw6qb0xs54w6jheynsqd3t7cqh0fks1znwzrkssvcaafxdr1u5t3c4bekak7l0amgesnzs68d1g75zaokuvakw9urxlwewwkoua7h3yi6jhdkznagsya3g0lcx6h9op6882huhpchjx9to96unzbi4nw75afwmvw9n1a0mf2bk7jv1earisrjoyizhahwytakvou9nqhf4zpr0sd37scxmwfb3orz4umphgdlzrzefs0no12vljthpltbuxzadla58iarlb4nfbr9kpmzokmpayxmf02r9wb8da6irxqtq339lstk0u0dk3syxtgxg4k9ydg6ytwsfwcksekjshpqivpvys4i4zmalugqjdcnpt2kvegcs8t1cb91op3yh162dcznhctm60b2zmvp5n6ovl8nth45haasc2g1z994dnh1yezkvu8tl75r5nc99eb363zzixki4cs7iw503iy78txlehojbhe6wb7n2rq6kn546dq7k969h1mks588m2ceo0i5iyfaolyq67ts2ptriboql92bhak6hyj4pu10jcrg09wnr7jqm1g3fb0660qwu48cszpzpjby0d2cqrqej4ksdkyknv1u74f8tk10vqekozhsujhztzyogitdb6si9u9xly3olcqiruowbdgw4z6vd337bgsk5tlina6wump87q9p86rwjh99rfs27vk1pg5dttot5c9exw4bs3mt6lq9dlqcdhrzgqb0d7m4ew7i70u6o6yjzuhlfgvpk55r7ax5f8um0hb5j9fypla85xq3li87ej6tsumae00eztknasad76u6wjh6j2to91eyhu7xww561cyr4sm23z7wxi6ts25w8343hi5x8t6t11r7g4y75ox57phvgxppp8fhzbw85e5ytyqow6j6nfmsobraiol5rv6zy0a5to44s388xrvv7x2ohpkt7rwyfhxflqxna0xpivqs34ubsou9yvlwih6uuqg69sm2egqqz0jgdnnzcsy2jhxlmgcwqinbe6uzxtk8s1s113yz7pqxxu8nlo2bc2ekuh4mydqc7zkhsrk65u9mdro30xvr6atswgrskbldx3pxhvw9hqf91dql9p9kbcd99qw1eam2vj7aw7aq0j09ifd0su9rs59qedfgip3azq2sxsg7mlthyaudytc2l850vqloe9d6db4cnqj0d2vs247kkh7s5narhx4wdkdh8piqw2519b454zu32e7oeboesa8rqtld0kzxnbdu3ds7lyg7yz0ct1gmmkd85souhwevhknqqxhitimpp3ynk91eyue165y7ima4ogo43swqom56zuyc0u7gqyvkkbff02cbq6kgdj83onzjjrd1vtz7t6tetfehhhtclilw8gekt1jmqoiiuhnv8vh353amq2axwqlxm95e3i4z0hjnukvdhwitlkdys9wedvs4p5p8i9kzi4uzadu41dqghc07vcukbvq2ns69ti2kh9nyo8jbfc76pwdyzv4ty0ywwn4lg2m37cyhof6mf1hy6hvt7mciqd5wmj5un3ptb4p4n4on2t22v7gn3eba8wtydpdvftr30ywp9huvjmi94t2ymtwva1xmqybvzlaf36e821b71co4vo0ek791i77kun44xu5kh018uuo5vy1mi039dx0pebxtxsijgyayel475fc2q19w6bwbof36qwp4zpxwrd0ndz5vhewyu4q6obgm49qqfis93s0t75p3talliyi34qfnxx4fcjyx18oupecr805rdaftoa7pkbbej1igsn7o8hsg53xrxyraq3l64z1ekix0zit2rs6hju83loaftjcuu0q6vbvughik504hyaiz6v3lu0746rybh1l4v125ia4a37kr3o3kabk9r1et4vqzz97cld7tmlutefxkh7y3on71prnz65i0xvto1mtsrwxykdh8cztrn1bjc9xe6eumzo8ifo2wdqhfilbepcn6z10jz74ldbltxc592hzp0l99e2mrgfi93k8e0n7hu0p9bqh2ixtcs2ikld4upnauqezbfqttvic9rx8rr0vkk22eczxq785gv9n9z94yerbp544g9dpjq18ds9izaec8qkc77li8h6yupyks7eoltxvo4tvdvqrne2oy6ro59a0x3dz7qv3op5i1sczn0txyzqbg3zhgnxvkp2gx3s1l61biaydbhylbl958z8m96f36t1c5n10v0zzqz9r9yf6f6wax7s1zsapombyk03d42x8qek0seco8uon2640zgksf3m05oqh5iace83vuywuibm60bevwqz5ugw4nrkumhvyvfxxirk8ugiovx2mpfw1e5oiqhtu7xosr7hj19b6te7eufymuix9529pwojkwbhkp4xit25evxxc9d6fgzhl9luqsedicpld5z54qhhhbgx42e18uqqysennw1iip401ydh4rcu5xfoeqekgiqjr2lncnvo4ox2hcwllhs0rbjc0dkpz7z07hcuc9ae8mie6wni60gn01uyfme6ez5syyqzoz7v0ofq1567daj0mdbkyvbw8yeanizyvqtpc7um2ztb6j88ufle2b87g50vddbi11cjvinxpnmssvxjlco21ji5it22bnjn5yott3zerv2lvvnfi21yj6xhwnch6pm5uy2im6nadlszexed9haj1799971xrkdvhq9spl80c6u1132vktnuwn7j4y4cg3yeaqal5yhj222
o707mbt82675b7l93r46qvt59v5ucl4drgazxeb1sv5uhwz6xb0175y3d7cxnjnuth8981i7wca4defo172c56q0vz3kmi9k69tz1aoxp8wpph2ercmck4gx8cld2tdeao68vknw04box78tytejtrouorpy2dy1iiciw21ybnxcr7tvda4wms3yufc18iug12iurk5v9q29zwzlodtcwp67fsewdno1zaoo4m0f7qpw9jpe5mh3sa2qxrt2as7zwg5yj498l3lfzwyu3sc0bi7wbtmiemk56c97caiwzdxesmtcs12oip5whgvui9jyrpnswztmfe8v8h1hm9c2ndl599b620i49kg3ks0m3qcyp79kd5i69779hkrazbf9kmveml9o0dwg7rybbrayganlg8gp6m8uhknw349wdb8ilx3gczqy4wqqkdj6l5d3qf2ebqf95sb4mc5vxfmupu0fvge7ukinhjwvh3uib0ybaysy5d7p619m7lsb25jglnrwrvqrl9hq1kp8e2v9vzwj3d6t7qpdlk 00:23:37.333 11:33:55 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:23:37.333 11:33:55 -- dd/basic_rw.sh@59 -- # gen_conf 00:23:37.333 11:33:55 -- dd/common.sh@31 -- # xtrace_disable 00:23:37.333 11:33:55 -- common/autotest_common.sh@10 -- # set +x 00:23:37.333 { 00:23:37.333 "subsystems": [ 00:23:37.333 { 00:23:37.333 "subsystem": "bdev", 00:23:37.333 "config": [ 00:23:37.333 { 00:23:37.333 "params": { 00:23:37.333 "trtype": "pcie", 00:23:37.333 "traddr": "0000:00:06.0", 00:23:37.333 "name": "Nvme0" 00:23:37.333 }, 00:23:37.333 "method": "bdev_nvme_attach_controller" 00:23:37.333 }, 00:23:37.333 { 00:23:37.333 "method": "bdev_wait_for_examine" 00:23:37.333 } 00:23:37.333 ] 00:23:37.333 } 00:23:37.333 ] 00:23:37.333 } 00:23:37.333 [2024-11-26 11:33:55.467748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.333 [2024-11-26 11:33:55.467948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98179 ] 00:23:37.593 [2024-11-26 11:33:55.633974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.593 [2024-11-26 11:33:55.666178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.593  [2024-11-26T11:33:56.082Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:23:37.852 00:23:37.852 11:33:55 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:23:37.852 11:33:55 -- dd/basic_rw.sh@65 -- # gen_conf 00:23:37.852 11:33:55 -- dd/common.sh@31 -- # xtrace_disable 00:23:37.852 11:33:55 -- common/autotest_common.sh@10 -- # set +x 00:23:37.852 { 00:23:37.852 "subsystems": [ 00:23:37.852 { 00:23:37.852 "subsystem": "bdev", 00:23:37.852 "config": [ 00:23:37.852 { 00:23:37.852 "params": { 00:23:37.852 "trtype": "pcie", 00:23:37.852 "traddr": "0000:00:06.0", 00:23:37.852 "name": "Nvme0" 00:23:37.852 }, 00:23:37.852 "method": "bdev_nvme_attach_controller" 00:23:37.852 }, 00:23:37.852 { 00:23:37.852 "method": "bdev_wait_for_examine" 00:23:37.852 } 00:23:37.852 ] 00:23:37.852 } 00:23:37.852 ] 00:23:37.852 } 00:23:37.852 [2024-11-26 11:33:56.023978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
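dd_rw_offset checks spdk_dd's dd-style offset flags rather than throughput: --seek=N skips N output blocks before writing and --skip=N skips N input blocks before reading, so with count=seek=skip=1 the 4096-byte random payload lands at block offset 1 on the bdev and is read back from exactly that offset. The pair of calls, condensed (file names shortened from the log; the payload is apparently held both as the shell string data and as the contents of dump0):

# 4 KiB of random data, captured as a shell string and stored in dd.dump0
data=$(gen_bytes 4096)

# Write the payload one block past the start of the bdev...
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
# ...and read one block from the same offset back out for comparison.
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)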
00:23:37.852 [2024-11-26 11:33:56.024152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98192 ] 00:23:38.112 [2024-11-26 11:33:56.187325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.112 [2024-11-26 11:33:56.226914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.112  [2024-11-26T11:33:56.602Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:23:38.372 00:23:38.372 11:33:56 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:23:38.373 11:33:56 -- dd/basic_rw.sh@72 -- # [[ 531ksljl9zjigt3bl7ecrsjae4vglhqzd95mnybsnu5fn2wq8au0uw8ck21ri6s4570anneq1t25mue1xxooqsp96v4tj9573vgmd4ndariqakampwdkimmpdi5wn6ler5lbe0b4blr4se1kqpln687zlnw2912kn47b5khigfam19hi15e5bjt38hf37wry9ejbfz73soq5598iatt3cvxw9uu92mewvd4l4j9zryy28gp2ztfrt02odtgmrtxa7ped8jap068o4kh8jdljoltamzmnyd5797m3dschmvc037odfue849lk57bfxhl4jqo1phms7dj3nkkfwitpuzjlgl3cx86v4a7uou6fnuu3os21ajll8b4irspyqb2ujr96ejqe164rrvdmcmbodkgdtm2cw53fqmrkkqzaat15ad5d0020itogzuc6ice0ebvtzi56w21rkl555ye2sr83cd4oanj7ohz2em73zwwg2liu9f5363kjlb7zkv2d6gsbjuwrkel8d7j8oaztfick5ftj1ilqzxq6j4wzv6oy584cyk91f4xkk8kvmaynz9ldrzaf2iqg2613yooe835boya8e2g40b2i19djtcoca3r68qync8e84p5eo7jf27cx7ab5r0w0ea0igkrsw7advkdvr1p49l6dgok51yyyvahc46yuqcv8o8xx810dw05ishbgwig1b09002ewd1p7k6mr59fww2qxm6e7y8iujn8po7f9uimt1uxcpvb3lokp2ate370ykzr7izsm0hmynw6qb0xs54w6jheynsqd3t7cqh0fks1znwzrkssvcaafxdr1u5t3c4bekak7l0amgesnzs68d1g75zaokuvakw9urxlwewwkoua7h3yi6jhdkznagsya3g0lcx6h9op6882huhpchjx9to96unzbi4nw75afwmvw9n1a0mf2bk7jv1earisrjoyizhahwytakvou9nqhf4zpr0sd37scxmwfb3orz4umphgdlzrzefs0no12vljthpltbuxzadla58iarlb4nfbr9kpmzokmpayxmf02r9wb8da6irxqtq339lstk0u0dk3syxtgxg4k9ydg6ytwsfwcksekjshpqivpvys4i4zmalugqjdcnpt2kvegcs8t1cb91op3yh162dcznhctm60b2zmvp5n6ovl8nth45haasc2g1z994dnh1yezkvu8tl75r5nc99eb363zzixki4cs7iw503iy78txlehojbhe6wb7n2rq6kn546dq7k969h1mks588m2ceo0i5iyfaolyq67ts2ptriboql92bhak6hyj4pu10jcrg09wnr7jqm1g3fb0660qwu48cszpzpjby0d2cqrqej4ksdkyknv1u74f8tk10vqekozhsujhztzyogitdb6si9u9xly3olcqiruowbdgw4z6vd337bgsk5tlina6wump87q9p86rwjh99rfs27vk1pg5dttot5c9exw4bs3mt6lq9dlqcdhrzgqb0d7m4ew7i70u6o6yjzuhlfgvpk55r7ax5f8um0hb5j9fypla85xq3li87ej6tsumae00eztknasad76u6wjh6j2to91eyhu7xww561cyr4sm23z7wxi6ts25w8343hi5x8t6t11r7g4y75ox57phvgxppp8fhzbw85e5ytyqow6j6nfmsobraiol5rv6zy0a5to44s388xrvv7x2ohpkt7rwyfhxflqxna0xpivqs34ubsou9yvlwih6uuqg69sm2egqqz0jgdnnzcsy2jhxlmgcwqinbe6uzxtk8s1s113yz7pqxxu8nlo2bc2ekuh4mydqc7zkhsrk65u9mdro30xvr6atswgrskbldx3pxhvw9hqf91dql9p9kbcd99qw1eam2vj7aw7aq0j09ifd0su9rs59qedfgip3azq2sxsg7mlthyaudytc2l850vqloe9d6db4cnqj0d2vs247kkh7s5narhx4wdkdh8piqw2519b454zu32e7oeboesa8rqtld0kzxnbdu3ds7lyg7yz0ct1gmmkd85souhwevhknqqxhitimpp3ynk91eyue165y7ima4ogo43swqom56zuyc0u7gqyvkkbff02cbq6kgdj83onzjjrd1vtz7t6tetfehhhtclilw8gekt1jmqoiiuhnv8vh353amq2axwqlxm95e3i4z0hjnukvdhwitlkdys9wedvs4p5p8i9kzi4uzadu41dqghc07vcukbvq2ns69ti2kh9nyo8jbfc76pwdyzv4ty0ywwn4lg2m37cyhof6mf1hy6hvt7mciqd5wmj5un3ptb4p4n4on2t22v7gn3eba8wtydpdvftr30ywp9huvjmi94t2ymtwva1xmqybvzlaf36e821b71co4vo0ek791i77kun44xu5kh018uuo5vy1mi039dx0pebxtxsijgyayel475fc2q19w6bwbof36qwp4zpxwrd0ndz5vhewyu4q6obgm49qqfis93s0t75p3talliyi34qfnxx4fcjyx18oupecr805rdaftoa7pkbbej1igsn7o8hsg53xrxyraq3l64z1ekix0zit2rs6hju83loaftjcuu0q6vbvughik504hyaiz6v3lu0746rybh1l4v125ia4a37kr3o3kabk9r1et4vqzz97cld7tmlutefxkh7y3on71prnz65i0xvto1mtsrwxykdh8cztrn1bjc9xe6eumzo8ifo2wdqhfilbepcn6z10jz74ldbltxc592hzp0l99e2mrgfi93k8e0n7hu0p9bqh2i
xtcs2ikld4upnauqezbfqttvic9rx8rr0vkk22eczxq785gv9n9z94yerbp544g9dpjq18ds9izaec8qkc77li8h6yupyks7eoltxvo4tvdvqrne2oy6ro59a0x3dz7qv3op5i1sczn0txyzqbg3zhgnxvkp2gx3s1l61biaydbhylbl958z8m96f36t1c5n10v0zzqz9r9yf6f6wax7s1zsapombyk03d42x8qek0seco8uon2640zgksf3m05oqh5iace83vuywuibm60bevwqz5ugw4nrkumhvyvfxxirk8ugiovx2mpfw1e5oiqhtu7xosr7hj19b6te7eufymuix9529pwojkwbhkp4xit25evxxc9d6fgzhl9luqsedicpld5z54qhhhbgx42e18uqqysennw1iip401ydh4rcu5xfoeqekgiqjr2lncnvo4ox2hcwllhs0rbjc0dkpz7z07hcuc9ae8mie6wni60gn01uyfme6ez5syyqzoz7v0ofq1567daj0mdbkyvbw8yeanizyvqtpc7um2ztb6j88ufle2b87g50vddbi11cjvinxpnmssvxjlco21ji5it22bnjn5yott3zerv2lvvnfi21yj6xhwnch6pm5uy2im6nadlszexed9haj1799971xrkdvhq9spl80c6u1132vktnuwn7j4y4cg3yeaqal5yhj222o707mbt82675b7l93r46qvt59v5ucl4drgazxeb1sv5uhwz6xb0175y3d7cxnjnuth8981i7wca4defo172c56q0vz3kmi9k69tz1aoxp8wpph2ercmck4gx8cld2tdeao68vknw04box78tytejtrouorpy2dy1iiciw21ybnxcr7tvda4wms3yufc18iug12iurk5v9q29zwzlodtcwp67fsewdno1zaoo4m0f7qpw9jpe5mh3sa2qxrt2as7zwg5yj498l3lfzwyu3sc0bi7wbtmiemk56c97caiwzdxesmtcs12oip5whgvui9jyrpnswztmfe8v8h1hm9c2ndl599b620i49kg3ks0m3qcyp79kd5i69779hkrazbf9kmveml9o0dwg7rybbrayganlg8gp6m8uhknw349wdb8ilx3gczqy4wqqkdj6l5d3qf2ebqf95sb4mc5vxfmupu0fvge7ukinhjwvh3uib0ybaysy5d7p619m7lsb25jglnrwrvqrl9hq1kp8e2v9vzwj3d6t7qpdlk == \5\3\1\k\s\l\j\l\9\z\j\i\g\t\3\b\l\7\e\c\r\s\j\a\e\4\v\g\l\h\q\z\d\9\5\m\n\y\b\s\n\u\5\f\n\2\w\q\8\a\u\0\u\w\8\c\k\2\1\r\i\6\s\4\5\7\0\a\n\n\e\q\1\t\2\5\m\u\e\1\x\x\o\o\q\s\p\9\6\v\4\t\j\9\5\7\3\v\g\m\d\4\n\d\a\r\i\q\a\k\a\m\p\w\d\k\i\m\m\p\d\i\5\w\n\6\l\e\r\5\l\b\e\0\b\4\b\l\r\4\s\e\1\k\q\p\l\n\6\8\7\z\l\n\w\2\9\1\2\k\n\4\7\b\5\k\h\i\g\f\a\m\1\9\h\i\1\5\e\5\b\j\t\3\8\h\f\3\7\w\r\y\9\e\j\b\f\z\7\3\s\o\q\5\5\9\8\i\a\t\t\3\c\v\x\w\9\u\u\9\2\m\e\w\v\d\4\l\4\j\9\z\r\y\y\2\8\g\p\2\z\t\f\r\t\0\2\o\d\t\g\m\r\t\x\a\7\p\e\d\8\j\a\p\0\6\8\o\4\k\h\8\j\d\l\j\o\l\t\a\m\z\m\n\y\d\5\7\9\7\m\3\d\s\c\h\m\v\c\0\3\7\o\d\f\u\e\8\4\9\l\k\5\7\b\f\x\h\l\4\j\q\o\1\p\h\m\s\7\d\j\3\n\k\k\f\w\i\t\p\u\z\j\l\g\l\3\c\x\8\6\v\4\a\7\u\o\u\6\f\n\u\u\3\o\s\2\1\a\j\l\l\8\b\4\i\r\s\p\y\q\b\2\u\j\r\9\6\e\j\q\e\1\6\4\r\r\v\d\m\c\m\b\o\d\k\g\d\t\m\2\c\w\5\3\f\q\m\r\k\k\q\z\a\a\t\1\5\a\d\5\d\0\0\2\0\i\t\o\g\z\u\c\6\i\c\e\0\e\b\v\t\z\i\5\6\w\2\1\r\k\l\5\5\5\y\e\2\s\r\8\3\c\d\4\o\a\n\j\7\o\h\z\2\e\m\7\3\z\w\w\g\2\l\i\u\9\f\5\3\6\3\k\j\l\b\7\z\k\v\2\d\6\g\s\b\j\u\w\r\k\e\l\8\d\7\j\8\o\a\z\t\f\i\c\k\5\f\t\j\1\i\l\q\z\x\q\6\j\4\w\z\v\6\o\y\5\8\4\c\y\k\9\1\f\4\x\k\k\8\k\v\m\a\y\n\z\9\l\d\r\z\a\f\2\i\q\g\2\6\1\3\y\o\o\e\8\3\5\b\o\y\a\8\e\2\g\4\0\b\2\i\1\9\d\j\t\c\o\c\a\3\r\6\8\q\y\n\c\8\e\8\4\p\5\e\o\7\j\f\2\7\c\x\7\a\b\5\r\0\w\0\e\a\0\i\g\k\r\s\w\7\a\d\v\k\d\v\r\1\p\4\9\l\6\d\g\o\k\5\1\y\y\y\v\a\h\c\4\6\y\u\q\c\v\8\o\8\x\x\8\1\0\d\w\0\5\i\s\h\b\g\w\i\g\1\b\0\9\0\0\2\e\w\d\1\p\7\k\6\m\r\5\9\f\w\w\2\q\x\m\6\e\7\y\8\i\u\j\n\8\p\o\7\f\9\u\i\m\t\1\u\x\c\p\v\b\3\l\o\k\p\2\a\t\e\3\7\0\y\k\z\r\7\i\z\s\m\0\h\m\y\n\w\6\q\b\0\x\s\5\4\w\6\j\h\e\y\n\s\q\d\3\t\7\c\q\h\0\f\k\s\1\z\n\w\z\r\k\s\s\v\c\a\a\f\x\d\r\1\u\5\t\3\c\4\b\e\k\a\k\7\l\0\a\m\g\e\s\n\z\s\6\8\d\1\g\7\5\z\a\o\k\u\v\a\k\w\9\u\r\x\l\w\e\w\w\k\o\u\a\7\h\3\y\i\6\j\h\d\k\z\n\a\g\s\y\a\3\g\0\l\c\x\6\h\9\o\p\6\8\8\2\h\u\h\p\c\h\j\x\9\t\o\9\6\u\n\z\b\i\4\n\w\7\5\a\f\w\m\v\w\9\n\1\a\0\m\f\2\b\k\7\j\v\1\e\a\r\i\s\r\j\o\y\i\z\h\a\h\w\y\t\a\k\v\o\u\9\n\q\h\f\4\z\p\r\0\s\d\3\7\s\c\x\m\w\f\b\3\o\r\z\4\u\m\p\h\g\d\l\z\r\z\e\f\s\0\n\o\1\2\v\l\j\t\h\p\l\t\b\u\x\z\a\d\l\a\5\8\i\a\r\l\b\4\n\f\b\r\9\k\p\m\z\o\k\m\p\a\y\x\m\f\0\2\r\9\w\b\8\d\a\6\i\r\x\q\t\q\3\3\9\l\s\t\k\0\u\0\d\k\3\s\y\x\t\g\x\g\4\k\9\y\d\g\6\y\t\w\s\f\w\c\k\s\e\k\j\s\h\p\q\i\v\p\v\y\s\4\i\4\z\m\a\l\
u\g\q\j\d\c\n\p\t\2\k\v\e\g\c\s\8\t\1\c\b\9\1\o\p\3\y\h\1\6\2\d\c\z\n\h\c\t\m\6\0\b\2\z\m\v\p\5\n\6\o\v\l\8\n\t\h\4\5\h\a\a\s\c\2\g\1\z\9\9\4\d\n\h\1\y\e\z\k\v\u\8\t\l\7\5\r\5\n\c\9\9\e\b\3\6\3\z\z\i\x\k\i\4\c\s\7\i\w\5\0\3\i\y\7\8\t\x\l\e\h\o\j\b\h\e\6\w\b\7\n\2\r\q\6\k\n\5\4\6\d\q\7\k\9\6\9\h\1\m\k\s\5\8\8\m\2\c\e\o\0\i\5\i\y\f\a\o\l\y\q\6\7\t\s\2\p\t\r\i\b\o\q\l\9\2\b\h\a\k\6\h\y\j\4\p\u\1\0\j\c\r\g\0\9\w\n\r\7\j\q\m\1\g\3\f\b\0\6\6\0\q\w\u\4\8\c\s\z\p\z\p\j\b\y\0\d\2\c\q\r\q\e\j\4\k\s\d\k\y\k\n\v\1\u\7\4\f\8\t\k\1\0\v\q\e\k\o\z\h\s\u\j\h\z\t\z\y\o\g\i\t\d\b\6\s\i\9\u\9\x\l\y\3\o\l\c\q\i\r\u\o\w\b\d\g\w\4\z\6\v\d\3\3\7\b\g\s\k\5\t\l\i\n\a\6\w\u\m\p\8\7\q\9\p\8\6\r\w\j\h\9\9\r\f\s\2\7\v\k\1\p\g\5\d\t\t\o\t\5\c\9\e\x\w\4\b\s\3\m\t\6\l\q\9\d\l\q\c\d\h\r\z\g\q\b\0\d\7\m\4\e\w\7\i\7\0\u\6\o\6\y\j\z\u\h\l\f\g\v\p\k\5\5\r\7\a\x\5\f\8\u\m\0\h\b\5\j\9\f\y\p\l\a\8\5\x\q\3\l\i\8\7\e\j\6\t\s\u\m\a\e\0\0\e\z\t\k\n\a\s\a\d\7\6\u\6\w\j\h\6\j\2\t\o\9\1\e\y\h\u\7\x\w\w\5\6\1\c\y\r\4\s\m\2\3\z\7\w\x\i\6\t\s\2\5\w\8\3\4\3\h\i\5\x\8\t\6\t\1\1\r\7\g\4\y\7\5\o\x\5\7\p\h\v\g\x\p\p\p\8\f\h\z\b\w\8\5\e\5\y\t\y\q\o\w\6\j\6\n\f\m\s\o\b\r\a\i\o\l\5\r\v\6\z\y\0\a\5\t\o\4\4\s\3\8\8\x\r\v\v\7\x\2\o\h\p\k\t\7\r\w\y\f\h\x\f\l\q\x\n\a\0\x\p\i\v\q\s\3\4\u\b\s\o\u\9\y\v\l\w\i\h\6\u\u\q\g\6\9\s\m\2\e\g\q\q\z\0\j\g\d\n\n\z\c\s\y\2\j\h\x\l\m\g\c\w\q\i\n\b\e\6\u\z\x\t\k\8\s\1\s\1\1\3\y\z\7\p\q\x\x\u\8\n\l\o\2\b\c\2\e\k\u\h\4\m\y\d\q\c\7\z\k\h\s\r\k\6\5\u\9\m\d\r\o\3\0\x\v\r\6\a\t\s\w\g\r\s\k\b\l\d\x\3\p\x\h\v\w\9\h\q\f\9\1\d\q\l\9\p\9\k\b\c\d\9\9\q\w\1\e\a\m\2\v\j\7\a\w\7\a\q\0\j\0\9\i\f\d\0\s\u\9\r\s\5\9\q\e\d\f\g\i\p\3\a\z\q\2\s\x\s\g\7\m\l\t\h\y\a\u\d\y\t\c\2\l\8\5\0\v\q\l\o\e\9\d\6\d\b\4\c\n\q\j\0\d\2\v\s\2\4\7\k\k\h\7\s\5\n\a\r\h\x\4\w\d\k\d\h\8\p\i\q\w\2\5\1\9\b\4\5\4\z\u\3\2\e\7\o\e\b\o\e\s\a\8\r\q\t\l\d\0\k\z\x\n\b\d\u\3\d\s\7\l\y\g\7\y\z\0\c\t\1\g\m\m\k\d\8\5\s\o\u\h\w\e\v\h\k\n\q\q\x\h\i\t\i\m\p\p\3\y\n\k\9\1\e\y\u\e\1\6\5\y\7\i\m\a\4\o\g\o\4\3\s\w\q\o\m\5\6\z\u\y\c\0\u\7\g\q\y\v\k\k\b\f\f\0\2\c\b\q\6\k\g\d\j\8\3\o\n\z\j\j\r\d\1\v\t\z\7\t\6\t\e\t\f\e\h\h\h\t\c\l\i\l\w\8\g\e\k\t\1\j\m\q\o\i\i\u\h\n\v\8\v\h\3\5\3\a\m\q\2\a\x\w\q\l\x\m\9\5\e\3\i\4\z\0\h\j\n\u\k\v\d\h\w\i\t\l\k\d\y\s\9\w\e\d\v\s\4\p\5\p\8\i\9\k\z\i\4\u\z\a\d\u\4\1\d\q\g\h\c\0\7\v\c\u\k\b\v\q\2\n\s\6\9\t\i\2\k\h\9\n\y\o\8\j\b\f\c\7\6\p\w\d\y\z\v\4\t\y\0\y\w\w\n\4\l\g\2\m\3\7\c\y\h\o\f\6\m\f\1\h\y\6\h\v\t\7\m\c\i\q\d\5\w\m\j\5\u\n\3\p\t\b\4\p\4\n\4\o\n\2\t\2\2\v\7\g\n\3\e\b\a\8\w\t\y\d\p\d\v\f\t\r\3\0\y\w\p\9\h\u\v\j\m\i\9\4\t\2\y\m\t\w\v\a\1\x\m\q\y\b\v\z\l\a\f\3\6\e\8\2\1\b\7\1\c\o\4\v\o\0\e\k\7\9\1\i\7\7\k\u\n\4\4\x\u\5\k\h\0\1\8\u\u\o\5\v\y\1\m\i\0\3\9\d\x\0\p\e\b\x\t\x\s\i\j\g\y\a\y\e\l\4\7\5\f\c\2\q\1\9\w\6\b\w\b\o\f\3\6\q\w\p\4\z\p\x\w\r\d\0\n\d\z\5\v\h\e\w\y\u\4\q\6\o\b\g\m\4\9\q\q\f\i\s\9\3\s\0\t\7\5\p\3\t\a\l\l\i\y\i\3\4\q\f\n\x\x\4\f\c\j\y\x\1\8\o\u\p\e\c\r\8\0\5\r\d\a\f\t\o\a\7\p\k\b\b\e\j\1\i\g\s\n\7\o\8\h\s\g\5\3\x\r\x\y\r\a\q\3\l\6\4\z\1\e\k\i\x\0\z\i\t\2\r\s\6\h\j\u\8\3\l\o\a\f\t\j\c\u\u\0\q\6\v\b\v\u\g\h\i\k\5\0\4\h\y\a\i\z\6\v\3\l\u\0\7\4\6\r\y\b\h\1\l\4\v\1\2\5\i\a\4\a\3\7\k\r\3\o\3\k\a\b\k\9\r\1\e\t\4\v\q\z\z\9\7\c\l\d\7\t\m\l\u\t\e\f\x\k\h\7\y\3\o\n\7\1\p\r\n\z\6\5\i\0\x\v\t\o\1\m\t\s\r\w\x\y\k\d\h\8\c\z\t\r\n\1\b\j\c\9\x\e\6\e\u\m\z\o\8\i\f\o\2\w\d\q\h\f\i\l\b\e\p\c\n\6\z\1\0\j\z\7\4\l\d\b\l\t\x\c\5\9\2\h\z\p\0\l\9\9\e\2\m\r\g\f\i\9\3\k\8\e\0\n\7\h\u\0\p\9\b\q\h\2\i\x\t\c\s\2\i\k\l\d\4\u\p\n\a\u\q\e\z\b\f\q\t\t\v\i\c\9\r\x\8\r\r\0\v\k\k\2\2\e\c\z\x\q\7\8\5\g\v\9\n\9\z\9\4\y\e\r\b\p\5\4\4\g\9\d\p\j\q\1\8\d\s\9\i\z\a\e\c\8\q\k\c\7\7\l\i
\8\h\6\y\u\p\y\k\s\7\e\o\l\t\x\v\o\4\t\v\d\v\q\r\n\e\2\o\y\6\r\o\5\9\a\0\x\3\d\z\7\q\v\3\o\p\5\i\1\s\c\z\n\0\t\x\y\z\q\b\g\3\z\h\g\n\x\v\k\p\2\g\x\3\s\1\l\6\1\b\i\a\y\d\b\h\y\l\b\l\9\5\8\z\8\m\9\6\f\3\6\t\1\c\5\n\1\0\v\0\z\z\q\z\9\r\9\y\f\6\f\6\w\a\x\7\s\1\z\s\a\p\o\m\b\y\k\0\3\d\4\2\x\8\q\e\k\0\s\e\c\o\8\u\o\n\2\6\4\0\z\g\k\s\f\3\m\0\5\o\q\h\5\i\a\c\e\8\3\v\u\y\w\u\i\b\m\6\0\b\e\v\w\q\z\5\u\g\w\4\n\r\k\u\m\h\v\y\v\f\x\x\i\r\k\8\u\g\i\o\v\x\2\m\p\f\w\1\e\5\o\i\q\h\t\u\7\x\o\s\r\7\h\j\1\9\b\6\t\e\7\e\u\f\y\m\u\i\x\9\5\2\9\p\w\o\j\k\w\b\h\k\p\4\x\i\t\2\5\e\v\x\x\c\9\d\6\f\g\z\h\l\9\l\u\q\s\e\d\i\c\p\l\d\5\z\5\4\q\h\h\h\b\g\x\4\2\e\1\8\u\q\q\y\s\e\n\n\w\1\i\i\p\4\0\1\y\d\h\4\r\c\u\5\x\f\o\e\q\e\k\g\i\q\j\r\2\l\n\c\n\v\o\4\o\x\2\h\c\w\l\l\h\s\0\r\b\j\c\0\d\k\p\z\7\z\0\7\h\c\u\c\9\a\e\8\m\i\e\6\w\n\i\6\0\g\n\0\1\u\y\f\m\e\6\e\z\5\s\y\y\q\z\o\z\7\v\0\o\f\q\1\5\6\7\d\a\j\0\m\d\b\k\y\v\b\w\8\y\e\a\n\i\z\y\v\q\t\p\c\7\u\m\2\z\t\b\6\j\8\8\u\f\l\e\2\b\8\7\g\5\0\v\d\d\b\i\1\1\c\j\v\i\n\x\p\n\m\s\s\v\x\j\l\c\o\2\1\j\i\5\i\t\2\2\b\n\j\n\5\y\o\t\t\3\z\e\r\v\2\l\v\v\n\f\i\2\1\y\j\6\x\h\w\n\c\h\6\p\m\5\u\y\2\i\m\6\n\a\d\l\s\z\e\x\e\d\9\h\a\j\1\7\9\9\9\7\1\x\r\k\d\v\h\q\9\s\p\l\8\0\c\6\u\1\1\3\2\v\k\t\n\u\w\n\7\j\4\y\4\c\g\3\y\e\a\q\a\l\5\y\h\j\2\2\2\o\7\0\7\m\b\t\8\2\6\7\5\b\7\l\9\3\r\4\6\q\v\t\5\9\v\5\u\c\l\4\d\r\g\a\z\x\e\b\1\s\v\5\u\h\w\z\6\x\b\0\1\7\5\y\3\d\7\c\x\n\j\n\u\t\h\8\9\8\1\i\7\w\c\a\4\d\e\f\o\1\7\2\c\5\6\q\0\v\z\3\k\m\i\9\k\6\9\t\z\1\a\o\x\p\8\w\p\p\h\2\e\r\c\m\c\k\4\g\x\8\c\l\d\2\t\d\e\a\o\6\8\v\k\n\w\0\4\b\o\x\7\8\t\y\t\e\j\t\r\o\u\o\r\p\y\2\d\y\1\i\i\c\i\w\2\1\y\b\n\x\c\r\7\t\v\d\a\4\w\m\s\3\y\u\f\c\1\8\i\u\g\1\2\i\u\r\k\5\v\9\q\2\9\z\w\z\l\o\d\t\c\w\p\6\7\f\s\e\w\d\n\o\1\z\a\o\o\4\m\0\f\7\q\p\w\9\j\p\e\5\m\h\3\s\a\2\q\x\r\t\2\a\s\7\z\w\g\5\y\j\4\9\8\l\3\l\f\z\w\y\u\3\s\c\0\b\i\7\w\b\t\m\i\e\m\k\5\6\c\9\7\c\a\i\w\z\d\x\e\s\m\t\c\s\1\2\o\i\p\5\w\h\g\v\u\i\9\j\y\r\p\n\s\w\z\t\m\f\e\8\v\8\h\1\h\m\9\c\2\n\d\l\5\9\9\b\6\2\0\i\4\9\k\g\3\k\s\0\m\3\q\c\y\p\7\9\k\d\5\i\6\9\7\7\9\h\k\r\a\z\b\f\9\k\m\v\e\m\l\9\o\0\d\w\g\7\r\y\b\b\r\a\y\g\a\n\l\g\8\g\p\6\m\8\u\h\k\n\w\3\4\9\w\d\b\8\i\l\x\3\g\c\z\q\y\4\w\q\q\k\d\j\6\l\5\d\3\q\f\2\e\b\q\f\9\5\s\b\4\m\c\5\v\x\f\m\u\p\u\0\f\v\g\e\7\u\k\i\n\h\j\w\v\h\3\u\i\b\0\y\b\a\y\s\y\5\d\7\p\6\1\9\m\7\l\s\b\2\5\j\g\l\n\r\w\r\v\q\r\l\9\h\q\1\k\p\8\e\2\v\9\v\z\w\j\3\d\6\t\7\q\p\d\l\k ]] 00:23:38.373 00:23:38.373 real 0m1.187s 00:23:38.373 user 0m0.648s 00:23:38.373 sys 0m0.339s 00:23:38.373 ************************************ 00:23:38.373 END TEST dd_rw_offset 00:23:38.373 ************************************ 00:23:38.373 11:33:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:38.373 11:33:56 -- common/autotest_common.sh@10 -- # set +x 00:23:38.373 11:33:56 -- dd/basic_rw.sh@1 -- # cleanup 00:23:38.373 11:33:56 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:23:38.373 11:33:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:38.373 11:33:56 -- dd/common.sh@11 -- # local nvme_ref= 00:23:38.373 11:33:56 -- dd/common.sh@12 -- # local size=0xffff 00:23:38.373 11:33:56 -- dd/common.sh@14 -- # local bs=1048576 00:23:38.373 11:33:56 -- dd/common.sh@15 -- # local count=1 00:23:38.373 11:33:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:38.373 11:33:56 -- dd/common.sh@18 -- # gen_conf 00:23:38.373 11:33:56 -- dd/common.sh@31 -- # xtrace_disable 00:23:38.373 11:33:56 -- common/autotest_common.sh@10 -- # set +x 00:23:38.632 { 00:23:38.632 "subsystems": [ 00:23:38.632 
{ 00:23:38.632 "subsystem": "bdev", 00:23:38.632 "config": [ 00:23:38.632 { 00:23:38.632 "params": { 00:23:38.632 "trtype": "pcie", 00:23:38.632 "traddr": "0000:00:06.0", 00:23:38.632 "name": "Nvme0" 00:23:38.632 }, 00:23:38.632 "method": "bdev_nvme_attach_controller" 00:23:38.632 }, 00:23:38.632 { 00:23:38.632 "method": "bdev_wait_for_examine" 00:23:38.632 } 00:23:38.632 ] 00:23:38.632 } 00:23:38.632 ] 00:23:38.632 } 00:23:38.632 [2024-11-26 11:33:56.650531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:38.632 [2024-11-26 11:33:56.650702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98221 ] 00:23:38.632 [2024-11-26 11:33:56.815136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.633 [2024-11-26 11:33:56.847943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.891  [2024-11-26T11:33:57.381Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:39.151 00:23:39.151 11:33:57 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:39.151 00:23:39.151 real 0m16.309s 00:23:39.151 user 0m9.837s 00:23:39.151 sys 0m4.238s 00:23:39.151 11:33:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:39.151 ************************************ 00:23:39.151 END TEST spdk_dd_basic_rw 00:23:39.151 ************************************ 00:23:39.151 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.151 11:33:57 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:23:39.151 11:33:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:39.151 11:33:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.151 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.151 ************************************ 00:23:39.151 START TEST spdk_dd_posix 00:23:39.151 ************************************ 00:23:39.151 11:33:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:23:39.151 * Looking for test storage... 
00:23:39.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:39.151 11:33:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:39.151 11:33:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:39.151 11:33:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:39.151 11:33:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:39.151 11:33:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:39.151 11:33:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:39.151 11:33:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:39.151 11:33:57 -- scripts/common.sh@335 -- # IFS=.-: 00:23:39.151 11:33:57 -- scripts/common.sh@335 -- # read -ra ver1 00:23:39.151 11:33:57 -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.151 11:33:57 -- scripts/common.sh@336 -- # read -ra ver2 00:23:39.151 11:33:57 -- scripts/common.sh@337 -- # local 'op=<' 00:23:39.151 11:33:57 -- scripts/common.sh@339 -- # ver1_l=2 00:23:39.151 11:33:57 -- scripts/common.sh@340 -- # ver2_l=1 00:23:39.151 11:33:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:39.151 11:33:57 -- scripts/common.sh@343 -- # case "$op" in 00:23:39.151 11:33:57 -- scripts/common.sh@344 -- # : 1 00:23:39.151 11:33:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:39.151 11:33:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.151 11:33:57 -- scripts/common.sh@364 -- # decimal 1 00:23:39.151 11:33:57 -- scripts/common.sh@352 -- # local d=1 00:23:39.151 11:33:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.151 11:33:57 -- scripts/common.sh@354 -- # echo 1 00:23:39.151 11:33:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:39.151 11:33:57 -- scripts/common.sh@365 -- # decimal 2 00:23:39.151 11:33:57 -- scripts/common.sh@352 -- # local d=2 00:23:39.151 11:33:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.151 11:33:57 -- scripts/common.sh@354 -- # echo 2 00:23:39.151 11:33:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:39.151 11:33:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:39.151 11:33:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:39.151 11:33:57 -- scripts/common.sh@367 -- # return 0 00:23:39.151 11:33:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.151 11:33:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.151 --rc genhtml_branch_coverage=1 00:23:39.151 --rc genhtml_function_coverage=1 00:23:39.151 --rc genhtml_legend=1 00:23:39.151 --rc geninfo_all_blocks=1 00:23:39.151 --rc geninfo_unexecuted_blocks=1 00:23:39.151 00:23:39.151 ' 00:23:39.151 11:33:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.151 --rc genhtml_branch_coverage=1 00:23:39.151 --rc genhtml_function_coverage=1 00:23:39.151 --rc genhtml_legend=1 00:23:39.151 --rc geninfo_all_blocks=1 00:23:39.151 --rc geninfo_unexecuted_blocks=1 00:23:39.151 00:23:39.151 ' 00:23:39.151 11:33:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.151 --rc genhtml_branch_coverage=1 00:23:39.151 --rc genhtml_function_coverage=1 00:23:39.151 --rc genhtml_legend=1 00:23:39.151 --rc geninfo_all_blocks=1 00:23:39.151 --rc geninfo_unexecuted_blocks=1 00:23:39.151 00:23:39.151 ' 00:23:39.151 11:33:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:39.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.151 --rc genhtml_branch_coverage=1 00:23:39.151 --rc genhtml_function_coverage=1 00:23:39.151 --rc genhtml_legend=1 00:23:39.151 --rc geninfo_all_blocks=1 00:23:39.151 --rc geninfo_unexecuted_blocks=1 00:23:39.151 00:23:39.151 ' 00:23:39.151 11:33:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.151 11:33:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.152 11:33:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.152 11:33:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.152 11:33:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:39.152 11:33:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:39.152 11:33:57 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:39.152 11:33:57 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:39.152 11:33:57 -- paths/export.sh@6 -- # export PATH 00:23:39.152 11:33:57 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:39.152 11:33:57 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:23:39.152 11:33:57 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:23:39.152 11:33:57 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:23:39.152 11:33:57 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:23:39.152 11:33:57 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:39.152 11:33:57 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:39.152 11:33:57 -- dd/posix.sh@130 -- # tests 00:23:39.152 11:33:57 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:23:39.152 * First test run, liburing in use 00:23:39.152 11:33:57 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:23:39.152 11:33:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:39.152 11:33:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.152 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.411 ************************************ 00:23:39.411 START TEST dd_flag_append 00:23:39.411 ************************************ 00:23:39.411 11:33:57 -- common/autotest_common.sh@1114 -- # append 00:23:39.411 11:33:57 -- dd/posix.sh@16 -- # local dump0 00:23:39.411 11:33:57 -- dd/posix.sh@17 -- # local dump1 00:23:39.411 11:33:57 -- dd/posix.sh@19 -- # gen_bytes 32 00:23:39.411 11:33:57 -- dd/common.sh@98 -- # xtrace_disable 00:23:39.411 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.411 11:33:57 -- dd/posix.sh@19 -- # dump0=bbkl4857ygotvg5q1f2c3symc3y93y9u 00:23:39.411 11:33:57 -- dd/posix.sh@20 -- # gen_bytes 32 00:23:39.411 11:33:57 -- dd/common.sh@98 -- # xtrace_disable 00:23:39.411 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.411 11:33:57 -- dd/posix.sh@20 -- # dump1=wy81mxu9lwd6f2ntyamfb67x9fyjvw20 00:23:39.411 11:33:57 -- dd/posix.sh@22 -- # printf %s bbkl4857ygotvg5q1f2c3symc3y93y9u 00:23:39.411 11:33:57 -- dd/posix.sh@23 -- # printf %s 
wy81mxu9lwd6f2ntyamfb67x9fyjvw20 00:23:39.411 11:33:57 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:23:39.411 [2024-11-26 11:33:57.453685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:39.411 [2024-11-26 11:33:57.453865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98292 ] 00:23:39.411 [2024-11-26 11:33:57.618081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.671 [2024-11-26 11:33:57.652055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.671  [2024-11-26T11:33:57.901Z] Copying: 32/32 [B] (average 31 kBps) 00:23:39.671 00:23:39.671 11:33:57 -- dd/posix.sh@27 -- # [[ wy81mxu9lwd6f2ntyamfb67x9fyjvw20bbkl4857ygotvg5q1f2c3symc3y93y9u == \w\y\8\1\m\x\u\9\l\w\d\6\f\2\n\t\y\a\m\f\b\6\7\x\9\f\y\j\v\w\2\0\b\b\k\l\4\8\5\7\y\g\o\t\v\g\5\q\1\f\2\c\3\s\y\m\c\3\y\9\3\y\9\u ]] 00:23:39.671 00:23:39.671 real 0m0.487s 00:23:39.671 user 0m0.236s 00:23:39.671 sys 0m0.133s 00:23:39.671 ************************************ 00:23:39.671 END TEST dd_flag_append 00:23:39.671 ************************************ 00:23:39.671 11:33:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:39.671 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.930 11:33:57 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:23:39.930 11:33:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:39.930 11:33:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.930 11:33:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.930 ************************************ 00:23:39.930 START TEST dd_flag_directory 00:23:39.930 ************************************ 00:23:39.930 11:33:57 -- common/autotest_common.sh@1114 -- # directory 00:23:39.930 11:33:57 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:39.930 11:33:57 -- common/autotest_common.sh@650 -- # local es=0 00:23:39.930 11:33:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:39.930 11:33:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.930 11:33:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.930 11:33:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.930 11:33:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.930 11:33:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.930 11:33:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.930 11:33:57 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.930 11:33:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:39.930 11:33:57 -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:39.930 [2024-11-26 11:33:57.988257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:39.930 [2024-11-26 11:33:57.988455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98318 ] 00:23:39.930 [2024-11-26 11:33:58.155772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.190 [2024-11-26 11:33:58.187601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.190 [2024-11-26 11:33:58.228952] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:40.190 [2024-11-26 11:33:58.229420] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:40.190 [2024-11-26 11:33:58.229527] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:40.190 [2024-11-26 11:33:58.295097] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:40.190 11:33:58 -- common/autotest_common.sh@653 -- # es=236 00:23:40.190 11:33:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:40.190 11:33:58 -- common/autotest_common.sh@662 -- # es=108 00:23:40.190 11:33:58 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:40.190 11:33:58 -- common/autotest_common.sh@670 -- # es=1 00:23:40.190 11:33:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:40.190 11:33:58 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:40.190 11:33:58 -- common/autotest_common.sh@650 -- # local es=0 00:23:40.190 11:33:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:40.190 11:33:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.190 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.190 11:33:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.190 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.190 11:33:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.190 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.190 11:33:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.190 11:33:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:40.190 11:33:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:40.449 [2024-11-26 11:33:58.461578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
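The directory tests invert the usual assertion: pointing --iflag=directory (and, below, --oflag=directory) at a regular file is supposed to fail with "Not a directory", so the spdk_dd call is wrapped in a NOT helper that passes only when the wrapped command fails. The es=236, es=108, es=1 sequence above is that helper normalizing the exit status: values over 128 get 128 subtracted, the remainder is collapsed to 1, and the final (( !es == 0 )) succeeds exactly when the status is nonzero. A rough sketch following the xtrace (the real autotest_common.sh helper also validates the executable via the type -t / type -P valid_exec_arg steps shown above):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style exit codes (236 -> 108)
    (( es != 0 )) && es=1                  # collapse any failure to 1
    (( !es == 0 ))                         # return success only if the command failed
}

NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0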
00:23:40.449 [2024-11-26 11:33:58.461768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98330 ] 00:23:40.449 [2024-11-26 11:33:58.626607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.449 [2024-11-26 11:33:58.662979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.709 [2024-11-26 11:33:58.711236] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:40.709 [2024-11-26 11:33:58.711309] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:40.709 [2024-11-26 11:33:58.711336] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:40.709 [2024-11-26 11:33:58.777357] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:40.709 11:33:58 -- common/autotest_common.sh@653 -- # es=236 00:23:40.709 11:33:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:40.709 11:33:58 -- common/autotest_common.sh@662 -- # es=108 00:23:40.709 11:33:58 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:40.709 11:33:58 -- common/autotest_common.sh@670 -- # es=1 00:23:40.709 11:33:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:40.709 00:23:40.709 real 0m0.953s 00:23:40.709 user 0m0.470s 00:23:40.709 sys 0m0.281s 00:23:40.709 ************************************ 00:23:40.709 END TEST dd_flag_directory 00:23:40.709 ************************************ 00:23:40.709 11:33:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:40.709 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:23:40.709 11:33:58 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:23:40.709 11:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:40.709 11:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:40.709 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:23:40.709 ************************************ 00:23:40.709 START TEST dd_flag_nofollow 00:23:40.709 ************************************ 00:23:40.709 11:33:58 -- common/autotest_common.sh@1114 -- # nofollow 00:23:40.709 11:33:58 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:40.709 11:33:58 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:40.709 11:33:58 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:40.709 11:33:58 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:40.709 11:33:58 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:40.709 11:33:58 -- common/autotest_common.sh@650 -- # local es=0 00:23:40.709 11:33:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:40.709 11:33:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.709 11:33:58 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.709 11:33:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.968 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.968 11:33:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.968 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.968 11:33:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.968 11:33:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:40.968 11:33:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:40.968 [2024-11-26 11:33:58.991347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:40.968 [2024-11-26 11:33:58.991483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98354 ] 00:23:40.968 [2024-11-26 11:33:59.138070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.968 [2024-11-26 11:33:59.171939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.226 [2024-11-26 11:33:59.217907] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:23:41.226 [2024-11-26 11:33:59.218231] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:23:41.226 [2024-11-26 11:33:59.218264] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:41.226 [2024-11-26 11:33:59.285500] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:41.226 11:33:59 -- common/autotest_common.sh@653 -- # es=216 00:23:41.226 11:33:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:41.226 11:33:59 -- common/autotest_common.sh@662 -- # es=88 00:23:41.226 11:33:59 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:41.226 11:33:59 -- common/autotest_common.sh@670 -- # es=1 00:23:41.226 11:33:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:41.226 11:33:59 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:41.226 11:33:59 -- common/autotest_common.sh@650 -- # local es=0 00:23:41.226 11:33:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:41.226 11:33:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.226 11:33:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.226 11:33:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.226 11:33:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.226 11:33:59 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.226 11:33:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.226 11:33:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.226 11:33:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:41.226 11:33:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:41.226 [2024-11-26 11:33:59.437242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:41.226 [2024-11-26 11:33:59.437435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98370 ] 00:23:41.485 [2024-11-26 11:33:59.602334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.485 [2024-11-26 11:33:59.638951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.485 [2024-11-26 11:33:59.686133] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:23:41.485 [2024-11-26 11:33:59.686212] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:23:41.485 [2024-11-26 11:33:59.686232] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:41.744 [2024-11-26 11:33:59.754429] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:41.744 11:33:59 -- common/autotest_common.sh@653 -- # es=216 00:23:41.744 11:33:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:41.744 11:33:59 -- common/autotest_common.sh@662 -- # es=88 00:23:41.744 11:33:59 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:41.744 11:33:59 -- common/autotest_common.sh@670 -- # es=1 00:23:41.744 11:33:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:41.744 11:33:59 -- dd/posix.sh@46 -- # gen_bytes 512 00:23:41.744 11:33:59 -- dd/common.sh@98 -- # xtrace_disable 00:23:41.744 11:33:59 -- common/autotest_common.sh@10 -- # set +x 00:23:41.744 11:33:59 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:41.744 [2024-11-26 11:33:59.909480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
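The two expected failures above are O_NOFOLLOW at work: dd.dump0.link and dd.dump1.link were created with ln -fs at the top of the test, and opening a symlink with O_NOFOLLOW fails with ELOOP, which strerror renders as "Too many levels of symbolic links". The behavior can be reproduced with coreutils dd (an illustration, not part of the suite):

    ln -fs dd.dump0 dd.dump0.link
    # iflag=nofollow maps to O_NOFOLLOW, so the open fails with ELOOP
    dd if=dd.dump0.link iflag=nofollow of=/dev/null && echo "unexpected success"

The dd/posix.sh@48 run that starts below drops the flag and copies straight through the input link, which must succeed, hence the 512/512 copy and content comparison that follow.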
00:23:41.744 [2024-11-26 11:33:59.909678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98373 ] 00:23:42.003 [2024-11-26 11:34:00.074386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.003 [2024-11-26 11:34:00.111172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.003  [2024-11-26T11:34:00.492Z] Copying: 512/512 [B] (average 500 kBps) 00:23:42.262 00:23:42.262 ************************************ 00:23:42.262 END TEST dd_flag_nofollow 00:23:42.262 ************************************ 00:23:42.262 11:34:00 -- dd/posix.sh@49 -- # [[ d415i1rh8wcamjet9bx9z5cetov39e4z26h6nl7rmyx9e61lwshtiksmpru26io5a1na9lqw8mrvqolwtzy8w6g4eu41nbzri0kad7tqiawjpohjvpjuq9akqz3ib47f8rhfhpzdhb5laplhrpu6v5xhm2vp8e9ysq8s27j45n3t4avjdrcbs2yq07hgb834ny3no4s80d8mm086ey4kel8nfxe2ly4wcgpwdjzq6lh87b6xcv0wuf5beeg5mmli9g8depf8nvztey972swj2wjadajfzu4ep3qs07enevx94tr04u227w27viv0zsavjmd7h60kh2e1ykwaxls0cc0m5jtxtu6tcozdneyt3816kl6teqvcm99kl5rxo5y2ergew94jvdiu32inukhoi10pd7r6oo5ui1dg0vt97ohwqsg9sg8dttfxxut2ncakl2kainij2xo2qd9wq1v5ion3jzj49rtq0ayxpwlv2mlngt5oyxxsydu6rpw972lx == \d\4\1\5\i\1\r\h\8\w\c\a\m\j\e\t\9\b\x\9\z\5\c\e\t\o\v\3\9\e\4\z\2\6\h\6\n\l\7\r\m\y\x\9\e\6\1\l\w\s\h\t\i\k\s\m\p\r\u\2\6\i\o\5\a\1\n\a\9\l\q\w\8\m\r\v\q\o\l\w\t\z\y\8\w\6\g\4\e\u\4\1\n\b\z\r\i\0\k\a\d\7\t\q\i\a\w\j\p\o\h\j\v\p\j\u\q\9\a\k\q\z\3\i\b\4\7\f\8\r\h\f\h\p\z\d\h\b\5\l\a\p\l\h\r\p\u\6\v\5\x\h\m\2\v\p\8\e\9\y\s\q\8\s\2\7\j\4\5\n\3\t\4\a\v\j\d\r\c\b\s\2\y\q\0\7\h\g\b\8\3\4\n\y\3\n\o\4\s\8\0\d\8\m\m\0\8\6\e\y\4\k\e\l\8\n\f\x\e\2\l\y\4\w\c\g\p\w\d\j\z\q\6\l\h\8\7\b\6\x\c\v\0\w\u\f\5\b\e\e\g\5\m\m\l\i\9\g\8\d\e\p\f\8\n\v\z\t\e\y\9\7\2\s\w\j\2\w\j\a\d\a\j\f\z\u\4\e\p\3\q\s\0\7\e\n\e\v\x\9\4\t\r\0\4\u\2\2\7\w\2\7\v\i\v\0\z\s\a\v\j\m\d\7\h\6\0\k\h\2\e\1\y\k\w\a\x\l\s\0\c\c\0\m\5\j\t\x\t\u\6\t\c\o\z\d\n\e\y\t\3\8\1\6\k\l\6\t\e\q\v\c\m\9\9\k\l\5\r\x\o\5\y\2\e\r\g\e\w\9\4\j\v\d\i\u\3\2\i\n\u\k\h\o\i\1\0\p\d\7\r\6\o\o\5\u\i\1\d\g\0\v\t\9\7\o\h\w\q\s\g\9\s\g\8\d\t\t\f\x\x\u\t\2\n\c\a\k\l\2\k\a\i\n\i\j\2\x\o\2\q\d\9\w\q\1\v\5\i\o\n\3\j\z\j\4\9\r\t\q\0\a\y\x\p\w\l\v\2\m\l\n\g\t\5\o\y\x\x\s\y\d\u\6\r\p\w\9\7\2\l\x ]] 00:23:42.262 00:23:42.262 real 0m1.421s 00:23:42.262 user 0m0.663s 00:23:42.262 sys 0m0.435s 00:23:42.263 11:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:42.263 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:42.263 11:34:00 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:23:42.263 11:34:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:42.263 11:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.263 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:42.263 ************************************ 00:23:42.263 START TEST dd_flag_noatime 00:23:42.263 ************************************ 00:23:42.263 11:34:00 -- common/autotest_common.sh@1114 -- # noatime 00:23:42.263 11:34:00 -- dd/posix.sh@53 -- # local atime_if 00:23:42.263 11:34:00 -- dd/posix.sh@54 -- # local atime_of 00:23:42.263 11:34:00 -- dd/posix.sh@58 -- # gen_bytes 512 00:23:42.263 11:34:00 -- dd/common.sh@98 -- # xtrace_disable 00:23:42.263 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:42.263 11:34:00 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:42.263 11:34:00 -- dd/posix.sh@60 -- # atime_if=1732620840 
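The noatime test brackets each copy with stat --printf=%X, the file's access time in epoch seconds; the 1732620840 just captured is 2024-11-26 11:34:00 UTC, consistent with the surrounding timestamps. A sketch of the assertion logic (paths shortened; assumes the filesystem updates atime, e.g. relatime on a fresh file):

    atime_if=$(stat --printf=%X dd.dump0)    # atime before the read
    sleep 1                                  # let the clock move past atime granularity
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    ((atime_if == $(stat --printf=%X dd.dump0)))   # a noatime read must not bump atime

The second copy, run without --iflag=noatime, gets the opposite assertion, atime_if < the fresh stat value, i.e. a plain read must advance the access time.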
00:23:42.263 11:34:00 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:42.263 11:34:00 -- dd/posix.sh@61 -- # atime_of=1732620840 00:23:42.263 11:34:00 -- dd/posix.sh@66 -- # sleep 1 00:23:43.199 11:34:01 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:43.458 [2024-11-26 11:34:01.495862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:43.458 [2024-11-26 11:34:01.496139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98415 ] 00:23:43.458 [2024-11-26 11:34:01.663139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.458 [2024-11-26 11:34:01.693374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.718  [2024-11-26T11:34:01.948Z] Copying: 512/512 [B] (average 500 kBps) 00:23:43.718 00:23:43.718 11:34:01 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:43.718 11:34:01 -- dd/posix.sh@69 -- # (( atime_if == 1732620840 )) 00:23:43.718 11:34:01 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:43.718 11:34:01 -- dd/posix.sh@70 -- # (( atime_of == 1732620840 )) 00:23:43.718 11:34:01 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:43.978 [2024-11-26 11:34:01.993764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:43.978 [2024-11-26 11:34:01.994013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98427 ] 00:23:43.978 [2024-11-26 11:34:02.159015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.978 [2024-11-26 11:34:02.196139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.238  [2024-11-26T11:34:02.468Z] Copying: 512/512 [B] (average 500 kBps) 00:23:44.238 00:23:44.238 11:34:02 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:44.238 ************************************ 00:23:44.238 END TEST dd_flag_noatime 00:23:44.238 ************************************ 00:23:44.238 11:34:02 -- dd/posix.sh@73 -- # (( atime_if < 1732620842 )) 00:23:44.238 00:23:44.238 real 0m2.033s 00:23:44.238 user 0m0.457s 00:23:44.238 sys 0m0.340s 00:23:44.238 11:34:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:44.238 11:34:02 -- common/autotest_common.sh@10 -- # set +x 00:23:44.498 11:34:02 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:23:44.498 11:34:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:44.498 11:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:44.498 11:34:02 -- common/autotest_common.sh@10 -- # set +x 00:23:44.498 ************************************ 00:23:44.498 START TEST dd_flags_misc 00:23:44.498 ************************************ 00:23:44.498 11:34:02 -- common/autotest_common.sh@1114 -- # io 00:23:44.498 11:34:02 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:23:44.498 11:34:02 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:23:44.498 11:34:02 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:23:44.498 11:34:02 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:23:44.498 11:34:02 -- dd/posix.sh@86 -- # gen_bytes 512 00:23:44.498 11:34:02 -- dd/common.sh@98 -- # xtrace_disable 00:23:44.498 11:34:02 -- common/autotest_common.sh@10 -- # set +x 00:23:44.498 11:34:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:44.498 11:34:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:23:44.498 [2024-11-26 11:34:02.550598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
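dd_flags_misc sweeps a small matrix of open flags: the arrays set at dd/posix.sh@81-82 cross the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync), eight copies in all, each followed by a byte-for-byte comparison of dd.dump1 against the generated input. A sketch of the loop implied by the trace (the two array assignments are verbatim; the body is abbreviated):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do      # input side: O_DIRECT / O_NONBLOCK
        for flag_rw in "${flags_rw[@]}"; do  # output side adds O_SYNC / O_DSYNC
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        done
    done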
00:23:44.498 [2024-11-26 11:34:02.550744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98455 ] 00:23:44.498 [2024-11-26 11:34:02.697154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.498 [2024-11-26 11:34:02.729987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.759  [2024-11-26T11:34:02.989Z] Copying: 512/512 [B] (average 500 kBps) 00:23:44.759 00:23:44.759 11:34:02 -- dd/posix.sh@93 -- # [[ mg0x3vfduyehwqhk1id3cspilpkgrbqpdybf2hhtv887qpbi1py3h4h79109s9qltttfdk08e4bo004fcj01vvvod40ljuier13avxuzjvlc1givhwmbkkhkct8fc5rn0qhq0w5al11vppkt8po8scbagolaxxjgis311tldjh3mixpzg1t6ujrzqt1ynemay5ukhr6e6csm13hxqcbi3vsdfgr1uzektg9i69l093wp0rmrh4b7yhk2s89qbmiurdwbuhm4ksj9b9f9tvn998kg6ma08t1p0ctxmukeji84xqwqa4rm183v86wztalcvxhqusckpqjn336932o8yjt1n3bgmlt4valv97b1gy2rtzzhydhipa4lasx3msyrf16n1mbrysbznd3ra2kiodynu5yd55juj4v2b7ez8l8e7jnequxbeosrynpkjifkeaij70l9li9gof41c9adynkz3kf8pmq83wu4u01lpkf87tuigdqkgxflxydexq7q == \m\g\0\x\3\v\f\d\u\y\e\h\w\q\h\k\1\i\d\3\c\s\p\i\l\p\k\g\r\b\q\p\d\y\b\f\2\h\h\t\v\8\8\7\q\p\b\i\1\p\y\3\h\4\h\7\9\1\0\9\s\9\q\l\t\t\t\f\d\k\0\8\e\4\b\o\0\0\4\f\c\j\0\1\v\v\v\o\d\4\0\l\j\u\i\e\r\1\3\a\v\x\u\z\j\v\l\c\1\g\i\v\h\w\m\b\k\k\h\k\c\t\8\f\c\5\r\n\0\q\h\q\0\w\5\a\l\1\1\v\p\p\k\t\8\p\o\8\s\c\b\a\g\o\l\a\x\x\j\g\i\s\3\1\1\t\l\d\j\h\3\m\i\x\p\z\g\1\t\6\u\j\r\z\q\t\1\y\n\e\m\a\y\5\u\k\h\r\6\e\6\c\s\m\1\3\h\x\q\c\b\i\3\v\s\d\f\g\r\1\u\z\e\k\t\g\9\i\6\9\l\0\9\3\w\p\0\r\m\r\h\4\b\7\y\h\k\2\s\8\9\q\b\m\i\u\r\d\w\b\u\h\m\4\k\s\j\9\b\9\f\9\t\v\n\9\9\8\k\g\6\m\a\0\8\t\1\p\0\c\t\x\m\u\k\e\j\i\8\4\x\q\w\q\a\4\r\m\1\8\3\v\8\6\w\z\t\a\l\c\v\x\h\q\u\s\c\k\p\q\j\n\3\3\6\9\3\2\o\8\y\j\t\1\n\3\b\g\m\l\t\4\v\a\l\v\9\7\b\1\g\y\2\r\t\z\z\h\y\d\h\i\p\a\4\l\a\s\x\3\m\s\y\r\f\1\6\n\1\m\b\r\y\s\b\z\n\d\3\r\a\2\k\i\o\d\y\n\u\5\y\d\5\5\j\u\j\4\v\2\b\7\e\z\8\l\8\e\7\j\n\e\q\u\x\b\e\o\s\r\y\n\p\k\j\i\f\k\e\a\i\j\7\0\l\9\l\i\9\g\o\f\4\1\c\9\a\d\y\n\k\z\3\k\f\8\p\m\q\8\3\w\u\4\u\0\1\l\p\k\f\8\7\t\u\i\g\d\q\k\g\x\f\l\x\y\d\e\x\q\7\q ]] 00:23:44.759 11:34:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:44.759 11:34:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:23:45.018 [2024-11-26 11:34:03.022784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
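The long [[ ... == \m\g\0\x... ]] lines are not corruption: bash xtrace prints the right-hand side of a quoted [[ $out == "$expected" ]] with every character backslash-escaped so the comparison stays literal rather than becoming a glob match. Stripped of the escaping, the check is plain content equality after each copy, along these lines:

    data=$(gen_bytes 512)            # SPDK helper; generates 512 random alphanumeric bytes
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=nonblock
    [[ $(< dd.dump1) == "$data" ]]   # xtrace renders $data escaped char by char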
00:23:45.019 [2024-11-26 11:34:03.022973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98463 ] 00:23:45.019 [2024-11-26 11:34:03.186958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.019 [2024-11-26 11:34:03.222306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.278  [2024-11-26T11:34:03.508Z] Copying: 512/512 [B] (average 500 kBps) 00:23:45.278 00:23:45.278 11:34:03 -- dd/posix.sh@93 -- # [[ mg0x3vfduyehwqhk1id3cspilpkgrbqpdybf2hhtv887qpbi1py3h4h79109s9qltttfdk08e4bo004fcj01vvvod40ljuier13avxuzjvlc1givhwmbkkhkct8fc5rn0qhq0w5al11vppkt8po8scbagolaxxjgis311tldjh3mixpzg1t6ujrzqt1ynemay5ukhr6e6csm13hxqcbi3vsdfgr1uzektg9i69l093wp0rmrh4b7yhk2s89qbmiurdwbuhm4ksj9b9f9tvn998kg6ma08t1p0ctxmukeji84xqwqa4rm183v86wztalcvxhqusckpqjn336932o8yjt1n3bgmlt4valv97b1gy2rtzzhydhipa4lasx3msyrf16n1mbrysbznd3ra2kiodynu5yd55juj4v2b7ez8l8e7jnequxbeosrynpkjifkeaij70l9li9gof41c9adynkz3kf8pmq83wu4u01lpkf87tuigdqkgxflxydexq7q == \m\g\0\x\3\v\f\d\u\y\e\h\w\q\h\k\1\i\d\3\c\s\p\i\l\p\k\g\r\b\q\p\d\y\b\f\2\h\h\t\v\8\8\7\q\p\b\i\1\p\y\3\h\4\h\7\9\1\0\9\s\9\q\l\t\t\t\f\d\k\0\8\e\4\b\o\0\0\4\f\c\j\0\1\v\v\v\o\d\4\0\l\j\u\i\e\r\1\3\a\v\x\u\z\j\v\l\c\1\g\i\v\h\w\m\b\k\k\h\k\c\t\8\f\c\5\r\n\0\q\h\q\0\w\5\a\l\1\1\v\p\p\k\t\8\p\o\8\s\c\b\a\g\o\l\a\x\x\j\g\i\s\3\1\1\t\l\d\j\h\3\m\i\x\p\z\g\1\t\6\u\j\r\z\q\t\1\y\n\e\m\a\y\5\u\k\h\r\6\e\6\c\s\m\1\3\h\x\q\c\b\i\3\v\s\d\f\g\r\1\u\z\e\k\t\g\9\i\6\9\l\0\9\3\w\p\0\r\m\r\h\4\b\7\y\h\k\2\s\8\9\q\b\m\i\u\r\d\w\b\u\h\m\4\k\s\j\9\b\9\f\9\t\v\n\9\9\8\k\g\6\m\a\0\8\t\1\p\0\c\t\x\m\u\k\e\j\i\8\4\x\q\w\q\a\4\r\m\1\8\3\v\8\6\w\z\t\a\l\c\v\x\h\q\u\s\c\k\p\q\j\n\3\3\6\9\3\2\o\8\y\j\t\1\n\3\b\g\m\l\t\4\v\a\l\v\9\7\b\1\g\y\2\r\t\z\z\h\y\d\h\i\p\a\4\l\a\s\x\3\m\s\y\r\f\1\6\n\1\m\b\r\y\s\b\z\n\d\3\r\a\2\k\i\o\d\y\n\u\5\y\d\5\5\j\u\j\4\v\2\b\7\e\z\8\l\8\e\7\j\n\e\q\u\x\b\e\o\s\r\y\n\p\k\j\i\f\k\e\a\i\j\7\0\l\9\l\i\9\g\o\f\4\1\c\9\a\d\y\n\k\z\3\k\f\8\p\m\q\8\3\w\u\4\u\0\1\l\p\k\f\8\7\t\u\i\g\d\q\k\g\x\f\l\x\y\d\e\x\q\7\q ]] 00:23:45.278 11:34:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:45.278 11:34:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:23:45.278 [2024-11-26 11:34:03.515708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:45.278 [2024-11-26 11:34:03.515891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98472 ] 00:23:45.537 [2024-11-26 11:34:03.679170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.537 [2024-11-26 11:34:03.714022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.537  [2024-11-26T11:34:04.027Z] Copying: 512/512 [B] (average 55 kBps) 00:23:45.797 00:23:45.797 11:34:03 -- dd/posix.sh@93 -- # [[ mg0x3vfduyehwqhk1id3cspilpkgrbqpdybf2hhtv887qpbi1py3h4h79109s9qltttfdk08e4bo004fcj01vvvod40ljuier13avxuzjvlc1givhwmbkkhkct8fc5rn0qhq0w5al11vppkt8po8scbagolaxxjgis311tldjh3mixpzg1t6ujrzqt1ynemay5ukhr6e6csm13hxqcbi3vsdfgr1uzektg9i69l093wp0rmrh4b7yhk2s89qbmiurdwbuhm4ksj9b9f9tvn998kg6ma08t1p0ctxmukeji84xqwqa4rm183v86wztalcvxhqusckpqjn336932o8yjt1n3bgmlt4valv97b1gy2rtzzhydhipa4lasx3msyrf16n1mbrysbznd3ra2kiodynu5yd55juj4v2b7ez8l8e7jnequxbeosrynpkjifkeaij70l9li9gof41c9adynkz3kf8pmq83wu4u01lpkf87tuigdqkgxflxydexq7q == \m\g\0\x\3\v\f\d\u\y\e\h\w\q\h\k\1\i\d\3\c\s\p\i\l\p\k\g\r\b\q\p\d\y\b\f\2\h\h\t\v\8\8\7\q\p\b\i\1\p\y\3\h\4\h\7\9\1\0\9\s\9\q\l\t\t\t\f\d\k\0\8\e\4\b\o\0\0\4\f\c\j\0\1\v\v\v\o\d\4\0\l\j\u\i\e\r\1\3\a\v\x\u\z\j\v\l\c\1\g\i\v\h\w\m\b\k\k\h\k\c\t\8\f\c\5\r\n\0\q\h\q\0\w\5\a\l\1\1\v\p\p\k\t\8\p\o\8\s\c\b\a\g\o\l\a\x\x\j\g\i\s\3\1\1\t\l\d\j\h\3\m\i\x\p\z\g\1\t\6\u\j\r\z\q\t\1\y\n\e\m\a\y\5\u\k\h\r\6\e\6\c\s\m\1\3\h\x\q\c\b\i\3\v\s\d\f\g\r\1\u\z\e\k\t\g\9\i\6\9\l\0\9\3\w\p\0\r\m\r\h\4\b\7\y\h\k\2\s\8\9\q\b\m\i\u\r\d\w\b\u\h\m\4\k\s\j\9\b\9\f\9\t\v\n\9\9\8\k\g\6\m\a\0\8\t\1\p\0\c\t\x\m\u\k\e\j\i\8\4\x\q\w\q\a\4\r\m\1\8\3\v\8\6\w\z\t\a\l\c\v\x\h\q\u\s\c\k\p\q\j\n\3\3\6\9\3\2\o\8\y\j\t\1\n\3\b\g\m\l\t\4\v\a\l\v\9\7\b\1\g\y\2\r\t\z\z\h\y\d\h\i\p\a\4\l\a\s\x\3\m\s\y\r\f\1\6\n\1\m\b\r\y\s\b\z\n\d\3\r\a\2\k\i\o\d\y\n\u\5\y\d\5\5\j\u\j\4\v\2\b\7\e\z\8\l\8\e\7\j\n\e\q\u\x\b\e\o\s\r\y\n\p\k\j\i\f\k\e\a\i\j\7\0\l\9\l\i\9\g\o\f\4\1\c\9\a\d\y\n\k\z\3\k\f\8\p\m\q\8\3\w\u\4\u\0\1\l\p\k\f\8\7\t\u\i\g\d\q\k\g\x\f\l\x\y\d\e\x\q\7\q ]] 00:23:45.797 11:34:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:45.797 11:34:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:23:45.797 [2024-11-26 11:34:04.024352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:45.797 [2024-11-26 11:34:04.024536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98479 ] 00:23:46.056 [2024-11-26 11:34:04.190202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.056 [2024-11-26 11:34:04.226639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.056  [2024-11-26T11:34:04.546Z] Copying: 512/512 [B] (average 100 kBps) 00:23:46.316 00:23:46.316 11:34:04 -- dd/posix.sh@93 -- # [[ mg0x3vfduyehwqhk1id3cspilpkgrbqpdybf2hhtv887qpbi1py3h4h79109s9qltttfdk08e4bo004fcj01vvvod40ljuier13avxuzjvlc1givhwmbkkhkct8fc5rn0qhq0w5al11vppkt8po8scbagolaxxjgis311tldjh3mixpzg1t6ujrzqt1ynemay5ukhr6e6csm13hxqcbi3vsdfgr1uzektg9i69l093wp0rmrh4b7yhk2s89qbmiurdwbuhm4ksj9b9f9tvn998kg6ma08t1p0ctxmukeji84xqwqa4rm183v86wztalcvxhqusckpqjn336932o8yjt1n3bgmlt4valv97b1gy2rtzzhydhipa4lasx3msyrf16n1mbrysbznd3ra2kiodynu5yd55juj4v2b7ez8l8e7jnequxbeosrynpkjifkeaij70l9li9gof41c9adynkz3kf8pmq83wu4u01lpkf87tuigdqkgxflxydexq7q == \m\g\0\x\3\v\f\d\u\y\e\h\w\q\h\k\1\i\d\3\c\s\p\i\l\p\k\g\r\b\q\p\d\y\b\f\2\h\h\t\v\8\8\7\q\p\b\i\1\p\y\3\h\4\h\7\9\1\0\9\s\9\q\l\t\t\t\f\d\k\0\8\e\4\b\o\0\0\4\f\c\j\0\1\v\v\v\o\d\4\0\l\j\u\i\e\r\1\3\a\v\x\u\z\j\v\l\c\1\g\i\v\h\w\m\b\k\k\h\k\c\t\8\f\c\5\r\n\0\q\h\q\0\w\5\a\l\1\1\v\p\p\k\t\8\p\o\8\s\c\b\a\g\o\l\a\x\x\j\g\i\s\3\1\1\t\l\d\j\h\3\m\i\x\p\z\g\1\t\6\u\j\r\z\q\t\1\y\n\e\m\a\y\5\u\k\h\r\6\e\6\c\s\m\1\3\h\x\q\c\b\i\3\v\s\d\f\g\r\1\u\z\e\k\t\g\9\i\6\9\l\0\9\3\w\p\0\r\m\r\h\4\b\7\y\h\k\2\s\8\9\q\b\m\i\u\r\d\w\b\u\h\m\4\k\s\j\9\b\9\f\9\t\v\n\9\9\8\k\g\6\m\a\0\8\t\1\p\0\c\t\x\m\u\k\e\j\i\8\4\x\q\w\q\a\4\r\m\1\8\3\v\8\6\w\z\t\a\l\c\v\x\h\q\u\s\c\k\p\q\j\n\3\3\6\9\3\2\o\8\y\j\t\1\n\3\b\g\m\l\t\4\v\a\l\v\9\7\b\1\g\y\2\r\t\z\z\h\y\d\h\i\p\a\4\l\a\s\x\3\m\s\y\r\f\1\6\n\1\m\b\r\y\s\b\z\n\d\3\r\a\2\k\i\o\d\y\n\u\5\y\d\5\5\j\u\j\4\v\2\b\7\e\z\8\l\8\e\7\j\n\e\q\u\x\b\e\o\s\r\y\n\p\k\j\i\f\k\e\a\i\j\7\0\l\9\l\i\9\g\o\f\4\1\c\9\a\d\y\n\k\z\3\k\f\8\p\m\q\8\3\w\u\4\u\0\1\l\p\k\f\8\7\t\u\i\g\d\q\k\g\x\f\l\x\y\d\e\x\q\7\q ]] 00:23:46.316 11:34:04 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:23:46.316 11:34:04 -- dd/posix.sh@86 -- # gen_bytes 512 00:23:46.316 11:34:04 -- dd/common.sh@98 -- # xtrace_disable 00:23:46.316 11:34:04 -- common/autotest_common.sh@10 -- # set +x 00:23:46.316 11:34:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:46.316 11:34:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:23:46.316 [2024-11-26 11:34:04.534691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:46.316 [2024-11-26 11:34:04.534860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98489 ] 00:23:46.575 [2024-11-26 11:34:04.699037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.576 [2024-11-26 11:34:04.736842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.576  [2024-11-26T11:34:05.065Z] Copying: 512/512 [B] (average 500 kBps) 00:23:46.835 00:23:46.835 11:34:04 -- dd/posix.sh@93 -- # [[ 9rcesk3j0flkqkpzx5y04xp0tgojzqwke8y7kgb1l2le2uxj9695juntyntidiavlavmjxv7ybzg64y3lc0irr32iiytbcjyu4aspjrmyu8cwcs48dk02ehwllhxvqn6efyym4xp9u1xxtcqayxcucihmp979ilv31qm7q955d6jtln7my37klnwbuxd04tmno338exmmenko6a4to37qa9d65l546wy2s6txiuvo22g761x5comeb5ljmuss50p0j7u3b999rhqxsx1z2tgdimypx7e5tw4bl17lhk2nb5zdtpzbv4g38g8efrgze5t4rdk6b43jls9v429q1ovtrujjiircz73swjoe8oay3z6x6wqejd0j21q263rlsuwutjuw0o1r5qf1zrb56z5rg7gmce5v8v2bg2mw4pze84du9vs06325w9yl3l7p9p7et8j6dkqnvvb8eib093n4vago9onqfd63pllo6sxf5mfhcshx304n2cwk5osybd4 == \9\r\c\e\s\k\3\j\0\f\l\k\q\k\p\z\x\5\y\0\4\x\p\0\t\g\o\j\z\q\w\k\e\8\y\7\k\g\b\1\l\2\l\e\2\u\x\j\9\6\9\5\j\u\n\t\y\n\t\i\d\i\a\v\l\a\v\m\j\x\v\7\y\b\z\g\6\4\y\3\l\c\0\i\r\r\3\2\i\i\y\t\b\c\j\y\u\4\a\s\p\j\r\m\y\u\8\c\w\c\s\4\8\d\k\0\2\e\h\w\l\l\h\x\v\q\n\6\e\f\y\y\m\4\x\p\9\u\1\x\x\t\c\q\a\y\x\c\u\c\i\h\m\p\9\7\9\i\l\v\3\1\q\m\7\q\9\5\5\d\6\j\t\l\n\7\m\y\3\7\k\l\n\w\b\u\x\d\0\4\t\m\n\o\3\3\8\e\x\m\m\e\n\k\o\6\a\4\t\o\3\7\q\a\9\d\6\5\l\5\4\6\w\y\2\s\6\t\x\i\u\v\o\2\2\g\7\6\1\x\5\c\o\m\e\b\5\l\j\m\u\s\s\5\0\p\0\j\7\u\3\b\9\9\9\r\h\q\x\s\x\1\z\2\t\g\d\i\m\y\p\x\7\e\5\t\w\4\b\l\1\7\l\h\k\2\n\b\5\z\d\t\p\z\b\v\4\g\3\8\g\8\e\f\r\g\z\e\5\t\4\r\d\k\6\b\4\3\j\l\s\9\v\4\2\9\q\1\o\v\t\r\u\j\j\i\i\r\c\z\7\3\s\w\j\o\e\8\o\a\y\3\z\6\x\6\w\q\e\j\d\0\j\2\1\q\2\6\3\r\l\s\u\w\u\t\j\u\w\0\o\1\r\5\q\f\1\z\r\b\5\6\z\5\r\g\7\g\m\c\e\5\v\8\v\2\b\g\2\m\w\4\p\z\e\8\4\d\u\9\v\s\0\6\3\2\5\w\9\y\l\3\l\7\p\9\p\7\e\t\8\j\6\d\k\q\n\v\v\b\8\e\i\b\0\9\3\n\4\v\a\g\o\9\o\n\q\f\d\6\3\p\l\l\o\6\s\x\f\5\m\f\h\c\s\h\x\3\0\4\n\2\c\w\k\5\o\s\y\b\d\4 ]] 00:23:46.835 11:34:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:46.835 11:34:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:23:46.835 [2024-11-26 11:34:05.031675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:46.835 [2024-11-26 11:34:05.031865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98492 ] 00:23:47.095 [2024-11-26 11:34:05.194998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.095 [2024-11-26 11:34:05.227434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.095  [2024-11-26T11:34:05.585Z] Copying: 512/512 [B] (average 500 kBps) 00:23:47.355 00:23:47.355 11:34:05 -- dd/posix.sh@93 -- # [[ 9rcesk3j0flkqkpzx5y04xp0tgojzqwke8y7kgb1l2le2uxj9695juntyntidiavlavmjxv7ybzg64y3lc0irr32iiytbcjyu4aspjrmyu8cwcs48dk02ehwllhxvqn6efyym4xp9u1xxtcqayxcucihmp979ilv31qm7q955d6jtln7my37klnwbuxd04tmno338exmmenko6a4to37qa9d65l546wy2s6txiuvo22g761x5comeb5ljmuss50p0j7u3b999rhqxsx1z2tgdimypx7e5tw4bl17lhk2nb5zdtpzbv4g38g8efrgze5t4rdk6b43jls9v429q1ovtrujjiircz73swjoe8oay3z6x6wqejd0j21q263rlsuwutjuw0o1r5qf1zrb56z5rg7gmce5v8v2bg2mw4pze84du9vs06325w9yl3l7p9p7et8j6dkqnvvb8eib093n4vago9onqfd63pllo6sxf5mfhcshx304n2cwk5osybd4 == \9\r\c\e\s\k\3\j\0\f\l\k\q\k\p\z\x\5\y\0\4\x\p\0\t\g\o\j\z\q\w\k\e\8\y\7\k\g\b\1\l\2\l\e\2\u\x\j\9\6\9\5\j\u\n\t\y\n\t\i\d\i\a\v\l\a\v\m\j\x\v\7\y\b\z\g\6\4\y\3\l\c\0\i\r\r\3\2\i\i\y\t\b\c\j\y\u\4\a\s\p\j\r\m\y\u\8\c\w\c\s\4\8\d\k\0\2\e\h\w\l\l\h\x\v\q\n\6\e\f\y\y\m\4\x\p\9\u\1\x\x\t\c\q\a\y\x\c\u\c\i\h\m\p\9\7\9\i\l\v\3\1\q\m\7\q\9\5\5\d\6\j\t\l\n\7\m\y\3\7\k\l\n\w\b\u\x\d\0\4\t\m\n\o\3\3\8\e\x\m\m\e\n\k\o\6\a\4\t\o\3\7\q\a\9\d\6\5\l\5\4\6\w\y\2\s\6\t\x\i\u\v\o\2\2\g\7\6\1\x\5\c\o\m\e\b\5\l\j\m\u\s\s\5\0\p\0\j\7\u\3\b\9\9\9\r\h\q\x\s\x\1\z\2\t\g\d\i\m\y\p\x\7\e\5\t\w\4\b\l\1\7\l\h\k\2\n\b\5\z\d\t\p\z\b\v\4\g\3\8\g\8\e\f\r\g\z\e\5\t\4\r\d\k\6\b\4\3\j\l\s\9\v\4\2\9\q\1\o\v\t\r\u\j\j\i\i\r\c\z\7\3\s\w\j\o\e\8\o\a\y\3\z\6\x\6\w\q\e\j\d\0\j\2\1\q\2\6\3\r\l\s\u\w\u\t\j\u\w\0\o\1\r\5\q\f\1\z\r\b\5\6\z\5\r\g\7\g\m\c\e\5\v\8\v\2\b\g\2\m\w\4\p\z\e\8\4\d\u\9\v\s\0\6\3\2\5\w\9\y\l\3\l\7\p\9\p\7\e\t\8\j\6\d\k\q\n\v\v\b\8\e\i\b\0\9\3\n\4\v\a\g\o\9\o\n\q\f\d\6\3\p\l\l\o\6\s\x\f\5\m\f\h\c\s\h\x\3\0\4\n\2\c\w\k\5\o\s\y\b\d\4 ]] 00:23:47.355 11:34:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:47.355 11:34:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:23:47.355 [2024-11-26 11:34:05.525263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:47.355 [2024-11-26 11:34:05.525440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98506 ] 00:23:47.615 [2024-11-26 11:34:05.690503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.615 [2024-11-26 11:34:05.726892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.615  [2024-11-26T11:34:06.128Z] Copying: 512/512 [B] (average 100 kBps) 00:23:47.898 00:23:47.898 11:34:05 -- dd/posix.sh@93 -- # [[ 9rcesk3j0flkqkpzx5y04xp0tgojzqwke8y7kgb1l2le2uxj9695juntyntidiavlavmjxv7ybzg64y3lc0irr32iiytbcjyu4aspjrmyu8cwcs48dk02ehwllhxvqn6efyym4xp9u1xxtcqayxcucihmp979ilv31qm7q955d6jtln7my37klnwbuxd04tmno338exmmenko6a4to37qa9d65l546wy2s6txiuvo22g761x5comeb5ljmuss50p0j7u3b999rhqxsx1z2tgdimypx7e5tw4bl17lhk2nb5zdtpzbv4g38g8efrgze5t4rdk6b43jls9v429q1ovtrujjiircz73swjoe8oay3z6x6wqejd0j21q263rlsuwutjuw0o1r5qf1zrb56z5rg7gmce5v8v2bg2mw4pze84du9vs06325w9yl3l7p9p7et8j6dkqnvvb8eib093n4vago9onqfd63pllo6sxf5mfhcshx304n2cwk5osybd4 == \9\r\c\e\s\k\3\j\0\f\l\k\q\k\p\z\x\5\y\0\4\x\p\0\t\g\o\j\z\q\w\k\e\8\y\7\k\g\b\1\l\2\l\e\2\u\x\j\9\6\9\5\j\u\n\t\y\n\t\i\d\i\a\v\l\a\v\m\j\x\v\7\y\b\z\g\6\4\y\3\l\c\0\i\r\r\3\2\i\i\y\t\b\c\j\y\u\4\a\s\p\j\r\m\y\u\8\c\w\c\s\4\8\d\k\0\2\e\h\w\l\l\h\x\v\q\n\6\e\f\y\y\m\4\x\p\9\u\1\x\x\t\c\q\a\y\x\c\u\c\i\h\m\p\9\7\9\i\l\v\3\1\q\m\7\q\9\5\5\d\6\j\t\l\n\7\m\y\3\7\k\l\n\w\b\u\x\d\0\4\t\m\n\o\3\3\8\e\x\m\m\e\n\k\o\6\a\4\t\o\3\7\q\a\9\d\6\5\l\5\4\6\w\y\2\s\6\t\x\i\u\v\o\2\2\g\7\6\1\x\5\c\o\m\e\b\5\l\j\m\u\s\s\5\0\p\0\j\7\u\3\b\9\9\9\r\h\q\x\s\x\1\z\2\t\g\d\i\m\y\p\x\7\e\5\t\w\4\b\l\1\7\l\h\k\2\n\b\5\z\d\t\p\z\b\v\4\g\3\8\g\8\e\f\r\g\z\e\5\t\4\r\d\k\6\b\4\3\j\l\s\9\v\4\2\9\q\1\o\v\t\r\u\j\j\i\i\r\c\z\7\3\s\w\j\o\e\8\o\a\y\3\z\6\x\6\w\q\e\j\d\0\j\2\1\q\2\6\3\r\l\s\u\w\u\t\j\u\w\0\o\1\r\5\q\f\1\z\r\b\5\6\z\5\r\g\7\g\m\c\e\5\v\8\v\2\b\g\2\m\w\4\p\z\e\8\4\d\u\9\v\s\0\6\3\2\5\w\9\y\l\3\l\7\p\9\p\7\e\t\8\j\6\d\k\q\n\v\v\b\8\e\i\b\0\9\3\n\4\v\a\g\o\9\o\n\q\f\d\6\3\p\l\l\o\6\s\x\f\5\m\f\h\c\s\h\x\3\0\4\n\2\c\w\k\5\o\s\y\b\d\4 ]] 00:23:47.898 11:34:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:47.898 11:34:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:23:47.898 [2024-11-26 11:34:06.024191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:47.898 [2024-11-26 11:34:06.024354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98509 ] 00:23:48.178 [2024-11-26 11:34:06.189598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.178 [2024-11-26 11:34:06.229785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.178  [2024-11-26T11:34:06.681Z] Copying: 512/512 [B] (average 125 kBps) 00:23:48.451 00:23:48.451 11:34:06 -- dd/posix.sh@93 -- # [[ 9rcesk3j0flkqkpzx5y04xp0tgojzqwke8y7kgb1l2le2uxj9695juntyntidiavlavmjxv7ybzg64y3lc0irr32iiytbcjyu4aspjrmyu8cwcs48dk02ehwllhxvqn6efyym4xp9u1xxtcqayxcucihmp979ilv31qm7q955d6jtln7my37klnwbuxd04tmno338exmmenko6a4to37qa9d65l546wy2s6txiuvo22g761x5comeb5ljmuss50p0j7u3b999rhqxsx1z2tgdimypx7e5tw4bl17lhk2nb5zdtpzbv4g38g8efrgze5t4rdk6b43jls9v429q1ovtrujjiircz73swjoe8oay3z6x6wqejd0j21q263rlsuwutjuw0o1r5qf1zrb56z5rg7gmce5v8v2bg2mw4pze84du9vs06325w9yl3l7p9p7et8j6dkqnvvb8eib093n4vago9onqfd63pllo6sxf5mfhcshx304n2cwk5osybd4 == \9\r\c\e\s\k\3\j\0\f\l\k\q\k\p\z\x\5\y\0\4\x\p\0\t\g\o\j\z\q\w\k\e\8\y\7\k\g\b\1\l\2\l\e\2\u\x\j\9\6\9\5\j\u\n\t\y\n\t\i\d\i\a\v\l\a\v\m\j\x\v\7\y\b\z\g\6\4\y\3\l\c\0\i\r\r\3\2\i\i\y\t\b\c\j\y\u\4\a\s\p\j\r\m\y\u\8\c\w\c\s\4\8\d\k\0\2\e\h\w\l\l\h\x\v\q\n\6\e\f\y\y\m\4\x\p\9\u\1\x\x\t\c\q\a\y\x\c\u\c\i\h\m\p\9\7\9\i\l\v\3\1\q\m\7\q\9\5\5\d\6\j\t\l\n\7\m\y\3\7\k\l\n\w\b\u\x\d\0\4\t\m\n\o\3\3\8\e\x\m\m\e\n\k\o\6\a\4\t\o\3\7\q\a\9\d\6\5\l\5\4\6\w\y\2\s\6\t\x\i\u\v\o\2\2\g\7\6\1\x\5\c\o\m\e\b\5\l\j\m\u\s\s\5\0\p\0\j\7\u\3\b\9\9\9\r\h\q\x\s\x\1\z\2\t\g\d\i\m\y\p\x\7\e\5\t\w\4\b\l\1\7\l\h\k\2\n\b\5\z\d\t\p\z\b\v\4\g\3\8\g\8\e\f\r\g\z\e\5\t\4\r\d\k\6\b\4\3\j\l\s\9\v\4\2\9\q\1\o\v\t\r\u\j\j\i\i\r\c\z\7\3\s\w\j\o\e\8\o\a\y\3\z\6\x\6\w\q\e\j\d\0\j\2\1\q\2\6\3\r\l\s\u\w\u\t\j\u\w\0\o\1\r\5\q\f\1\z\r\b\5\6\z\5\r\g\7\g\m\c\e\5\v\8\v\2\b\g\2\m\w\4\p\z\e\8\4\d\u\9\v\s\0\6\3\2\5\w\9\y\l\3\l\7\p\9\p\7\e\t\8\j\6\d\k\q\n\v\v\b\8\e\i\b\0\9\3\n\4\v\a\g\o\9\o\n\q\f\d\6\3\p\l\l\o\6\s\x\f\5\m\f\h\c\s\h\x\3\0\4\n\2\c\w\k\5\o\s\y\b\d\4 ]] 00:23:48.451 00:23:48.451 real 0m3.999s 00:23:48.451 user 0m1.852s 00:23:48.451 sys 0m1.164s 00:23:48.452 11:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:48.452 ************************************ 00:23:48.452 END TEST dd_flags_misc 00:23:48.452 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:23:48.452 ************************************ 00:23:48.452 11:34:06 -- dd/posix.sh@131 -- # tests_forced_aio 00:23:48.452 11:34:06 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:23:48.452 * Second test run, disabling liburing, forcing AIO 00:23:48.452 11:34:06 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:23:48.452 11:34:06 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:23:48.452 11:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:48.452 11:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:48.452 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:23:48.452 ************************************ 00:23:48.452 START TEST dd_flag_append_forced_aio 00:23:48.452 ************************************ 00:23:48.452 11:34:06 -- common/autotest_common.sh@1114 -- # append 00:23:48.452 11:34:06 -- dd/posix.sh@16 -- # local dump0 00:23:48.452 11:34:06 -- dd/posix.sh@17 -- # local dump1 00:23:48.452 11:34:06 -- dd/posix.sh@19 -- # gen_bytes 32 
00:23:48.452 11:34:06 -- dd/common.sh@98 -- # xtrace_disable 00:23:48.452 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:23:48.452 11:34:06 -- dd/posix.sh@19 -- # dump0=xj75ugcyjjjcv7awx9v6rj93e6viwmih 00:23:48.452 11:34:06 -- dd/posix.sh@20 -- # gen_bytes 32 00:23:48.452 11:34:06 -- dd/common.sh@98 -- # xtrace_disable 00:23:48.452 11:34:06 -- common/autotest_common.sh@10 -- # set +x 00:23:48.452 11:34:06 -- dd/posix.sh@20 -- # dump1=m862lph9p7fds1qu0eggbv11u0osihe2 00:23:48.452 11:34:06 -- dd/posix.sh@22 -- # printf %s xj75ugcyjjjcv7awx9v6rj93e6viwmih 00:23:48.452 11:34:06 -- dd/posix.sh@23 -- # printf %s m862lph9p7fds1qu0eggbv11u0osihe2 00:23:48.452 11:34:06 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:23:48.452 [2024-11-26 11:34:06.612503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:48.452 [2024-11-26 11:34:06.612677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98542 ] 00:23:48.711 [2024-11-26 11:34:06.782063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.711 [2024-11-26 11:34:06.820738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.711  [2024-11-26T11:34:07.200Z] Copying: 32/32 [B] (average 31 kBps) 00:23:48.970 00:23:48.970 11:34:07 -- dd/posix.sh@27 -- # [[ m862lph9p7fds1qu0eggbv11u0osihe2xj75ugcyjjjcv7awx9v6rj93e6viwmih == \m\8\6\2\l\p\h\9\p\7\f\d\s\1\q\u\0\e\g\g\b\v\1\1\u\0\o\s\i\h\e\2\x\j\7\5\u\g\c\y\j\j\j\c\v\7\a\w\x\9\v\6\r\j\9\3\e\6\v\i\w\m\i\h ]] 00:23:48.970 00:23:48.970 real 0m0.513s 00:23:48.970 user 0m0.228s 00:23:48.970 sys 0m0.166s 00:23:48.970 11:34:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:48.970 ************************************ 00:23:48.970 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:23:48.970 END TEST dd_flag_append_forced_aio 00:23:48.970 ************************************ 00:23:48.970 11:34:07 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:23:48.970 11:34:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:48.970 11:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:48.970 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:23:48.970 ************************************ 00:23:48.970 START TEST dd_flag_directory_forced_aio 00:23:48.970 ************************************ 00:23:48.970 11:34:07 -- common/autotest_common.sh@1114 -- # directory 00:23:48.970 11:34:07 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:48.970 11:34:07 -- common/autotest_common.sh@650 -- # local es=0 00:23:48.970 11:34:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:48.970 11:34:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:48.970 11:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.970 11:34:07 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:48.970 11:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.970 11:34:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:48.970 11:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.970 11:34:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:48.970 11:34:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:48.970 11:34:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:48.970 [2024-11-26 11:34:07.165921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:48.970 [2024-11-26 11:34:07.166062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98564 ] 00:23:49.229 [2024-11-26 11:34:07.314998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.229 [2024-11-26 11:34:07.353075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.229 [2024-11-26 11:34:07.399161] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:49.229 [2024-11-26 11:34:07.399225] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:49.229 [2024-11-26 11:34:07.399241] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:49.229 [2024-11-26 11:34:07.464674] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:49.488 11:34:07 -- common/autotest_common.sh@653 -- # es=236 00:23:49.488 11:34:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:49.488 11:34:07 -- common/autotest_common.sh@662 -- # es=108 00:23:49.488 11:34:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:49.488 11:34:07 -- common/autotest_common.sh@670 -- # es=1 00:23:49.488 11:34:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:49.488 11:34:07 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:49.488 11:34:07 -- common/autotest_common.sh@650 -- # local es=0 00:23:49.488 11:34:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:49.488 11:34:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.488 11:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.488 11:34:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.488 11:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.488 11:34:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.488 11:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.488 11:34:07 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.488 11:34:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:49.488 11:34:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:49.488 [2024-11-26 11:34:07.630971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:49.488 [2024-11-26 11:34:07.631136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98579 ] 00:23:49.747 [2024-11-26 11:34:07.794511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.747 [2024-11-26 11:34:07.824529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.747 [2024-11-26 11:34:07.865958] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:49.747 [2024-11-26 11:34:07.866027] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:49.747 [2024-11-26 11:34:07.866045] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:49.747 [2024-11-26 11:34:07.931191] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:50.006 11:34:08 -- common/autotest_common.sh@653 -- # es=236 00:23:50.006 11:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.006 11:34:08 -- common/autotest_common.sh@662 -- # es=108 00:23:50.006 11:34:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:50.006 11:34:08 -- common/autotest_common.sh@670 -- # es=1 00:23:50.006 11:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.006 00:23:50.006 real 0m0.925s 00:23:50.006 user 0m0.446s 00:23:50.006 sys 0m0.278s 00:23:50.006 11:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:50.006 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 ************************************ 00:23:50.006 END TEST dd_flag_directory_forced_aio 00:23:50.006 ************************************ 00:23:50.006 11:34:08 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:23:50.006 11:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:50.006 11:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:50.006 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 ************************************ 00:23:50.006 START TEST dd_flag_nofollow_forced_aio 00:23:50.006 ************************************ 00:23:50.006 11:34:08 -- common/autotest_common.sh@1114 -- # nofollow 00:23:50.006 11:34:08 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:50.006 11:34:08 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:50.006 11:34:08 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:50.006 11:34:08 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:50.006 11:34:08 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:50.006 11:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:23:50.006 11:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:50.006 11:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.006 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.006 11:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.006 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.006 11:34:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.006 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.006 11:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.006 11:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:50.006 11:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:50.006 [2024-11-26 11:34:08.156715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:50.006 [2024-11-26 11:34:08.156895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98604 ] 00:23:50.265 [2024-11-26 11:34:08.322308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.265 [2024-11-26 11:34:08.363054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.265 [2024-11-26 11:34:08.414240] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:23:50.265 [2024-11-26 11:34:08.414325] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:23:50.265 [2024-11-26 11:34:08.414344] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:50.265 [2024-11-26 11:34:08.481132] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:50.524 11:34:08 -- common/autotest_common.sh@653 -- # es=216 00:23:50.524 11:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.524 11:34:08 -- common/autotest_common.sh@662 -- # es=88 00:23:50.524 11:34:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:50.524 11:34:08 -- common/autotest_common.sh@670 -- # es=1 00:23:50.524 11:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.524 11:34:08 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:50.524 11:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:23:50.524 11:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:50.524 11:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.524 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.524 11:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.524 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.524 11:34:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.524 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.524 11:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:50.524 11:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:50.524 11:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:50.524 [2024-11-26 11:34:08.644860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:50.525 [2024-11-26 11:34:08.645076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98615 ] 00:23:50.783 [2024-11-26 11:34:08.810614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.783 [2024-11-26 11:34:08.843617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.783 [2024-11-26 11:34:08.887082] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:23:50.783 [2024-11-26 11:34:08.887149] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:23:50.783 [2024-11-26 11:34:08.887168] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:50.783 [2024-11-26 11:34:08.954652] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:51.042 11:34:09 -- common/autotest_common.sh@653 -- # es=216 00:23:51.042 11:34:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.042 11:34:09 -- common/autotest_common.sh@662 -- # es=88 00:23:51.042 11:34:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:51.042 11:34:09 -- common/autotest_common.sh@670 -- # es=1 00:23:51.042 11:34:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.042 11:34:09 -- dd/posix.sh@46 -- # gen_bytes 512 00:23:51.042 11:34:09 -- dd/common.sh@98 -- # xtrace_disable 00:23:51.042 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:51.042 11:34:09 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:51.042 [2024-11-26 11:34:09.126992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
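The forced-AIO nofollow round closes the same way as the liburing one: after both O_NOFOLLOW opens fail with ELOOP as required, one last copy goes through the link without the flag and the contents are compared. A sketch of that closing assertion (names as in the trace):

    "${DD_APP[@]}" --if=dd.dump0.link --of=dd.dump1    # follows the symlink
    [[ $(< dd.dump1) == "$(< dd.dump0)" ]]             # the 512/512 copy must match

dd_flag_noatime_forced_aio, starting below, repeats the stat --printf=%X atime checks with --aio in place.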
00:23:51.042 [2024-11-26 11:34:09.127180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98623 ] 00:23:51.300 [2024-11-26 11:34:09.291268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.300 [2024-11-26 11:34:09.328984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.300  [2024-11-26T11:34:09.789Z] Copying: 512/512 [B] (average 500 kBps) 00:23:51.559 00:23:51.559 11:34:09 -- dd/posix.sh@49 -- # [[ f0hb8x8ftayjhc3yeya9hqtc9ugo22saijb9nx4949nt1t2of4kr0quu90z3zoh71h7zicw6i8l7bc7u99ckz9s3hvgf8b6zdbeuh023xuiqjqdqpnntwho9cmao965ed5d9f98rql8pfxnmhz61pbde73xg16zmhinkjaagkm5lb4fou6l7yfgl0mdoeiskqghtpoehk8mjkjuxwkqko7pv2tk5xrz2234zvjy2rac6nwkl5zv1nc2nslsagk5psr3nfzfu2q64fr8tu6c1o23z01kab02qtmwmks1v3qbz6t32ua216u179a8twph1i6tlm2tt2eo0stta0xgx213equ3fm2jll9p4fb1c0c1pwuma8ceamd87maazebib1vtarexi3wt9hnj4w9nxh7nxk8azzbo1tse60gpy4ba86gkwb6i8ggvrmzg0gpq2gnfm8dxx5di9t9hu13pet59s6z8oygmnjwk0a2ivn8ukqjasyhbxjdshl4faxbka == \f\0\h\b\8\x\8\f\t\a\y\j\h\c\3\y\e\y\a\9\h\q\t\c\9\u\g\o\2\2\s\a\i\j\b\9\n\x\4\9\4\9\n\t\1\t\2\o\f\4\k\r\0\q\u\u\9\0\z\3\z\o\h\7\1\h\7\z\i\c\w\6\i\8\l\7\b\c\7\u\9\9\c\k\z\9\s\3\h\v\g\f\8\b\6\z\d\b\e\u\h\0\2\3\x\u\i\q\j\q\d\q\p\n\n\t\w\h\o\9\c\m\a\o\9\6\5\e\d\5\d\9\f\9\8\r\q\l\8\p\f\x\n\m\h\z\6\1\p\b\d\e\7\3\x\g\1\6\z\m\h\i\n\k\j\a\a\g\k\m\5\l\b\4\f\o\u\6\l\7\y\f\g\l\0\m\d\o\e\i\s\k\q\g\h\t\p\o\e\h\k\8\m\j\k\j\u\x\w\k\q\k\o\7\p\v\2\t\k\5\x\r\z\2\2\3\4\z\v\j\y\2\r\a\c\6\n\w\k\l\5\z\v\1\n\c\2\n\s\l\s\a\g\k\5\p\s\r\3\n\f\z\f\u\2\q\6\4\f\r\8\t\u\6\c\1\o\2\3\z\0\1\k\a\b\0\2\q\t\m\w\m\k\s\1\v\3\q\b\z\6\t\3\2\u\a\2\1\6\u\1\7\9\a\8\t\w\p\h\1\i\6\t\l\m\2\t\t\2\e\o\0\s\t\t\a\0\x\g\x\2\1\3\e\q\u\3\f\m\2\j\l\l\9\p\4\f\b\1\c\0\c\1\p\w\u\m\a\8\c\e\a\m\d\8\7\m\a\a\z\e\b\i\b\1\v\t\a\r\e\x\i\3\w\t\9\h\n\j\4\w\9\n\x\h\7\n\x\k\8\a\z\z\b\o\1\t\s\e\6\0\g\p\y\4\b\a\8\6\g\k\w\b\6\i\8\g\g\v\r\m\z\g\0\g\p\q\2\g\n\f\m\8\d\x\x\5\d\i\9\t\9\h\u\1\3\p\e\t\5\9\s\6\z\8\o\y\g\m\n\j\w\k\0\a\2\i\v\n\8\u\k\q\j\a\s\y\h\b\x\j\d\s\h\l\4\f\a\x\b\k\a ]] 00:23:51.559 00:23:51.559 real 0m1.479s 00:23:51.559 user 0m0.709s 00:23:51.559 sys 0m0.449s 00:23:51.559 11:34:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:51.559 ************************************ 00:23:51.559 END TEST dd_flag_nofollow_forced_aio 00:23:51.559 ************************************ 00:23:51.559 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:51.559 11:34:09 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:23:51.559 11:34:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:51.559 11:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.559 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:51.559 ************************************ 00:23:51.559 START TEST dd_flag_noatime_forced_aio 00:23:51.559 ************************************ 00:23:51.559 11:34:09 -- common/autotest_common.sh@1114 -- # noatime 00:23:51.559 11:34:09 -- dd/posix.sh@53 -- # local atime_if 00:23:51.559 11:34:09 -- dd/posix.sh@54 -- # local atime_of 00:23:51.559 11:34:09 -- dd/posix.sh@58 -- # gen_bytes 512 00:23:51.559 11:34:09 -- dd/common.sh@98 -- # xtrace_disable 00:23:51.559 11:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:51.559 11:34:09 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:51.559 11:34:09 -- dd/posix.sh@60 -- 
# atime_if=1732620849 00:23:51.559 11:34:09 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:51.559 11:34:09 -- dd/posix.sh@61 -- # atime_of=1732620849 00:23:51.559 11:34:09 -- dd/posix.sh@66 -- # sleep 1 00:23:52.497 11:34:10 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:52.497 [2024-11-26 11:34:10.711252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:52.497 [2024-11-26 11:34:10.711435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98665 ] 00:23:52.756 [2024-11-26 11:34:10.878342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.757 [2024-11-26 11:34:10.916458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.757  [2024-11-26T11:34:11.246Z] Copying: 512/512 [B] (average 500 kBps) 00:23:53.016 00:23:53.016 11:34:11 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:53.016 11:34:11 -- dd/posix.sh@69 -- # (( atime_if == 1732620849 )) 00:23:53.016 11:34:11 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:53.016 11:34:11 -- dd/posix.sh@70 -- # (( atime_of == 1732620849 )) 00:23:53.016 11:34:11 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:53.016 [2024-11-26 11:34:11.226107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
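The noatime assertions here are pure stat(1) arithmetic: capture the source file's access time in epoch seconds (%X), sleep one second so a change would be observable, copy through spdk_dd with --iflag=noatime, and require the atime to be unchanged; the follow-up copy without the flag must then advance it. A condensed sketch of that pattern, assuming strict atime semantics on the mount (on relatime mounts the second assertion can be flaky):

  atime_before=$(stat --printf=%X dd.dump0)            # access time, epoch seconds
  sleep 1                                              # make an atime bump observable
  spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime read leaves atime alone
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) > atime_before ))    # plain read advances it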
00:23:53.016 [2024-11-26 11:34:11.226276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98672 ] 00:23:53.276 [2024-11-26 11:34:11.388712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.276 [2024-11-26 11:34:11.419750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.276  [2024-11-26T11:34:11.766Z] Copying: 512/512 [B] (average 500 kBps) 00:23:53.536 00:23:53.536 11:34:11 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:53.536 11:34:11 -- dd/posix.sh@73 -- # (( atime_if < 1732620851 )) 00:23:53.536 00:23:53.536 real 0m2.033s 00:23:53.536 user 0m0.460s 00:23:53.536 sys 0m0.337s 00:23:53.536 11:34:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:53.536 ************************************ 00:23:53.536 END TEST dd_flag_noatime_forced_aio 00:23:53.536 ************************************ 00:23:53.536 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.536 11:34:11 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:23:53.536 11:34:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:53.536 11:34:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.536 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.536 ************************************ 00:23:53.536 START TEST dd_flags_misc_forced_aio 00:23:53.536 ************************************ 00:23:53.536 11:34:11 -- common/autotest_common.sh@1114 -- # io 00:23:53.536 11:34:11 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:23:53.536 11:34:11 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:23:53.536 11:34:11 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:23:53.536 11:34:11 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:23:53.536 11:34:11 -- dd/posix.sh@86 -- # gen_bytes 512 00:23:53.536 11:34:11 -- dd/common.sh@98 -- # xtrace_disable 00:23:53.536 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.536 11:34:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:53.536 11:34:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:23:53.536 [2024-11-26 11:34:11.766542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:53.536 [2024-11-26 11:34:11.766684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98705 ] 00:23:53.796 [2024-11-26 11:34:11.918558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.796 [2024-11-26 11:34:11.950463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.796  [2024-11-26T11:34:12.285Z] Copying: 512/512 [B] (average 500 kBps) 00:23:54.055 00:23:54.055 11:34:12 -- dd/posix.sh@93 -- # [[ 6663knxh24akd4avw241efbscada44mxid6iyoshyev60ty1p0kq76d76iy2pq797rjy50b3g75ly901r6bgokban5egeasj0pc5rm4m36g4rj1fn3qj6keovwq7p4u9mx9by4nogzgbjl3fh23lvzf8ac2bzxlvamxt5331x25gww5p6cha53kumx46afr9bo1m09929p1bjyr0exynfu4y9mx9957yxo8zwtxou2qj2h2dmun6oo9seii3vx5gvlx3k0zfi4ikiwbksnh9ah2zrq0llxnn3tv8bnfjqy11g41m02j21app4j7ns6cumyvn7z7fbrj3zbikn7gglbsk6zjpemu7hd2u7y6csq4v77yygqrhjpkld00ltpsukt5rmwdp83vi2kf38y9b1x1w52wnplie3nwt0l3plz26dxv1oraedikd1k22l5qxlbwhp5sy8wemhizway0hzupf9g3mgpuu7kgzs645jfabuzxe9fs2itj9t967pg1b == \6\6\6\3\k\n\x\h\2\4\a\k\d\4\a\v\w\2\4\1\e\f\b\s\c\a\d\a\4\4\m\x\i\d\6\i\y\o\s\h\y\e\v\6\0\t\y\1\p\0\k\q\7\6\d\7\6\i\y\2\p\q\7\9\7\r\j\y\5\0\b\3\g\7\5\l\y\9\0\1\r\6\b\g\o\k\b\a\n\5\e\g\e\a\s\j\0\p\c\5\r\m\4\m\3\6\g\4\r\j\1\f\n\3\q\j\6\k\e\o\v\w\q\7\p\4\u\9\m\x\9\b\y\4\n\o\g\z\g\b\j\l\3\f\h\2\3\l\v\z\f\8\a\c\2\b\z\x\l\v\a\m\x\t\5\3\3\1\x\2\5\g\w\w\5\p\6\c\h\a\5\3\k\u\m\x\4\6\a\f\r\9\b\o\1\m\0\9\9\2\9\p\1\b\j\y\r\0\e\x\y\n\f\u\4\y\9\m\x\9\9\5\7\y\x\o\8\z\w\t\x\o\u\2\q\j\2\h\2\d\m\u\n\6\o\o\9\s\e\i\i\3\v\x\5\g\v\l\x\3\k\0\z\f\i\4\i\k\i\w\b\k\s\n\h\9\a\h\2\z\r\q\0\l\l\x\n\n\3\t\v\8\b\n\f\j\q\y\1\1\g\4\1\m\0\2\j\2\1\a\p\p\4\j\7\n\s\6\c\u\m\y\v\n\7\z\7\f\b\r\j\3\z\b\i\k\n\7\g\g\l\b\s\k\6\z\j\p\e\m\u\7\h\d\2\u\7\y\6\c\s\q\4\v\7\7\y\y\g\q\r\h\j\p\k\l\d\0\0\l\t\p\s\u\k\t\5\r\m\w\d\p\8\3\v\i\2\k\f\3\8\y\9\b\1\x\1\w\5\2\w\n\p\l\i\e\3\n\w\t\0\l\3\p\l\z\2\6\d\x\v\1\o\r\a\e\d\i\k\d\1\k\2\2\l\5\q\x\l\b\w\h\p\5\s\y\8\w\e\m\h\i\z\w\a\y\0\h\z\u\p\f\9\g\3\m\g\p\u\u\7\k\g\z\s\6\4\5\j\f\a\b\u\z\x\e\9\f\s\2\i\t\j\9\t\9\6\7\p\g\1\b ]] 00:23:54.055 11:34:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:54.055 11:34:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:23:54.055 [2024-11-26 11:34:12.237393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:54.055 [2024-11-26 11:34:12.237557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98708 ] 00:23:54.315 [2024-11-26 11:34:12.400942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.315 [2024-11-26 11:34:12.438228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.315  [2024-11-26T11:34:12.804Z] Copying: 512/512 [B] (average 500 kBps) 00:23:54.574 00:23:54.574 11:34:12 -- dd/posix.sh@93 -- # [[ 6663knxh24akd4avw241efbscada44mxid6iyoshyev60ty1p0kq76d76iy2pq797rjy50b3g75ly901r6bgokban5egeasj0pc5rm4m36g4rj1fn3qj6keovwq7p4u9mx9by4nogzgbjl3fh23lvzf8ac2bzxlvamxt5331x25gww5p6cha53kumx46afr9bo1m09929p1bjyr0exynfu4y9mx9957yxo8zwtxou2qj2h2dmun6oo9seii3vx5gvlx3k0zfi4ikiwbksnh9ah2zrq0llxnn3tv8bnfjqy11g41m02j21app4j7ns6cumyvn7z7fbrj3zbikn7gglbsk6zjpemu7hd2u7y6csq4v77yygqrhjpkld00ltpsukt5rmwdp83vi2kf38y9b1x1w52wnplie3nwt0l3plz26dxv1oraedikd1k22l5qxlbwhp5sy8wemhizway0hzupf9g3mgpuu7kgzs645jfabuzxe9fs2itj9t967pg1b == \6\6\6\3\k\n\x\h\2\4\a\k\d\4\a\v\w\2\4\1\e\f\b\s\c\a\d\a\4\4\m\x\i\d\6\i\y\o\s\h\y\e\v\6\0\t\y\1\p\0\k\q\7\6\d\7\6\i\y\2\p\q\7\9\7\r\j\y\5\0\b\3\g\7\5\l\y\9\0\1\r\6\b\g\o\k\b\a\n\5\e\g\e\a\s\j\0\p\c\5\r\m\4\m\3\6\g\4\r\j\1\f\n\3\q\j\6\k\e\o\v\w\q\7\p\4\u\9\m\x\9\b\y\4\n\o\g\z\g\b\j\l\3\f\h\2\3\l\v\z\f\8\a\c\2\b\z\x\l\v\a\m\x\t\5\3\3\1\x\2\5\g\w\w\5\p\6\c\h\a\5\3\k\u\m\x\4\6\a\f\r\9\b\o\1\m\0\9\9\2\9\p\1\b\j\y\r\0\e\x\y\n\f\u\4\y\9\m\x\9\9\5\7\y\x\o\8\z\w\t\x\o\u\2\q\j\2\h\2\d\m\u\n\6\o\o\9\s\e\i\i\3\v\x\5\g\v\l\x\3\k\0\z\f\i\4\i\k\i\w\b\k\s\n\h\9\a\h\2\z\r\q\0\l\l\x\n\n\3\t\v\8\b\n\f\j\q\y\1\1\g\4\1\m\0\2\j\2\1\a\p\p\4\j\7\n\s\6\c\u\m\y\v\n\7\z\7\f\b\r\j\3\z\b\i\k\n\7\g\g\l\b\s\k\6\z\j\p\e\m\u\7\h\d\2\u\7\y\6\c\s\q\4\v\7\7\y\y\g\q\r\h\j\p\k\l\d\0\0\l\t\p\s\u\k\t\5\r\m\w\d\p\8\3\v\i\2\k\f\3\8\y\9\b\1\x\1\w\5\2\w\n\p\l\i\e\3\n\w\t\0\l\3\p\l\z\2\6\d\x\v\1\o\r\a\e\d\i\k\d\1\k\2\2\l\5\q\x\l\b\w\h\p\5\s\y\8\w\e\m\h\i\z\w\a\y\0\h\z\u\p\f\9\g\3\m\g\p\u\u\7\k\g\z\s\6\4\5\j\f\a\b\u\z\x\e\9\f\s\2\i\t\j\9\t\9\6\7\p\g\1\b ]] 00:23:54.574 11:34:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:54.574 11:34:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:23:54.574 [2024-11-26 11:34:12.732959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:54.575 [2024-11-26 11:34:12.733158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98722 ] 00:23:54.834 [2024-11-26 11:34:12.894966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.834 [2024-11-26 11:34:12.929351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.834  [2024-11-26T11:34:13.324Z] Copying: 512/512 [B] (average 100 kBps) 00:23:55.094 00:23:55.094 11:34:13 -- dd/posix.sh@93 -- # [[ 6663knxh24akd4avw241efbscada44mxid6iyoshyev60ty1p0kq76d76iy2pq797rjy50b3g75ly901r6bgokban5egeasj0pc5rm4m36g4rj1fn3qj6keovwq7p4u9mx9by4nogzgbjl3fh23lvzf8ac2bzxlvamxt5331x25gww5p6cha53kumx46afr9bo1m09929p1bjyr0exynfu4y9mx9957yxo8zwtxou2qj2h2dmun6oo9seii3vx5gvlx3k0zfi4ikiwbksnh9ah2zrq0llxnn3tv8bnfjqy11g41m02j21app4j7ns6cumyvn7z7fbrj3zbikn7gglbsk6zjpemu7hd2u7y6csq4v77yygqrhjpkld00ltpsukt5rmwdp83vi2kf38y9b1x1w52wnplie3nwt0l3plz26dxv1oraedikd1k22l5qxlbwhp5sy8wemhizway0hzupf9g3mgpuu7kgzs645jfabuzxe9fs2itj9t967pg1b == \6\6\6\3\k\n\x\h\2\4\a\k\d\4\a\v\w\2\4\1\e\f\b\s\c\a\d\a\4\4\m\x\i\d\6\i\y\o\s\h\y\e\v\6\0\t\y\1\p\0\k\q\7\6\d\7\6\i\y\2\p\q\7\9\7\r\j\y\5\0\b\3\g\7\5\l\y\9\0\1\r\6\b\g\o\k\b\a\n\5\e\g\e\a\s\j\0\p\c\5\r\m\4\m\3\6\g\4\r\j\1\f\n\3\q\j\6\k\e\o\v\w\q\7\p\4\u\9\m\x\9\b\y\4\n\o\g\z\g\b\j\l\3\f\h\2\3\l\v\z\f\8\a\c\2\b\z\x\l\v\a\m\x\t\5\3\3\1\x\2\5\g\w\w\5\p\6\c\h\a\5\3\k\u\m\x\4\6\a\f\r\9\b\o\1\m\0\9\9\2\9\p\1\b\j\y\r\0\e\x\y\n\f\u\4\y\9\m\x\9\9\5\7\y\x\o\8\z\w\t\x\o\u\2\q\j\2\h\2\d\m\u\n\6\o\o\9\s\e\i\i\3\v\x\5\g\v\l\x\3\k\0\z\f\i\4\i\k\i\w\b\k\s\n\h\9\a\h\2\z\r\q\0\l\l\x\n\n\3\t\v\8\b\n\f\j\q\y\1\1\g\4\1\m\0\2\j\2\1\a\p\p\4\j\7\n\s\6\c\u\m\y\v\n\7\z\7\f\b\r\j\3\z\b\i\k\n\7\g\g\l\b\s\k\6\z\j\p\e\m\u\7\h\d\2\u\7\y\6\c\s\q\4\v\7\7\y\y\g\q\r\h\j\p\k\l\d\0\0\l\t\p\s\u\k\t\5\r\m\w\d\p\8\3\v\i\2\k\f\3\8\y\9\b\1\x\1\w\5\2\w\n\p\l\i\e\3\n\w\t\0\l\3\p\l\z\2\6\d\x\v\1\o\r\a\e\d\i\k\d\1\k\2\2\l\5\q\x\l\b\w\h\p\5\s\y\8\w\e\m\h\i\z\w\a\y\0\h\z\u\p\f\9\g\3\m\g\p\u\u\7\k\g\z\s\6\4\5\j\f\a\b\u\z\x\e\9\f\s\2\i\t\j\9\t\9\6\7\p\g\1\b ]] 00:23:55.094 11:34:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:55.094 11:34:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:23:55.094 [2024-11-26 11:34:13.224578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:55.094 [2024-11-26 11:34:13.224739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98725 ] 00:23:55.355 [2024-11-26 11:34:13.385054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.355 [2024-11-26 11:34:13.419778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.355  [2024-11-26T11:34:13.844Z] Copying: 512/512 [B] (average 166 kBps) 00:23:55.614 00:23:55.614 11:34:13 -- dd/posix.sh@93 -- # [[ 6663knxh24akd4avw241efbscada44mxid6iyoshyev60ty1p0kq76d76iy2pq797rjy50b3g75ly901r6bgokban5egeasj0pc5rm4m36g4rj1fn3qj6keovwq7p4u9mx9by4nogzgbjl3fh23lvzf8ac2bzxlvamxt5331x25gww5p6cha53kumx46afr9bo1m09929p1bjyr0exynfu4y9mx9957yxo8zwtxou2qj2h2dmun6oo9seii3vx5gvlx3k0zfi4ikiwbksnh9ah2zrq0llxnn3tv8bnfjqy11g41m02j21app4j7ns6cumyvn7z7fbrj3zbikn7gglbsk6zjpemu7hd2u7y6csq4v77yygqrhjpkld00ltpsukt5rmwdp83vi2kf38y9b1x1w52wnplie3nwt0l3plz26dxv1oraedikd1k22l5qxlbwhp5sy8wemhizway0hzupf9g3mgpuu7kgzs645jfabuzxe9fs2itj9t967pg1b == \6\6\6\3\k\n\x\h\2\4\a\k\d\4\a\v\w\2\4\1\e\f\b\s\c\a\d\a\4\4\m\x\i\d\6\i\y\o\s\h\y\e\v\6\0\t\y\1\p\0\k\q\7\6\d\7\6\i\y\2\p\q\7\9\7\r\j\y\5\0\b\3\g\7\5\l\y\9\0\1\r\6\b\g\o\k\b\a\n\5\e\g\e\a\s\j\0\p\c\5\r\m\4\m\3\6\g\4\r\j\1\f\n\3\q\j\6\k\e\o\v\w\q\7\p\4\u\9\m\x\9\b\y\4\n\o\g\z\g\b\j\l\3\f\h\2\3\l\v\z\f\8\a\c\2\b\z\x\l\v\a\m\x\t\5\3\3\1\x\2\5\g\w\w\5\p\6\c\h\a\5\3\k\u\m\x\4\6\a\f\r\9\b\o\1\m\0\9\9\2\9\p\1\b\j\y\r\0\e\x\y\n\f\u\4\y\9\m\x\9\9\5\7\y\x\o\8\z\w\t\x\o\u\2\q\j\2\h\2\d\m\u\n\6\o\o\9\s\e\i\i\3\v\x\5\g\v\l\x\3\k\0\z\f\i\4\i\k\i\w\b\k\s\n\h\9\a\h\2\z\r\q\0\l\l\x\n\n\3\t\v\8\b\n\f\j\q\y\1\1\g\4\1\m\0\2\j\2\1\a\p\p\4\j\7\n\s\6\c\u\m\y\v\n\7\z\7\f\b\r\j\3\z\b\i\k\n\7\g\g\l\b\s\k\6\z\j\p\e\m\u\7\h\d\2\u\7\y\6\c\s\q\4\v\7\7\y\y\g\q\r\h\j\p\k\l\d\0\0\l\t\p\s\u\k\t\5\r\m\w\d\p\8\3\v\i\2\k\f\3\8\y\9\b\1\x\1\w\5\2\w\n\p\l\i\e\3\n\w\t\0\l\3\p\l\z\2\6\d\x\v\1\o\r\a\e\d\i\k\d\1\k\2\2\l\5\q\x\l\b\w\h\p\5\s\y\8\w\e\m\h\i\z\w\a\y\0\h\z\u\p\f\9\g\3\m\g\p\u\u\7\k\g\z\s\6\4\5\j\f\a\b\u\z\x\e\9\f\s\2\i\t\j\9\t\9\6\7\p\g\1\b ]] 00:23:55.614 11:34:13 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:23:55.614 11:34:13 -- dd/posix.sh@86 -- # gen_bytes 512 00:23:55.614 11:34:13 -- dd/common.sh@98 -- # xtrace_disable 00:23:55.614 11:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:55.614 11:34:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:55.614 11:34:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:23:55.614 [2024-11-26 11:34:13.743663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:55.615 [2024-11-26 11:34:13.743841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98738 ] 00:23:55.874 [2024-11-26 11:34:13.910618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.874 [2024-11-26 11:34:13.944099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.874  [2024-11-26T11:34:14.364Z] Copying: 512/512 [B] (average 500 kBps) 00:23:56.134 00:23:56.134 11:34:14 -- dd/posix.sh@93 -- # [[ z81sn6izavzfm1zs8w4riig5aptj93w9ju122jkgxixyrfp7okux5kechckyfrl76z908leptdezhgnmuypl38i20uolt0epg9suqjruqp0epaaxdnlanezcowyujfoqrqrlv8p8ki4lh8c35b6h1q9g9gljiqszw7qntizoebf97g1iej12ds9lkuvneg826v3eg8pyvx5l5dqo6nct49hwlslnsj48r3xhf05u8pd8cmp9vyo5sbvq7ncha5serbqb2b5q2xzd4wmv25zfo9zwgu6oth8cvi04p976acv80rbctqd414kze9qqlr1y6jbu8zg8t5h53ghdz2cxuc1p2aa8l883t1wkc5ykokb0e41ybncphixq5q130g2vd0vjf29litkh8h4r7wwsx3xhzuc09m7pwbn015bbl4mkvgt1tz7emxg9y7o4hf176ceejf71gprw8darzqawb8wcaly8q3k1jm11kqcftg0qesas3p8842rmm5tg124u == \z\8\1\s\n\6\i\z\a\v\z\f\m\1\z\s\8\w\4\r\i\i\g\5\a\p\t\j\9\3\w\9\j\u\1\2\2\j\k\g\x\i\x\y\r\f\p\7\o\k\u\x\5\k\e\c\h\c\k\y\f\r\l\7\6\z\9\0\8\l\e\p\t\d\e\z\h\g\n\m\u\y\p\l\3\8\i\2\0\u\o\l\t\0\e\p\g\9\s\u\q\j\r\u\q\p\0\e\p\a\a\x\d\n\l\a\n\e\z\c\o\w\y\u\j\f\o\q\r\q\r\l\v\8\p\8\k\i\4\l\h\8\c\3\5\b\6\h\1\q\9\g\9\g\l\j\i\q\s\z\w\7\q\n\t\i\z\o\e\b\f\9\7\g\1\i\e\j\1\2\d\s\9\l\k\u\v\n\e\g\8\2\6\v\3\e\g\8\p\y\v\x\5\l\5\d\q\o\6\n\c\t\4\9\h\w\l\s\l\n\s\j\4\8\r\3\x\h\f\0\5\u\8\p\d\8\c\m\p\9\v\y\o\5\s\b\v\q\7\n\c\h\a\5\s\e\r\b\q\b\2\b\5\q\2\x\z\d\4\w\m\v\2\5\z\f\o\9\z\w\g\u\6\o\t\h\8\c\v\i\0\4\p\9\7\6\a\c\v\8\0\r\b\c\t\q\d\4\1\4\k\z\e\9\q\q\l\r\1\y\6\j\b\u\8\z\g\8\t\5\h\5\3\g\h\d\z\2\c\x\u\c\1\p\2\a\a\8\l\8\8\3\t\1\w\k\c\5\y\k\o\k\b\0\e\4\1\y\b\n\c\p\h\i\x\q\5\q\1\3\0\g\2\v\d\0\v\j\f\2\9\l\i\t\k\h\8\h\4\r\7\w\w\s\x\3\x\h\z\u\c\0\9\m\7\p\w\b\n\0\1\5\b\b\l\4\m\k\v\g\t\1\t\z\7\e\m\x\g\9\y\7\o\4\h\f\1\7\6\c\e\e\j\f\7\1\g\p\r\w\8\d\a\r\z\q\a\w\b\8\w\c\a\l\y\8\q\3\k\1\j\m\1\1\k\q\c\f\t\g\0\q\e\s\a\s\3\p\8\8\4\2\r\m\m\5\t\g\1\2\4\u ]] 00:23:56.134 11:34:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:56.134 11:34:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:23:56.135 [2024-11-26 11:34:14.235945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:56.135 [2024-11-26 11:34:14.236123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98742 ] 00:23:56.394 [2024-11-26 11:34:14.392258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.394 [2024-11-26 11:34:14.427037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.394  [2024-11-26T11:34:14.884Z] Copying: 512/512 [B] (average 500 kBps) 00:23:56.654 00:23:56.655 11:34:14 -- dd/posix.sh@93 -- # [[ z81sn6izavzfm1zs8w4riig5aptj93w9ju122jkgxixyrfp7okux5kechckyfrl76z908leptdezhgnmuypl38i20uolt0epg9suqjruqp0epaaxdnlanezcowyujfoqrqrlv8p8ki4lh8c35b6h1q9g9gljiqszw7qntizoebf97g1iej12ds9lkuvneg826v3eg8pyvx5l5dqo6nct49hwlslnsj48r3xhf05u8pd8cmp9vyo5sbvq7ncha5serbqb2b5q2xzd4wmv25zfo9zwgu6oth8cvi04p976acv80rbctqd414kze9qqlr1y6jbu8zg8t5h53ghdz2cxuc1p2aa8l883t1wkc5ykokb0e41ybncphixq5q130g2vd0vjf29litkh8h4r7wwsx3xhzuc09m7pwbn015bbl4mkvgt1tz7emxg9y7o4hf176ceejf71gprw8darzqawb8wcaly8q3k1jm11kqcftg0qesas3p8842rmm5tg124u == \z\8\1\s\n\6\i\z\a\v\z\f\m\1\z\s\8\w\4\r\i\i\g\5\a\p\t\j\9\3\w\9\j\u\1\2\2\j\k\g\x\i\x\y\r\f\p\7\o\k\u\x\5\k\e\c\h\c\k\y\f\r\l\7\6\z\9\0\8\l\e\p\t\d\e\z\h\g\n\m\u\y\p\l\3\8\i\2\0\u\o\l\t\0\e\p\g\9\s\u\q\j\r\u\q\p\0\e\p\a\a\x\d\n\l\a\n\e\z\c\o\w\y\u\j\f\o\q\r\q\r\l\v\8\p\8\k\i\4\l\h\8\c\3\5\b\6\h\1\q\9\g\9\g\l\j\i\q\s\z\w\7\q\n\t\i\z\o\e\b\f\9\7\g\1\i\e\j\1\2\d\s\9\l\k\u\v\n\e\g\8\2\6\v\3\e\g\8\p\y\v\x\5\l\5\d\q\o\6\n\c\t\4\9\h\w\l\s\l\n\s\j\4\8\r\3\x\h\f\0\5\u\8\p\d\8\c\m\p\9\v\y\o\5\s\b\v\q\7\n\c\h\a\5\s\e\r\b\q\b\2\b\5\q\2\x\z\d\4\w\m\v\2\5\z\f\o\9\z\w\g\u\6\o\t\h\8\c\v\i\0\4\p\9\7\6\a\c\v\8\0\r\b\c\t\q\d\4\1\4\k\z\e\9\q\q\l\r\1\y\6\j\b\u\8\z\g\8\t\5\h\5\3\g\h\d\z\2\c\x\u\c\1\p\2\a\a\8\l\8\8\3\t\1\w\k\c\5\y\k\o\k\b\0\e\4\1\y\b\n\c\p\h\i\x\q\5\q\1\3\0\g\2\v\d\0\v\j\f\2\9\l\i\t\k\h\8\h\4\r\7\w\w\s\x\3\x\h\z\u\c\0\9\m\7\p\w\b\n\0\1\5\b\b\l\4\m\k\v\g\t\1\t\z\7\e\m\x\g\9\y\7\o\4\h\f\1\7\6\c\e\e\j\f\7\1\g\p\r\w\8\d\a\r\z\q\a\w\b\8\w\c\a\l\y\8\q\3\k\1\j\m\1\1\k\q\c\f\t\g\0\q\e\s\a\s\3\p\8\8\4\2\r\m\m\5\t\g\1\2\4\u ]] 00:23:56.655 11:34:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:56.655 11:34:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:23:56.655 [2024-11-26 11:34:14.709159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:56.655 [2024-11-26 11:34:14.709350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98751 ] 00:23:56.655 [2024-11-26 11:34:14.874097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.915 [2024-11-26 11:34:14.908757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.915  [2024-11-26T11:34:15.145Z] Copying: 512/512 [B] (average 125 kBps) 00:23:56.915 00:23:56.915 11:34:15 -- dd/posix.sh@93 -- # [[ z81sn6izavzfm1zs8w4riig5aptj93w9ju122jkgxixyrfp7okux5kechckyfrl76z908leptdezhgnmuypl38i20uolt0epg9suqjruqp0epaaxdnlanezcowyujfoqrqrlv8p8ki4lh8c35b6h1q9g9gljiqszw7qntizoebf97g1iej12ds9lkuvneg826v3eg8pyvx5l5dqo6nct49hwlslnsj48r3xhf05u8pd8cmp9vyo5sbvq7ncha5serbqb2b5q2xzd4wmv25zfo9zwgu6oth8cvi04p976acv80rbctqd414kze9qqlr1y6jbu8zg8t5h53ghdz2cxuc1p2aa8l883t1wkc5ykokb0e41ybncphixq5q130g2vd0vjf29litkh8h4r7wwsx3xhzuc09m7pwbn015bbl4mkvgt1tz7emxg9y7o4hf176ceejf71gprw8darzqawb8wcaly8q3k1jm11kqcftg0qesas3p8842rmm5tg124u == \z\8\1\s\n\6\i\z\a\v\z\f\m\1\z\s\8\w\4\r\i\i\g\5\a\p\t\j\9\3\w\9\j\u\1\2\2\j\k\g\x\i\x\y\r\f\p\7\o\k\u\x\5\k\e\c\h\c\k\y\f\r\l\7\6\z\9\0\8\l\e\p\t\d\e\z\h\g\n\m\u\y\p\l\3\8\i\2\0\u\o\l\t\0\e\p\g\9\s\u\q\j\r\u\q\p\0\e\p\a\a\x\d\n\l\a\n\e\z\c\o\w\y\u\j\f\o\q\r\q\r\l\v\8\p\8\k\i\4\l\h\8\c\3\5\b\6\h\1\q\9\g\9\g\l\j\i\q\s\z\w\7\q\n\t\i\z\o\e\b\f\9\7\g\1\i\e\j\1\2\d\s\9\l\k\u\v\n\e\g\8\2\6\v\3\e\g\8\p\y\v\x\5\l\5\d\q\o\6\n\c\t\4\9\h\w\l\s\l\n\s\j\4\8\r\3\x\h\f\0\5\u\8\p\d\8\c\m\p\9\v\y\o\5\s\b\v\q\7\n\c\h\a\5\s\e\r\b\q\b\2\b\5\q\2\x\z\d\4\w\m\v\2\5\z\f\o\9\z\w\g\u\6\o\t\h\8\c\v\i\0\4\p\9\7\6\a\c\v\8\0\r\b\c\t\q\d\4\1\4\k\z\e\9\q\q\l\r\1\y\6\j\b\u\8\z\g\8\t\5\h\5\3\g\h\d\z\2\c\x\u\c\1\p\2\a\a\8\l\8\8\3\t\1\w\k\c\5\y\k\o\k\b\0\e\4\1\y\b\n\c\p\h\i\x\q\5\q\1\3\0\g\2\v\d\0\v\j\f\2\9\l\i\t\k\h\8\h\4\r\7\w\w\s\x\3\x\h\z\u\c\0\9\m\7\p\w\b\n\0\1\5\b\b\l\4\m\k\v\g\t\1\t\z\7\e\m\x\g\9\y\7\o\4\h\f\1\7\6\c\e\e\j\f\7\1\g\p\r\w\8\d\a\r\z\q\a\w\b\8\w\c\a\l\y\8\q\3\k\1\j\m\1\1\k\q\c\f\t\g\0\q\e\s\a\s\3\p\8\8\4\2\r\m\m\5\t\g\1\2\4\u ]] 00:23:56.915 11:34:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:56.915 11:34:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:23:57.173 [2024-11-26 11:34:15.200692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:57.173 [2024-11-26 11:34:15.200861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98759 ] 00:23:57.173 [2024-11-26 11:34:15.365666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.173 [2024-11-26 11:34:15.401993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.431  [2024-11-26T11:34:15.661Z] Copying: 512/512 [B] (average 125 kBps) 00:23:57.431 00:23:57.431 11:34:15 -- dd/posix.sh@93 -- # [[ z81sn6izavzfm1zs8w4riig5aptj93w9ju122jkgxixyrfp7okux5kechckyfrl76z908leptdezhgnmuypl38i20uolt0epg9suqjruqp0epaaxdnlanezcowyujfoqrqrlv8p8ki4lh8c35b6h1q9g9gljiqszw7qntizoebf97g1iej12ds9lkuvneg826v3eg8pyvx5l5dqo6nct49hwlslnsj48r3xhf05u8pd8cmp9vyo5sbvq7ncha5serbqb2b5q2xzd4wmv25zfo9zwgu6oth8cvi04p976acv80rbctqd414kze9qqlr1y6jbu8zg8t5h53ghdz2cxuc1p2aa8l883t1wkc5ykokb0e41ybncphixq5q130g2vd0vjf29litkh8h4r7wwsx3xhzuc09m7pwbn015bbl4mkvgt1tz7emxg9y7o4hf176ceejf71gprw8darzqawb8wcaly8q3k1jm11kqcftg0qesas3p8842rmm5tg124u == \z\8\1\s\n\6\i\z\a\v\z\f\m\1\z\s\8\w\4\r\i\i\g\5\a\p\t\j\9\3\w\9\j\u\1\2\2\j\k\g\x\i\x\y\r\f\p\7\o\k\u\x\5\k\e\c\h\c\k\y\f\r\l\7\6\z\9\0\8\l\e\p\t\d\e\z\h\g\n\m\u\y\p\l\3\8\i\2\0\u\o\l\t\0\e\p\g\9\s\u\q\j\r\u\q\p\0\e\p\a\a\x\d\n\l\a\n\e\z\c\o\w\y\u\j\f\o\q\r\q\r\l\v\8\p\8\k\i\4\l\h\8\c\3\5\b\6\h\1\q\9\g\9\g\l\j\i\q\s\z\w\7\q\n\t\i\z\o\e\b\f\9\7\g\1\i\e\j\1\2\d\s\9\l\k\u\v\n\e\g\8\2\6\v\3\e\g\8\p\y\v\x\5\l\5\d\q\o\6\n\c\t\4\9\h\w\l\s\l\n\s\j\4\8\r\3\x\h\f\0\5\u\8\p\d\8\c\m\p\9\v\y\o\5\s\b\v\q\7\n\c\h\a\5\s\e\r\b\q\b\2\b\5\q\2\x\z\d\4\w\m\v\2\5\z\f\o\9\z\w\g\u\6\o\t\h\8\c\v\i\0\4\p\9\7\6\a\c\v\8\0\r\b\c\t\q\d\4\1\4\k\z\e\9\q\q\l\r\1\y\6\j\b\u\8\z\g\8\t\5\h\5\3\g\h\d\z\2\c\x\u\c\1\p\2\a\a\8\l\8\8\3\t\1\w\k\c\5\y\k\o\k\b\0\e\4\1\y\b\n\c\p\h\i\x\q\5\q\1\3\0\g\2\v\d\0\v\j\f\2\9\l\i\t\k\h\8\h\4\r\7\w\w\s\x\3\x\h\z\u\c\0\9\m\7\p\w\b\n\0\1\5\b\b\l\4\m\k\v\g\t\1\t\z\7\e\m\x\g\9\y\7\o\4\h\f\1\7\6\c\e\e\j\f\7\1\g\p\r\w\8\d\a\r\z\q\a\w\b\8\w\c\a\l\y\8\q\3\k\1\j\m\1\1\k\q\c\f\t\g\0\q\e\s\a\s\3\p\8\8\4\2\r\m\m\5\t\g\1\2\4\u ]] 00:23:57.431 00:23:57.431 real 0m3.944s 00:23:57.431 user 0m1.781s 00:23:57.431 sys 0m1.196s 00:23:57.431 11:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:57.431 ************************************ 00:23:57.431 END TEST dd_flags_misc_forced_aio 00:23:57.431 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.431 ************************************ 00:23:57.690 11:34:15 -- dd/posix.sh@1 -- # cleanup 00:23:57.690 11:34:15 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:57.690 11:34:15 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:57.690 00:23:57.690 real 0m18.481s 00:23:57.690 user 0m7.559s 00:23:57.690 sys 0m5.189s 00:23:57.690 11:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:57.690 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.690 ************************************ 00:23:57.690 END TEST spdk_dd_posix 00:23:57.690 ************************************ 00:23:57.690 11:34:15 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:23:57.690 11:34:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:57.690 11:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:23:57.690 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.690 ************************************ 00:23:57.691 START TEST spdk_dd_malloc 00:23:57.691 ************************************ 00:23:57.691 11:34:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:23:57.691 * Looking for test storage... 00:23:57.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:57.691 11:34:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:57.691 11:34:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:57.691 11:34:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:57.691 11:34:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:57.691 11:34:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:57.691 11:34:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:57.691 11:34:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:57.691 11:34:15 -- scripts/common.sh@335 -- # IFS=.-: 00:23:57.691 11:34:15 -- scripts/common.sh@335 -- # read -ra ver1 00:23:57.691 11:34:15 -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.691 11:34:15 -- scripts/common.sh@336 -- # read -ra ver2 00:23:57.691 11:34:15 -- scripts/common.sh@337 -- # local 'op=<' 00:23:57.691 11:34:15 -- scripts/common.sh@339 -- # ver1_l=2 00:23:57.691 11:34:15 -- scripts/common.sh@340 -- # ver2_l=1 00:23:57.691 11:34:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:57.691 11:34:15 -- scripts/common.sh@343 -- # case "$op" in 00:23:57.691 11:34:15 -- scripts/common.sh@344 -- # : 1 00:23:57.691 11:34:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:57.691 11:34:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.691 11:34:15 -- scripts/common.sh@364 -- # decimal 1 00:23:57.691 11:34:15 -- scripts/common.sh@352 -- # local d=1 00:23:57.691 11:34:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.691 11:34:15 -- scripts/common.sh@354 -- # echo 1 00:23:57.691 11:34:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:57.691 11:34:15 -- scripts/common.sh@365 -- # decimal 2 00:23:57.691 11:34:15 -- scripts/common.sh@352 -- # local d=2 00:23:57.691 11:34:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.691 11:34:15 -- scripts/common.sh@354 -- # echo 2 00:23:57.691 11:34:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:57.691 11:34:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:57.691 11:34:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:57.691 11:34:15 -- scripts/common.sh@367 -- # return 0 00:23:57.691 11:34:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.691 11:34:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.691 --rc genhtml_branch_coverage=1 00:23:57.691 --rc genhtml_function_coverage=1 00:23:57.691 --rc genhtml_legend=1 00:23:57.691 --rc geninfo_all_blocks=1 00:23:57.691 --rc geninfo_unexecuted_blocks=1 00:23:57.691 00:23:57.691 ' 00:23:57.691 11:34:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.691 --rc genhtml_branch_coverage=1 00:23:57.691 --rc genhtml_function_coverage=1 00:23:57.691 --rc genhtml_legend=1 00:23:57.691 --rc geninfo_all_blocks=1 00:23:57.691 --rc geninfo_unexecuted_blocks=1 00:23:57.691 00:23:57.691 ' 00:23:57.691 11:34:15 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:23:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.691 --rc genhtml_branch_coverage=1 00:23:57.691 --rc genhtml_function_coverage=1 00:23:57.691 --rc genhtml_legend=1 00:23:57.691 --rc geninfo_all_blocks=1 00:23:57.691 --rc geninfo_unexecuted_blocks=1 00:23:57.691 00:23:57.691 ' 00:23:57.691 11:34:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.691 --rc genhtml_branch_coverage=1 00:23:57.691 --rc genhtml_function_coverage=1 00:23:57.691 --rc genhtml_legend=1 00:23:57.691 --rc geninfo_all_blocks=1 00:23:57.691 --rc geninfo_unexecuted_blocks=1 00:23:57.691 00:23:57.691 ' 00:23:57.691 11:34:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.951 11:34:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.951 11:34:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.951 11:34:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.951 11:34:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:57.951 11:34:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:57.951 11:34:15 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:57.951 11:34:15 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:57.951 11:34:15 -- paths/export.sh@6 -- # export PATH 00:23:57.951 11:34:15 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:57.951 11:34:15 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:23:57.951 11:34:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:57.951 11:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:57.951 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.951 ************************************ 00:23:57.951 START TEST dd_malloc_copy 00:23:57.951 ************************************ 00:23:57.951 11:34:15 -- common/autotest_common.sh@1114 -- # malloc_copy 00:23:57.951 11:34:15 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:23:57.951 11:34:15 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:23:57.951 11:34:15 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:23:57.951 11:34:15 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:23:57.951 11:34:15 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:23:57.951 11:34:15 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:23:57.951 11:34:15 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:23:57.951 11:34:15 -- dd/malloc.sh@28 -- # gen_conf 00:23:57.951 11:34:15 -- dd/common.sh@31 -- # xtrace_disable 00:23:57.951 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.951 { 00:23:57.951 "subsystems": [ 00:23:57.951 { 00:23:57.951 "subsystem": "bdev", 00:23:57.951 "config": [ 00:23:57.951 { 00:23:57.951 "params": { 00:23:57.951 "block_size": 512, 00:23:57.951 "num_blocks": 1048576, 00:23:57.951 "name": "malloc0" 00:23:57.951 }, 00:23:57.951 "method": "bdev_malloc_create" 00:23:57.951 }, 00:23:57.951 { 00:23:57.951 "params": { 00:23:57.951 "block_size": 512, 00:23:57.951 "num_blocks": 1048576, 00:23:57.951 "name": "malloc1" 00:23:57.951 }, 00:23:57.951 "method": "bdev_malloc_create" 
00:23:57.951 }, 00:23:57.951 { 00:23:57.951 "method": "bdev_wait_for_examine" 00:23:57.951 } 00:23:57.951 ] 00:23:57.951 } 00:23:57.951 ] 00:23:57.951 } 00:23:57.951 [2024-11-26 11:34:15.998658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:57.951 [2024-11-26 11:34:15.998846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98836 ] 00:23:57.951 [2024-11-26 11:34:16.163138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.211 [2024-11-26 11:34:16.196594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.590  [2024-11-26T11:34:18.760Z] Copying: 208/512 [MB] (208 MBps) [2024-11-26T11:34:19.019Z] Copying: 418/512 [MB] (210 MBps) [2024-11-26T11:34:19.280Z] Copying: 512/512 [MB] (average 208 MBps) 00:24:01.050 00:24:01.050 11:34:19 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:24:01.050 11:34:19 -- dd/malloc.sh@33 -- # gen_conf 00:24:01.050 11:34:19 -- dd/common.sh@31 -- # xtrace_disable 00:24:01.050 11:34:19 -- common/autotest_common.sh@10 -- # set +x 00:24:01.050 { 00:24:01.050 "subsystems": [ 00:24:01.050 { 00:24:01.050 "subsystem": "bdev", 00:24:01.050 "config": [ 00:24:01.050 { 00:24:01.050 "params": { 00:24:01.050 "block_size": 512, 00:24:01.050 "num_blocks": 1048576, 00:24:01.050 "name": "malloc0" 00:24:01.050 }, 00:24:01.050 "method": "bdev_malloc_create" 00:24:01.050 }, 00:24:01.050 { 00:24:01.050 "params": { 00:24:01.050 "block_size": 512, 00:24:01.050 "num_blocks": 1048576, 00:24:01.050 "name": "malloc1" 00:24:01.050 }, 00:24:01.050 "method": "bdev_malloc_create" 00:24:01.050 }, 00:24:01.050 { 00:24:01.050 "method": "bdev_wait_for_examine" 00:24:01.050 } 00:24:01.050 ] 00:24:01.050 } 00:24:01.050 ] 00:24:01.050 } 00:24:01.050 [2024-11-26 11:34:19.261369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
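Both malloc passes drive spdk_dd entirely by bdev name: the JSON blob streamed over /dev/fd/62 declares two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each), and --ib/--ob select them, so no POSIX file is touched. The same invocation in standalone form, writing the config to a regular file instead of a process-substitution fd (a convenience for the sketch, not what the suite does):

  # malloc.json, matching the config dumps above:
  # {"subsystems":[{"subsystem":"bdev","config":[
  #   {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
  #   {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc1"},"method":"bdev_malloc_create"},
  #   {"method":"bdev_wait_for_examine"}]}]}
  spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json   # and back: --ib=malloc1 --ob=malloc0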
00:24:01.050 [2024-11-26 11:34:19.261524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98879 ] 00:24:01.309 [2024-11-26 11:34:19.425956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.309 [2024-11-26 11:34:19.458827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.689  [2024-11-26T11:34:21.858Z] Copying: 204/512 [MB] (204 MBps) [2024-11-26T11:34:22.427Z] Copying: 410/512 [MB] (205 MBps) [2024-11-26T11:34:22.686Z] Copying: 512/512 [MB] (average 205 MBps) 00:24:04.456 00:24:04.456 00:24:04.456 real 0m6.572s 00:24:04.456 user 0m5.710s 00:24:04.456 sys 0m0.661s 00:24:04.456 ************************************ 00:24:04.456 END TEST dd_malloc_copy 00:24:04.456 ************************************ 00:24:04.456 11:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:04.456 11:34:22 -- common/autotest_common.sh@10 -- # set +x 00:24:04.456 00:24:04.456 real 0m6.800s 00:24:04.456 user 0m5.836s 00:24:04.456 sys 0m0.775s 00:24:04.456 11:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:04.456 ************************************ 00:24:04.456 END TEST spdk_dd_malloc 00:24:04.456 11:34:22 -- common/autotest_common.sh@10 -- # set +x 00:24:04.456 ************************************ 00:24:04.456 11:34:22 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:24:04.456 11:34:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:04.456 11:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.456 11:34:22 -- common/autotest_common.sh@10 -- # set +x 00:24:04.456 ************************************ 00:24:04.456 START TEST spdk_dd_bdev_to_bdev 00:24:04.456 ************************************ 00:24:04.456 11:34:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:24:04.456 * Looking for test storage... 00:24:04.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:04.456 11:34:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:04.456 11:34:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:04.456 11:34:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:04.716 11:34:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:04.716 11:34:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:04.716 11:34:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:04.716 11:34:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:04.716 11:34:22 -- scripts/common.sh@335 -- # IFS=.-: 00:24:04.716 11:34:22 -- scripts/common.sh@335 -- # read -ra ver1 00:24:04.716 11:34:22 -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.716 11:34:22 -- scripts/common.sh@336 -- # read -ra ver2 00:24:04.716 11:34:22 -- scripts/common.sh@337 -- # local 'op=<' 00:24:04.716 11:34:22 -- scripts/common.sh@339 -- # ver1_l=2 00:24:04.716 11:34:22 -- scripts/common.sh@340 -- # ver2_l=1 00:24:04.716 11:34:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:04.716 11:34:22 -- scripts/common.sh@343 -- # case "$op" in 00:24:04.716 11:34:22 -- scripts/common.sh@344 -- # : 1 00:24:04.716 11:34:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:04.716 11:34:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.716 11:34:22 -- scripts/common.sh@364 -- # decimal 1 00:24:04.716 11:34:22 -- scripts/common.sh@352 -- # local d=1 00:24:04.716 11:34:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.716 11:34:22 -- scripts/common.sh@354 -- # echo 1 00:24:04.716 11:34:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:04.716 11:34:22 -- scripts/common.sh@365 -- # decimal 2 00:24:04.716 11:34:22 -- scripts/common.sh@352 -- # local d=2 00:24:04.716 11:34:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.716 11:34:22 -- scripts/common.sh@354 -- # echo 2 00:24:04.716 11:34:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:04.716 11:34:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:04.716 11:34:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:04.716 11:34:22 -- scripts/common.sh@367 -- # return 0 00:24:04.716 11:34:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.716 11:34:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:04.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.716 --rc genhtml_branch_coverage=1 00:24:04.716 --rc genhtml_function_coverage=1 00:24:04.716 --rc genhtml_legend=1 00:24:04.716 --rc geninfo_all_blocks=1 00:24:04.716 --rc geninfo_unexecuted_blocks=1 00:24:04.716 00:24:04.716 ' 00:24:04.716 11:34:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:04.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.717 --rc genhtml_branch_coverage=1 00:24:04.717 --rc genhtml_function_coverage=1 00:24:04.717 --rc genhtml_legend=1 00:24:04.717 --rc geninfo_all_blocks=1 00:24:04.717 --rc geninfo_unexecuted_blocks=1 00:24:04.717 00:24:04.717 ' 00:24:04.717 11:34:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:04.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.717 --rc genhtml_branch_coverage=1 00:24:04.717 --rc genhtml_function_coverage=1 00:24:04.717 --rc genhtml_legend=1 00:24:04.717 --rc geninfo_all_blocks=1 00:24:04.717 --rc geninfo_unexecuted_blocks=1 00:24:04.717 00:24:04.717 ' 00:24:04.717 11:34:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:04.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.717 --rc genhtml_branch_coverage=1 00:24:04.717 --rc genhtml_function_coverage=1 00:24:04.717 --rc genhtml_legend=1 00:24:04.717 --rc geninfo_all_blocks=1 00:24:04.717 --rc geninfo_unexecuted_blocks=1 00:24:04.717 00:24:04.717 ' 00:24:04.717 11:34:22 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.717 11:34:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.717 11:34:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.717 11:34:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.717 11:34:22 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:04.717 11:34:22 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:04.717 11:34:22 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:04.717 11:34:22 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:04.717 11:34:22 -- paths/export.sh@6 -- # export PATH 00:24:04.717 11:34:22 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:24:04.717 11:34:22 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:24:04.717 [2024-11-26 11:34:22.830617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
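The bdev_to_bdev prologue wires up the two targets the following tests copy between: Nvme0n1, namespace 1 of the controller attached at PCI 0000:00:06.0, and aio1, an AIO bdev (4096-byte blocks) over a plain file seeded here with 256 MiB of zeros at a 1 MiB block size. A sketch of just that seeding step, where TESTDIR abbreviates the /home/vagrant/spdk_repo/spdk/test/dd path shown in the trace:

  # create the 256 MiB zero-filled backing file consumed by bdev_aio_create as "aio1"
  spdk_dd --if=/dev/zero --of=$TESTDIR/aio1 --bs=1048576 --count=256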
00:24:04.717 [2024-11-26 11:34:22.830795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98990 ] 00:24:04.976 [2024-11-26 11:34:22.994862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.976 [2024-11-26 11:34:23.030009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.976  [2024-11-26T11:34:23.465Z] Copying: 256/256 [MB] (average 1939 MBps) 00:24:05.235 00:24:05.235 11:34:23 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:05.235 11:34:23 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:05.235 11:34:23 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:24:05.235 11:34:23 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:24:05.235 11:34:23 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:24:05.235 11:34:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:24:05.236 11:34:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:05.236 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:05.236 ************************************ 00:24:05.236 START TEST dd_inflate_file 00:24:05.236 ************************************ 00:24:05.236 11:34:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:24:05.236 [2024-11-26 11:34:23.465140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:05.236 [2024-11-26 11:34:23.465327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98999 ] 00:24:05.495 [2024-11-26 11:34:23.638951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.495 [2024-11-26 11:34:23.669057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.753  [2024-11-26T11:34:23.983Z] Copying: 64/64 [MB] (average 1684 MBps) 00:24:05.753 00:24:05.753 00:24:05.753 real 0m0.535s 00:24:05.753 user 0m0.221s 00:24:05.753 sys 0m0.199s 00:24:05.753 11:34:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:05.753 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:05.753 ************************************ 00:24:05.753 END TEST dd_inflate_file 00:24:05.753 ************************************ 00:24:05.753 11:34:23 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:24:05.753 11:34:23 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:24:05.753 11:34:23 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:24:05.753 11:34:23 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:24:05.753 11:34:23 -- dd/common.sh@31 -- # xtrace_disable 00:24:05.753 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:05.753 11:34:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:05.753 11:34:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:05.753 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.012 ************************************ 00:24:06.012 START TEST dd_copy_to_out_bdev 00:24:06.012 ************************************ 00:24:06.012 11:34:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:24:06.012 { 00:24:06.012 "subsystems": [ 00:24:06.012 { 00:24:06.012 "subsystem": "bdev", 00:24:06.012 "config": [ 00:24:06.012 { 00:24:06.012 "params": { 00:24:06.012 "block_size": 4096, 00:24:06.012 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:06.012 "name": "aio1" 00:24:06.012 }, 00:24:06.012 "method": "bdev_aio_create" 00:24:06.012 }, 00:24:06.012 { 00:24:06.012 "params": { 00:24:06.012 "trtype": "pcie", 00:24:06.012 "traddr": "0000:00:06.0", 00:24:06.012 "name": "Nvme0" 00:24:06.012 }, 00:24:06.012 "method": "bdev_nvme_attach_controller" 00:24:06.012 }, 00:24:06.012 { 00:24:06.012 "method": "bdev_wait_for_examine" 00:24:06.012 } 00:24:06.012 ] 00:24:06.012 } 00:24:06.012 ] 00:24:06.012 } 00:24:06.012 [2024-11-26 11:34:24.053169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
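The size check just above is exact arithmetic: 67108891 bytes is 64 MiB (67108864) of appended zeros plus the 27-byte magic line ('This Is Our Magic, find it' and its newline) written first, which is what dd_inflate_file's --oflag=append run produced. The copy_to_out_bdev step then pushes that file into the NVMe bdev by name. A sketch of the sequence; the redirection on the echo is implied by the later byte count rather than visible in the xtrace, and bdev.json stands for the aio1/Nvme0 config dumped above:

  echo 'This Is Our Magic, find it' > dd.dump0                     # 27 bytes with newline
  spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
  [[ $(wc -c < dd.dump0) -eq $((64 * 1024 * 1024 + 27)) ]]
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json bdev.json              # file into NVMe bdev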
00:24:06.012 [2024-11-26 11:34:24.053378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99033 ] 00:24:06.012 [2024-11-26 11:34:24.218864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.271 [2024-11-26 11:34:24.255429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.209  [2024-11-26T11:34:26.007Z] Copying: 39/64 [MB] (39 MBps) [2024-11-26T11:34:26.266Z] Copying: 64/64 [MB] (average 40 MBps) 00:24:08.036 00:24:08.036 00:24:08.036 real 0m2.232s 00:24:08.036 user 0m1.867s 00:24:08.036 sys 0m0.250s 00:24:08.036 11:34:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:08.036 11:34:26 -- common/autotest_common.sh@10 -- # set +x 00:24:08.036 ************************************ 00:24:08.036 END TEST dd_copy_to_out_bdev 00:24:08.036 ************************************ 00:24:08.036 11:34:26 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:24:08.036 11:34:26 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:24:08.036 11:34:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:08.036 11:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:08.036 11:34:26 -- common/autotest_common.sh@10 -- # set +x 00:24:08.295 ************************************ 00:24:08.295 START TEST dd_offset_magic 00:24:08.295 ************************************ 00:24:08.295 11:34:26 -- common/autotest_common.sh@1114 -- # offset_magic 00:24:08.295 11:34:26 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:24:08.295 11:34:26 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:24:08.295 11:34:26 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:24:08.295 11:34:26 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:24:08.295 11:34:26 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:24:08.295 11:34:26 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:24:08.295 11:34:26 -- dd/common.sh@31 -- # xtrace_disable 00:24:08.295 11:34:26 -- common/autotest_common.sh@10 -- # set +x 00:24:08.295 { 00:24:08.295 "subsystems": [ 00:24:08.295 { 00:24:08.295 "subsystem": "bdev", 00:24:08.295 "config": [ 00:24:08.295 { 00:24:08.295 "params": { 00:24:08.295 "block_size": 4096, 00:24:08.295 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:08.295 "name": "aio1" 00:24:08.295 }, 00:24:08.295 "method": "bdev_aio_create" 00:24:08.295 }, 00:24:08.295 { 00:24:08.295 "params": { 00:24:08.295 "trtype": "pcie", 00:24:08.295 "traddr": "0000:00:06.0", 00:24:08.295 "name": "Nvme0" 00:24:08.295 }, 00:24:08.295 "method": "bdev_nvme_attach_controller" 00:24:08.295 }, 00:24:08.295 { 00:24:08.295 "method": "bdev_wait_for_examine" 00:24:08.295 } 00:24:08.295 ] 00:24:08.295 } 00:24:08.295 ] 00:24:08.295 } 00:24:08.295 [2024-11-26 11:34:26.339841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
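offset_magic is a round-trip check: for each offset (16, then 64, in 1 MiB units) it copies 65 units from Nvme0n1 into aio1 at that seek, reads one unit back from the same skip into dd.dump1, and expects the 26-byte magic at the front. The count of 65 is ceil(67108891 / 1048576): 64 full units plus the 27-byte tail. Paraphrased from the logged steps, with conf.json standing in for the generated config:

  for off in 16 64; do
    ./spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=$off --bs=1048576 --json conf.json
    ./spdk_dd --ib=aio1 --of=dd.dump1 --count=1 --skip=$off --bs=1048576 --json conf.json
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == 'This Is Our Magic, find it' ]] || exit 1
  done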
00:24:08.295 [2024-11-26 11:34:26.340028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99078 ] 00:24:08.295 [2024-11-26 11:34:26.506562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.555 [2024-11-26 11:34:26.543162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.123  [2024-11-26T11:34:27.353Z] Copying: 65/65 [MB] (average 138 MBps) 00:24:09.123 00:24:09.383 11:34:27 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:24:09.383 11:34:27 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:24:09.383 11:34:27 -- dd/common.sh@31 -- # xtrace_disable 00:24:09.383 11:34:27 -- common/autotest_common.sh@10 -- # set +x 00:24:09.383 { 00:24:09.383 "subsystems": [ 00:24:09.383 { 00:24:09.383 "subsystem": "bdev", 00:24:09.383 "config": [ 00:24:09.383 { 00:24:09.383 "params": { 00:24:09.383 "block_size": 4096, 00:24:09.383 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:09.383 "name": "aio1" 00:24:09.383 }, 00:24:09.383 "method": "bdev_aio_create" 00:24:09.383 }, 00:24:09.383 { 00:24:09.383 "params": { 00:24:09.383 "trtype": "pcie", 00:24:09.383 "traddr": "0000:00:06.0", 00:24:09.383 "name": "Nvme0" 00:24:09.383 }, 00:24:09.383 "method": "bdev_nvme_attach_controller" 00:24:09.383 }, 00:24:09.383 { 00:24:09.383 "method": "bdev_wait_for_examine" 00:24:09.383 } 00:24:09.383 ] 00:24:09.383 } 00:24:09.383 ] 00:24:09.383 } 00:24:09.383 [2024-11-26 11:34:27.412962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:09.383 [2024-11-26 11:34:27.413111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99099 ] 00:24:09.383 [2024-11-26 11:34:27.578186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.383 [2024-11-26 11:34:27.611808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.642  [2024-11-26T11:34:28.130Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:09.900 00:24:09.900 11:34:27 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:24:09.900 11:34:27 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:24:09.900 11:34:27 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:24:09.900 11:34:27 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:24:09.900 11:34:27 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:24:09.900 11:34:27 -- dd/common.sh@31 -- # xtrace_disable 00:24:09.900 11:34:27 -- common/autotest_common.sh@10 -- # set +x 00:24:09.900 { 00:24:09.900 "subsystems": [ 00:24:09.900 { 00:24:09.900 "subsystem": "bdev", 00:24:09.900 "config": [ 00:24:09.900 { 00:24:09.900 "params": { 00:24:09.900 "block_size": 4096, 00:24:09.900 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:09.900 "name": "aio1" 00:24:09.900 }, 00:24:09.901 "method": "bdev_aio_create" 00:24:09.901 }, 00:24:09.901 { 00:24:09.901 "params": { 00:24:09.901 "trtype": "pcie", 00:24:09.901 "traddr": "0000:00:06.0", 00:24:09.901 "name": "Nvme0" 00:24:09.901 }, 00:24:09.901 "method": "bdev_nvme_attach_controller" 00:24:09.901 }, 00:24:09.901 { 00:24:09.901 "method": "bdev_wait_for_examine" 00:24:09.901 } 00:24:09.901 ] 00:24:09.901 } 00:24:09.901 ] 00:24:09.901 } 00:24:09.901 [2024-11-26 11:34:28.021337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
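The backslash-heavy comparison above is xtrace's rendering of a literal match: inside [[ ]], an unquoted right-hand side of == is a glob pattern, so the harness escapes every character to defeat globbing. The quoted form is equivalent:

  [[ $magic_check == 'This Is Our Magic, find it' ]]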
00:24:09.901 [2024-11-26 11:34:28.021522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99115 ] 00:24:10.160 [2024-11-26 11:34:28.186479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.160 [2024-11-26 11:34:28.222963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.762  [2024-11-26T11:34:28.992Z] Copying: 65/65 [MB] (average 192 MBps) 00:24:10.762 00:24:10.762 11:34:28 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:24:10.762 11:34:28 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:24:10.762 11:34:28 -- dd/common.sh@31 -- # xtrace_disable 00:24:10.762 11:34:28 -- common/autotest_common.sh@10 -- # set +x 00:24:10.762 { 00:24:10.762 "subsystems": [ 00:24:10.762 { 00:24:10.762 "subsystem": "bdev", 00:24:10.762 "config": [ 00:24:10.762 { 00:24:10.762 "params": { 00:24:10.762 "block_size": 4096, 00:24:10.762 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:10.762 "name": "aio1" 00:24:10.762 }, 00:24:10.762 "method": "bdev_aio_create" 00:24:10.762 }, 00:24:10.762 { 00:24:10.762 "params": { 00:24:10.762 "trtype": "pcie", 00:24:10.762 "traddr": "0000:00:06.0", 00:24:10.762 "name": "Nvme0" 00:24:10.762 }, 00:24:10.762 "method": "bdev_nvme_attach_controller" 00:24:10.762 }, 00:24:10.762 { 00:24:10.762 "method": "bdev_wait_for_examine" 00:24:10.762 } 00:24:10.762 ] 00:24:10.762 } 00:24:10.762 ] 00:24:10.762 } 00:24:10.762 [2024-11-26 11:34:28.978426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:10.762 [2024-11-26 11:34:28.978588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99136 ] 00:24:11.021 [2024-11-26 11:34:29.142828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.021 [2024-11-26 11:34:29.173940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.279  [2024-11-26T11:34:29.509Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:24:11.279 00:24:11.537 11:34:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:24:11.537 11:34:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:24:11.537 00:24:11.537 real 0m3.250s 00:24:11.537 user 0m1.338s 00:24:11.537 sys 0m0.822s 00:24:11.537 11:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:11.537 ************************************ 00:24:11.537 END TEST dd_offset_magic 00:24:11.537 ************************************ 00:24:11.537 11:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:11.537 11:34:29 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:24:11.537 11:34:29 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:24:11.537 11:34:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:11.537 11:34:29 -- dd/common.sh@11 -- # local nvme_ref= 00:24:11.537 11:34:29 -- dd/common.sh@12 -- # local size=4194330 00:24:11.537 11:34:29 -- dd/common.sh@14 -- # local bs=1048576 00:24:11.537 11:34:29 -- dd/common.sh@15 -- # local count=5 00:24:11.537 11:34:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:24:11.537 11:34:29 -- dd/common.sh@18 -- # gen_conf 00:24:11.537 11:34:29 -- dd/common.sh@31 -- # xtrace_disable 00:24:11.537 11:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:11.537 { 00:24:11.537 "subsystems": [ 00:24:11.537 { 00:24:11.537 "subsystem": "bdev", 00:24:11.537 "config": [ 00:24:11.537 { 00:24:11.537 "params": { 00:24:11.537 "block_size": 4096, 00:24:11.537 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:11.537 "name": "aio1" 00:24:11.537 }, 00:24:11.537 "method": "bdev_aio_create" 00:24:11.537 }, 00:24:11.537 { 00:24:11.537 "params": { 00:24:11.537 "trtype": "pcie", 00:24:11.537 "traddr": "0000:00:06.0", 00:24:11.537 "name": "Nvme0" 00:24:11.537 }, 00:24:11.537 "method": "bdev_nvme_attach_controller" 00:24:11.537 }, 00:24:11.537 { 00:24:11.537 "method": "bdev_wait_for_examine" 00:24:11.537 } 00:24:11.537 ] 00:24:11.537 } 00:24:11.537 ] 00:24:11.537 } 00:24:11.537 [2024-11-26 11:34:29.629688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
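clear_nvme wipes the region under test by streaming zeroes through spdk_dd: the size of 4194330 bytes is 4 MiB plus a 26-byte remainder (the magic), so with bs=1048576 it rounds up to count=5 units, which is the 5120/5120 [kB] copy reported below. The core command with the harness stripped away (conf.json again stands in for the generated config):

  ./spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json conf.json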
00:24:11.537 [2024-11-26 11:34:29.629912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99165 ] 00:24:11.795 [2024-11-26 11:34:29.794782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.795 [2024-11-26 11:34:29.828412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.795  [2024-11-26T11:34:30.284Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:24:12.054 00:24:12.054 11:34:30 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:24:12.054 11:34:30 -- dd/common.sh@10 -- # local bdev=aio1 00:24:12.054 11:34:30 -- dd/common.sh@11 -- # local nvme_ref= 00:24:12.054 11:34:30 -- dd/common.sh@12 -- # local size=4194330 00:24:12.054 11:34:30 -- dd/common.sh@14 -- # local bs=1048576 00:24:12.054 11:34:30 -- dd/common.sh@15 -- # local count=5 00:24:12.054 11:34:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:24:12.054 11:34:30 -- dd/common.sh@18 -- # gen_conf 00:24:12.054 11:34:30 -- dd/common.sh@31 -- # xtrace_disable 00:24:12.054 11:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.054 { 00:24:12.054 "subsystems": [ 00:24:12.054 { 00:24:12.054 "subsystem": "bdev", 00:24:12.054 "config": [ 00:24:12.054 { 00:24:12.054 "params": { 00:24:12.054 "block_size": 4096, 00:24:12.054 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:12.054 "name": "aio1" 00:24:12.054 }, 00:24:12.054 "method": "bdev_aio_create" 00:24:12.054 }, 00:24:12.054 { 00:24:12.054 "params": { 00:24:12.054 "trtype": "pcie", 00:24:12.054 "traddr": "0000:00:06.0", 00:24:12.054 "name": "Nvme0" 00:24:12.054 }, 00:24:12.054 "method": "bdev_nvme_attach_controller" 00:24:12.054 }, 00:24:12.054 { 00:24:12.054 "method": "bdev_wait_for_examine" 00:24:12.054 } 00:24:12.054 ] 00:24:12.054 } 00:24:12.054 ] 00:24:12.054 } 00:24:12.054 [2024-11-26 11:34:30.205826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
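Note that aio1 is not hardware: bdev_aio_create wraps an ordinary file (test/dd/aio1) in a block-device interface with a declared 4096-byte block size, exactly as the JSON above shows. A standalone sketch of the same bdev (the backing-file size here is illustrative, borrowed from the sparse suite's truncate call further down):

  $ truncate --size 104857600 /home/vagrant/spdk_repo/spdk/test/dd/aio1
  $ cat > aio.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
                "block_size": 4096, "name": "aio1"},
     "method": "bdev_aio_create"}]}]}
  EOF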
00:24:12.054 [2024-11-26 11:34:30.206009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99182 ] 00:24:12.313 [2024-11-26 11:34:30.365706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.313 [2024-11-26 11:34:30.401924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.572  [2024-11-26T11:34:30.802Z] Copying: 5120/5120 [kB] (average 185 MBps) 00:24:12.572 00:24:12.572 11:34:30 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:24:12.832 00:24:12.832 real 0m8.237s 00:24:12.832 user 0m4.422s 00:24:12.833 sys 0m2.115s 00:24:12.833 11:34:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:12.833 ************************************ 00:24:12.833 END TEST spdk_dd_bdev_to_bdev 00:24:12.833 11:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.833 ************************************ 00:24:12.833 11:34:30 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:24:12.833 11:34:30 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:24:12.833 11:34:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:12.833 11:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:12.833 11:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.833 ************************************ 00:24:12.833 START TEST spdk_dd_sparse 00:24:12.833 ************************************ 00:24:12.833 11:34:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:24:12.833 * Looking for test storage... 00:24:12.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:12.833 11:34:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:12.833 11:34:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:12.833 11:34:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:12.833 11:34:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:12.833 11:34:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:12.833 11:34:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:12.833 11:34:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:12.833 11:34:31 -- scripts/common.sh@335 -- # IFS=.-: 00:24:12.833 11:34:31 -- scripts/common.sh@335 -- # read -ra ver1 00:24:12.833 11:34:31 -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.833 11:34:31 -- scripts/common.sh@336 -- # read -ra ver2 00:24:12.833 11:34:31 -- scripts/common.sh@337 -- # local 'op=<' 00:24:12.833 11:34:31 -- scripts/common.sh@339 -- # ver1_l=2 00:24:12.833 11:34:31 -- scripts/common.sh@340 -- # ver2_l=1 00:24:12.833 11:34:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:12.833 11:34:31 -- scripts/common.sh@343 -- # case "$op" in 00:24:12.833 11:34:31 -- scripts/common.sh@344 -- # : 1 00:24:12.833 11:34:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:12.833 11:34:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.833 11:34:31 -- scripts/common.sh@364 -- # decimal 1 00:24:12.833 11:34:31 -- scripts/common.sh@352 -- # local d=1 00:24:12.833 11:34:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.833 11:34:31 -- scripts/common.sh@354 -- # echo 1 00:24:12.833 11:34:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:12.833 11:34:31 -- scripts/common.sh@365 -- # decimal 2 00:24:12.833 11:34:31 -- scripts/common.sh@352 -- # local d=2 00:24:12.833 11:34:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.833 11:34:31 -- scripts/common.sh@354 -- # echo 2 00:24:12.833 11:34:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:12.833 11:34:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:12.833 11:34:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:12.833 11:34:31 -- scripts/common.sh@367 -- # return 0 00:24:12.833 11:34:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.833 11:34:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.833 --rc genhtml_branch_coverage=1 00:24:12.833 --rc genhtml_function_coverage=1 00:24:12.833 --rc genhtml_legend=1 00:24:12.833 --rc geninfo_all_blocks=1 00:24:12.833 --rc geninfo_unexecuted_blocks=1 00:24:12.833 00:24:12.833 ' 00:24:12.833 11:34:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.833 --rc genhtml_branch_coverage=1 00:24:12.833 --rc genhtml_function_coverage=1 00:24:12.833 --rc genhtml_legend=1 00:24:12.833 --rc geninfo_all_blocks=1 00:24:12.833 --rc geninfo_unexecuted_blocks=1 00:24:12.833 00:24:12.833 ' 00:24:12.833 11:34:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.833 --rc genhtml_branch_coverage=1 00:24:12.833 --rc genhtml_function_coverage=1 00:24:12.833 --rc genhtml_legend=1 00:24:12.833 --rc geninfo_all_blocks=1 00:24:12.833 --rc geninfo_unexecuted_blocks=1 00:24:12.833 00:24:12.833 ' 00:24:12.833 11:34:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.833 --rc genhtml_branch_coverage=1 00:24:12.833 --rc genhtml_function_coverage=1 00:24:12.833 --rc genhtml_legend=1 00:24:12.833 --rc geninfo_all_blocks=1 00:24:12.833 --rc geninfo_unexecuted_blocks=1 00:24:12.833 00:24:12.833 ' 00:24:12.833 11:34:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:12.833 11:34:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.833 11:34:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.833 11:34:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.833 11:34:31 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:12.833 11:34:31 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:12.833 11:34:31 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:12.833 11:34:31 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:12.833 11:34:31 -- paths/export.sh@6 -- # export PATH 00:24:12.833 11:34:31 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:12.834 11:34:31 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:24:12.834 11:34:31 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:24:12.834 11:34:31 -- dd/sparse.sh@110 -- # file1=file_zero1 00:24:12.834 11:34:31 -- dd/sparse.sh@111 -- # file2=file_zero2 00:24:12.834 11:34:31 -- dd/sparse.sh@112 -- # file3=file_zero3 00:24:13.093 11:34:31 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:24:13.093 11:34:31 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:24:13.093 11:34:31 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:24:13.093 11:34:31 -- dd/sparse.sh@118 -- # prepare 00:24:13.093 11:34:31 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:24:13.093 11:34:31 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:24:13.093 1+0 records in 00:24:13.093 1+0 records out 00:24:13.093 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00814926 s, 515 MB/s 00:24:13.093 11:34:31 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:24:13.093 1+0 records in 00:24:13.093 1+0 records out 00:24:13.093 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0081325 s, 516 MB/s 00:24:13.093 11:34:31 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:24:13.093 1+0 records in 00:24:13.093 1+0 records out 00:24:13.093 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00881116 s, 476 MB/s 00:24:13.093 11:34:31 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:24:13.093 11:34:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:13.093 11:34:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:13.093 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:24:13.093 ************************************ 00:24:13.093 START TEST dd_sparse_file_to_file 00:24:13.093 ************************************ 00:24:13.093 11:34:31 -- common/autotest_common.sh@1114 -- # file_to_file 00:24:13.093 11:34:31 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:24:13.093 11:34:31 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:24:13.093 11:34:31 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:24:13.093 11:34:31 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:24:13.093 11:34:31 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:24:13.093 11:34:31 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:24:13.093 11:34:31 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:24:13.093 11:34:31 -- dd/sparse.sh@41 -- # gen_conf 00:24:13.093 11:34:31 -- dd/common.sh@31 -- # xtrace_disable 00:24:13.093 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:24:13.093 { 00:24:13.093 
"subsystems": [ 00:24:13.093 { 00:24:13.093 "subsystem": "bdev", 00:24:13.093 "config": [ 00:24:13.093 { 00:24:13.093 "params": { 00:24:13.093 "block_size": 4096, 00:24:13.093 "filename": "dd_sparse_aio_disk", 00:24:13.093 "name": "dd_aio" 00:24:13.093 }, 00:24:13.093 "method": "bdev_aio_create" 00:24:13.093 }, 00:24:13.093 { 00:24:13.093 "params": { 00:24:13.093 "lvs_name": "dd_lvstore", 00:24:13.093 "bdev_name": "dd_aio" 00:24:13.093 }, 00:24:13.093 "method": "bdev_lvol_create_lvstore" 00:24:13.093 }, 00:24:13.093 { 00:24:13.093 "method": "bdev_wait_for_examine" 00:24:13.093 } 00:24:13.093 ] 00:24:13.093 } 00:24:13.093 ] 00:24:13.093 } 00:24:13.093 [2024-11-26 11:34:31.183508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:13.093 [2024-11-26 11:34:31.183691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99258 ] 00:24:13.353 [2024-11-26 11:34:31.347259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.353 [2024-11-26 11:34:31.382722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.353  [2024-11-26T11:34:31.843Z] Copying: 12/36 [MB] (average 1500 MBps) 00:24:13.613 00:24:13.613 11:34:31 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:24:13.613 11:34:31 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:24:13.613 11:34:31 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:24:13.613 11:34:31 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:24:13.613 11:34:31 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:24:13.613 11:34:31 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:24:13.613 11:34:31 -- dd/sparse.sh@52 -- # stat1_b=24576 00:24:13.613 11:34:31 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:24:13.613 11:34:31 -- dd/sparse.sh@53 -- # stat2_b=24576 00:24:13.613 11:34:31 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:24:13.613 00:24:13.613 real 0m0.596s 00:24:13.613 user 0m0.272s 00:24:13.613 sys 0m0.203s 00:24:13.613 11:34:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:13.613 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:24:13.613 ************************************ 00:24:13.613 END TEST dd_sparse_file_to_file 00:24:13.613 ************************************ 00:24:13.613 11:34:31 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:24:13.613 11:34:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:13.613 11:34:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:13.613 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:24:13.613 ************************************ 00:24:13.613 START TEST dd_sparse_file_to_bdev 00:24:13.613 ************************************ 00:24:13.613 11:34:31 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:24:13.613 11:34:31 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:24:13.613 11:34:31 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:24:13.613 11:34:31 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:24:13.613 11:34:31 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:24:13.613 11:34:31 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 
--ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:24:13.613 11:34:31 -- dd/sparse.sh@73 -- # gen_conf 00:24:13.613 11:34:31 -- dd/common.sh@31 -- # xtrace_disable 00:24:13.613 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:24:13.613 { 00:24:13.613 "subsystems": [ 00:24:13.613 { 00:24:13.613 "subsystem": "bdev", 00:24:13.613 "config": [ 00:24:13.613 { 00:24:13.613 "params": { 00:24:13.613 "block_size": 4096, 00:24:13.613 "filename": "dd_sparse_aio_disk", 00:24:13.613 "name": "dd_aio" 00:24:13.613 }, 00:24:13.613 "method": "bdev_aio_create" 00:24:13.613 }, 00:24:13.613 { 00:24:13.613 "params": { 00:24:13.613 "lvs_name": "dd_lvstore", 00:24:13.613 "lvol_name": "dd_lvol", 00:24:13.613 "size": 37748736, 00:24:13.613 "thin_provision": true 00:24:13.613 }, 00:24:13.613 "method": "bdev_lvol_create" 00:24:13.613 }, 00:24:13.613 { 00:24:13.613 "method": "bdev_wait_for_examine" 00:24:13.613 } 00:24:13.613 ] 00:24:13.613 } 00:24:13.613 ] 00:24:13.613 } 00:24:13.613 [2024-11-26 11:34:31.829706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:13.613 [2024-11-26 11:34:31.829870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99293 ] 00:24:13.872 [2024-11-26 11:34:31.994770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.872 [2024-11-26 11:34:32.031214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.872 [2024-11-26 11:34:32.094313] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:14.131  [2024-11-26T11:34:32.361Z] Copying: 12/36 [MB] (average 521 MBps)[2024-11-26 11:34:32.133476] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:14.131 00:24:14.131 00:24:14.131 00:24:14.131 real 0m0.585s 00:24:14.131 user 0m0.283s 00:24:14.131 sys 0m0.187s 00:24:14.131 11:34:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:14.131 11:34:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.131 ************************************ 00:24:14.131 END TEST dd_sparse_file_to_bdev 00:24:14.131 ************************************ 00:24:14.390 11:34:32 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:24:14.390 11:34:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:14.390 11:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:14.390 11:34:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.390 ************************************ 00:24:14.390 START TEST dd_sparse_bdev_to_file 00:24:14.390 ************************************ 00:24:14.390 11:34:32 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:24:14.390 11:34:32 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:24:14.390 11:34:32 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:24:14.390 11:34:32 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:24:14.390 11:34:32 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:24:14.390 11:34:32 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 
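The stat pairs that close each sparse test encode the actual claim: %s is apparent size and %b is allocated blocks (512-byte units on Linux). An apparent 37748736 bytes (36 MiB) against 24576 blocks means 24576 * 512 = 12582912 bytes on disk, i.e. only the three 4 MiB stripes written during prepare, with the holes preserved across the file -> lvol -> file round trip. The check pattern in isolation:

  apparent=$(stat --printf=%s file_zero3)   # 37748736
  blocks=$(stat --printf=%b file_zero3)     # 24576
  (( blocks * 512 == 12582912 ))            # only the written data is allocated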
00:24:14.390 11:34:32 -- dd/sparse.sh@91 -- # gen_conf 00:24:14.390 11:34:32 -- dd/common.sh@31 -- # xtrace_disable 00:24:14.390 11:34:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.390 { 00:24:14.390 "subsystems": [ 00:24:14.390 { 00:24:14.390 "subsystem": "bdev", 00:24:14.390 "config": [ 00:24:14.390 { 00:24:14.390 "params": { 00:24:14.390 "block_size": 4096, 00:24:14.390 "filename": "dd_sparse_aio_disk", 00:24:14.390 "name": "dd_aio" 00:24:14.391 }, 00:24:14.391 "method": "bdev_aio_create" 00:24:14.391 }, 00:24:14.391 { 00:24:14.391 "method": "bdev_wait_for_examine" 00:24:14.391 } 00:24:14.391 ] 00:24:14.391 } 00:24:14.391 ] 00:24:14.391 } 00:24:14.391 [2024-11-26 11:34:32.467864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:14.391 [2024-11-26 11:34:32.468062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99331 ] 00:24:14.650 [2024-11-26 11:34:32.632533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.650 [2024-11-26 11:34:32.667955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.650  [2024-11-26T11:34:33.139Z] Copying: 12/36 [MB] (average 1500 MBps) 00:24:14.909 00:24:14.909 11:34:32 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:24:14.909 11:34:32 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:24:14.910 11:34:32 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:24:14.910 11:34:32 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:24:14.910 11:34:32 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:24:14.910 11:34:32 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:24:14.910 11:34:32 -- dd/sparse.sh@102 -- # stat2_b=24576 00:24:14.910 11:34:32 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:24:14.910 11:34:32 -- dd/sparse.sh@103 -- # stat3_b=24576 00:24:14.910 11:34:32 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:24:14.910 00:24:14.910 real 0m0.578s 00:24:14.910 user 0m0.285s 00:24:14.910 sys 0m0.178s 00:24:14.910 11:34:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:14.910 11:34:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.910 ************************************ 00:24:14.910 END TEST dd_sparse_bdev_to_file 00:24:14.910 ************************************ 00:24:14.910 11:34:33 -- dd/sparse.sh@1 -- # cleanup 00:24:14.910 11:34:33 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:24:14.910 11:34:33 -- dd/sparse.sh@12 -- # rm file_zero1 00:24:14.910 11:34:33 -- dd/sparse.sh@13 -- # rm file_zero2 00:24:14.910 11:34:33 -- dd/sparse.sh@14 -- # rm file_zero3 00:24:14.910 00:24:14.910 real 0m2.171s 00:24:14.910 user 0m1.015s 00:24:14.910 sys 0m0.800s 00:24:14.910 11:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:14.910 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:14.910 ************************************ 00:24:14.910 END TEST spdk_dd_sparse 00:24:14.910 ************************************ 00:24:14.910 11:34:33 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:24:14.910 11:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:14.910 11:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:14.910 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:14.910 ************************************ 00:24:14.910 START TEST 
spdk_dd_negative 00:24:14.910 ************************************ 00:24:14.910 11:34:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:24:15.170 * Looking for test storage... 00:24:15.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:15.170 11:34:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:15.170 11:34:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:15.170 11:34:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:15.170 11:34:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:15.170 11:34:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:15.170 11:34:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:15.170 11:34:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:15.170 11:34:33 -- scripts/common.sh@335 -- # IFS=.-: 00:24:15.170 11:34:33 -- scripts/common.sh@335 -- # read -ra ver1 00:24:15.170 11:34:33 -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.170 11:34:33 -- scripts/common.sh@336 -- # read -ra ver2 00:24:15.170 11:34:33 -- scripts/common.sh@337 -- # local 'op=<' 00:24:15.170 11:34:33 -- scripts/common.sh@339 -- # ver1_l=2 00:24:15.170 11:34:33 -- scripts/common.sh@340 -- # ver2_l=1 00:24:15.170 11:34:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:15.170 11:34:33 -- scripts/common.sh@343 -- # case "$op" in 00:24:15.170 11:34:33 -- scripts/common.sh@344 -- # : 1 00:24:15.170 11:34:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:15.170 11:34:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.170 11:34:33 -- scripts/common.sh@364 -- # decimal 1 00:24:15.170 11:34:33 -- scripts/common.sh@352 -- # local d=1 00:24:15.170 11:34:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.170 11:34:33 -- scripts/common.sh@354 -- # echo 1 00:24:15.170 11:34:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:15.170 11:34:33 -- scripts/common.sh@365 -- # decimal 2 00:24:15.170 11:34:33 -- scripts/common.sh@352 -- # local d=2 00:24:15.170 11:34:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.171 11:34:33 -- scripts/common.sh@354 -- # echo 2 00:24:15.171 11:34:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:15.171 11:34:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:15.171 11:34:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:15.171 11:34:33 -- scripts/common.sh@367 -- # return 0 00:24:15.171 11:34:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.171 11:34:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:15.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.171 --rc genhtml_branch_coverage=1 00:24:15.171 --rc genhtml_function_coverage=1 00:24:15.171 --rc genhtml_legend=1 00:24:15.171 --rc geninfo_all_blocks=1 00:24:15.171 --rc geninfo_unexecuted_blocks=1 00:24:15.171 00:24:15.171 ' 00:24:15.171 11:34:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:15.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.171 --rc genhtml_branch_coverage=1 00:24:15.171 --rc genhtml_function_coverage=1 00:24:15.171 --rc genhtml_legend=1 00:24:15.171 --rc geninfo_all_blocks=1 00:24:15.171 --rc geninfo_unexecuted_blocks=1 00:24:15.171 00:24:15.171 ' 00:24:15.171 11:34:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:15.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.171 --rc 
genhtml_branch_coverage=1 00:24:15.171 --rc genhtml_function_coverage=1 00:24:15.171 --rc genhtml_legend=1 00:24:15.171 --rc geninfo_all_blocks=1 00:24:15.171 --rc geninfo_unexecuted_blocks=1 00:24:15.171 00:24:15.171 ' 00:24:15.171 11:34:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:15.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.171 --rc genhtml_branch_coverage=1 00:24:15.171 --rc genhtml_function_coverage=1 00:24:15.171 --rc genhtml_legend=1 00:24:15.171 --rc geninfo_all_blocks=1 00:24:15.171 --rc geninfo_unexecuted_blocks=1 00:24:15.171 00:24:15.171 ' 00:24:15.171 11:34:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:15.171 11:34:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.171 11:34:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.171 11:34:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.171 11:34:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:15.171 11:34:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:15.171 11:34:33 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:15.171 11:34:33 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:15.171 11:34:33 -- paths/export.sh@6 -- # export PATH 00:24:15.171 11:34:33 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:15.171 11:34:33 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:15.171 11:34:33 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:15.171 11:34:33 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:15.171 11:34:33 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:15.171 11:34:33 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:24:15.171 11:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.171 11:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.171 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.171 ************************************ 00:24:15.171 START TEST dd_invalid_arguments 00:24:15.171 ************************************ 00:24:15.171 11:34:33 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:24:15.171 11:34:33 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:24:15.171 11:34:33 -- common/autotest_common.sh@650 -- # local es=0 00:24:15.171 11:34:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:24:15.171 11:34:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.171 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.171 11:34:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.171 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.171 11:34:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.171 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.171 11:34:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
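dd_invalid_arguments exercises the harness's NOT wrapper: valid_exec_arg first confirms the target is an executable, then the command is run with the expectation that it fails. Conceptually the helper behaves something like this sketch (the real implementation in common/autotest_common.sh also vets the exit-status range):

  NOT() {
    "$@" && return 1   # command unexpectedly succeeded -> test failure
    return 0           # command failed, as the negative test requires
  }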
00:24:15.171 11:34:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:15.171 11:34:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:24:15.171 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:24:15.171 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:24:15.171 options: 00:24:15.171 -c, --config JSON config file (default none) 00:24:15.171 --json JSON config file (default none) 00:24:15.171 --json-ignore-init-errors 00:24:15.171 don't exit on invalid config entry 00:24:15.171 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:24:15.171 -g, --single-file-segments 00:24:15.171 force creating just one hugetlbfs file 00:24:15.171 -h, --help show this usage 00:24:15.171 -i, --shm-id shared memory ID (optional) 00:24:15.171 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:24:15.171 --lcores lcore to CPU mapping list. The list is in the format: 00:24:15.171 [<,lcores[@CPUs]>...] 00:24:15.171 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:24:15.171 Within the group, '-' is used for range separator, 00:24:15.171 ',' is used for single number separator. 00:24:15.171 '( )' can be omitted for single element group, 00:24:15.171 '@' can be omitted if cpus and lcores have the same value 00:24:15.171 -n, --mem-channels channel number of memory channels used for DPDK 00:24:15.171 -p, --main-core main (primary) core for DPDK 00:24:15.171 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:24:15.171 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:24:15.171 --disable-cpumask-locks Disable CPU core lock files. 00:24:15.171 --silence-noticelog disable notice level logging to stderr 00:24:15.171 --msg-mempool-size global message memory pool size in count (default: 262143) 00:24:15.171 -u, --no-pci disable PCI access 00:24:15.171 --wait-for-rpc wait for RPCs to initialize subsystems 00:24:15.171 --max-delay maximum reactor delay (in microseconds) 00:24:15.171 -B, --pci-blocked pci addr to block (can be used more than once) 00:24:15.171 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:24:15.171 -R, --huge-unlink unlink huge files after initialization 00:24:15.171 -v, --version print SPDK version 00:24:15.171 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:24:15.171 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:24:15.171 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:24:15.171 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:24:15.171 Tracepoints vary in size and can use more than one trace entry. 
00:24:15.171 --rpcs-allowed comma-separated list of permitted RPCs 00:24:15.171 --env-context Opaque context for use of the env implementation 00:24:15.171 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:24:15.171 --no-huge run without using hugepages 00:24:15.171 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:24:15.172 -e, --tpoint-group [:] 00:24:15.172 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:24:15.172 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:24:15.172 Groups and masks can be combined (e.g. thread,bdev:0x1). [2024-11-26 11:34:33.369923] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:24:15.432 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:24:15.432 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:24:15.432 [--------- DD Options ---------] 00:24:15.432 --if Input file. Must specify either --if or --ib. 00:24:15.432 --ib Input bdev. Must specify either --if or --ib. 00:24:15.432 --of Output file. Must specify either --of or --ob. 00:24:15.432 --ob Output bdev. Must specify either --of or --ob. 00:24:15.432 --iflag Input file flags. 00:24:15.432 --oflag Output file flags. 00:24:15.432 --bs I/O unit size (default: 4096) 00:24:15.432 --qd Queue depth (default: 2) 00:24:15.432 --count I/O unit count. The number of I/O units to copy. (default: all) 00:24:15.432 --skip Skip this many I/O units at start of input. (default: 0) 00:24:15.432 --seek Skip this many I/O units at start of output. (default: 0) 00:24:15.432 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:24:15.432 --sparse Enable hole skipping in input target 00:24:15.432 Available iflag and oflag values: 00:24:15.432 append - append mode 00:24:15.432 direct - use direct I/O for data 00:24:15.432 directory - fail unless a directory 00:24:15.432 dsync - use synchronized I/O for data 00:24:15.432 noatime - do not update access time 00:24:15.432 noctty - do not assign controlling terminal from file 00:24:15.432 nofollow - do not follow symlinks 00:24:15.432 nonblock - use non-blocking I/O 00:24:15.432 sync - use synchronized I/O for data and metadata 00:24:15.432 11:34:33 -- common/autotest_common.sh@653 -- # es=2 00:24:15.432 11:34:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.432 11:34:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.432 11:34:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.432 00:24:15.432 real 0m0.112s 00:24:15.432 user 0m0.063s 00:24:15.432 sys 0m0.049s 00:24:15.432 11:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.432 ************************************ 00:24:15.432 END TEST dd_invalid_arguments 00:24:15.432 ************************************ 00:24:15.432 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 11:34:33 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:24:15.432 11:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.432 11:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.432 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 ************************************ 00:24:15.432 START TEST dd_double_input 00:24:15.432 ************************************ 00:24:15.432 11:34:33 -- common/autotest_common.sh@1114 -- # double_input 00:24:15.432 11:34:33 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:24:15.432 11:34:33 -- common/autotest_common.sh@650 -- # local es=0 00:24:15.432 11:34:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:24:15.432 11:34:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.432 11:34:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.432 11:34:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:15.432 11:34:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:24:15.432 [2024-11-26 11:34:33.529316] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
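double_input feeds spdk_dd both --if and --ib, and the spdk_dd.c:1467 check rejects the combination before any I/O is attempted. The expected shape of such a failure, with the error text as logged above (exit status 22 is errno EINVAL):

  $ ./spdk_dd --if=dd.dump0 --ib=Nvme0n1 --ob=aio1 --json conf.json; echo "es=$?"
  *ERROR*: You may specify either --if or --ib, but not both.
  es=22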
00:24:15.432 11:34:33 -- common/autotest_common.sh@653 -- # es=22 00:24:15.432 11:34:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.432 11:34:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.432 11:34:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.432 00:24:15.432 real 0m0.109s 00:24:15.432 user 0m0.063s 00:24:15.432 sys 0m0.047s 00:24:15.432 11:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.432 ************************************ 00:24:15.432 END TEST dd_double_input 00:24:15.432 ************************************ 00:24:15.432 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 11:34:33 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:24:15.432 11:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.432 11:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.432 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 ************************************ 00:24:15.432 START TEST dd_double_output 00:24:15.432 ************************************ 00:24:15.432 11:34:33 -- common/autotest_common.sh@1114 -- # double_output 00:24:15.432 11:34:33 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:24:15.432 11:34:33 -- common/autotest_common.sh@650 -- # local es=0 00:24:15.432 11:34:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:24:15.432 11:34:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.432 11:34:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.432 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.432 11:34:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.433 11:34:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:15.433 11:34:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:24:15.693 [2024-11-26 11:34:33.691586] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
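Each of these negative cases then asserts the same contract on the captured status: es must be the expected errno (22, EINVAL, throughout this block) and must not indicate a signal death. A condensed sketch of the convention behind the (( es > 128 )) checks, reading 128+N as "killed by signal N":

  es=$?
  (( es > 128 )) && echo "killed by signal $(( es - 128 ))" && exit 1
  (( es == 22 ))   # rejected with EINVAL, as expected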
00:24:15.693 11:34:33 -- common/autotest_common.sh@653 -- # es=22 00:24:15.693 11:34:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.693 11:34:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.693 11:34:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.693 00:24:15.693 real 0m0.109s 00:24:15.693 user 0m0.066s 00:24:15.693 sys 0m0.044s 00:24:15.693 11:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.693 ************************************ 00:24:15.693 END TEST dd_double_output 00:24:15.693 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.693 ************************************ 00:24:15.693 11:34:33 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:24:15.693 11:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.693 11:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.693 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.693 ************************************ 00:24:15.693 START TEST dd_no_input 00:24:15.693 ************************************ 00:24:15.693 11:34:33 -- common/autotest_common.sh@1114 -- # no_input 00:24:15.693 11:34:33 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:24:15.693 11:34:33 -- common/autotest_common.sh@650 -- # local es=0 00:24:15.693 11:34:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:24:15.693 11:34:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.693 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.693 11:34:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.693 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.693 11:34:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.693 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.693 11:34:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.693 11:34:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:15.693 11:34:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:24:15.693 [2024-11-26 11:34:33.854963] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:24:15.693 11:34:33 -- common/autotest_common.sh@653 -- # es=22 00:24:15.693 11:34:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.693 11:34:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.693 11:34:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.693 00:24:15.693 real 0m0.108s 00:24:15.693 user 0m0.065s 00:24:15.693 sys 0m0.044s 00:24:15.693 11:34:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.693 ************************************ 00:24:15.693 END TEST dd_no_input 00:24:15.693 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.693 ************************************ 00:24:15.953 11:34:33 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:24:15.953 11:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.953 11:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.953 11:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:15.953 ************************************ 
00:24:15.953 START TEST dd_no_output 00:24:15.953 ************************************ 00:24:15.953 11:34:33 -- common/autotest_common.sh@1114 -- # no_output 00:24:15.953 11:34:33 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:15.953 11:34:33 -- common/autotest_common.sh@650 -- # local es=0 00:24:15.953 11:34:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:15.953 11:34:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.953 11:34:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.953 11:34:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.953 11:34:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:15.953 11:34:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:15.953 [2024-11-26 11:34:34.015465] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:24:15.953 11:34:34 -- common/autotest_common.sh@653 -- # es=22 00:24:15.953 11:34:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.953 11:34:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.953 11:34:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.953 00:24:15.953 real 0m0.109s 00:24:15.953 user 0m0.059s 00:24:15.953 sys 0m0.051s 00:24:15.953 11:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.953 11:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:15.953 ************************************ 00:24:15.953 END TEST dd_no_output 00:24:15.953 ************************************ 00:24:15.953 11:34:34 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:24:15.953 11:34:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.953 11:34:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.953 11:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:15.953 ************************************ 00:24:15.953 START TEST dd_wrong_blocksize 00:24:15.953 ************************************ 00:24:15.953 11:34:34 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:24:15.953 11:34:34 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:24:15.953 11:34:34 -- common/autotest_common.sh@650 -- # local es=0 00:24:15.953 11:34:34 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:24:15.953 11:34:34 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:34 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:24:15.953 11:34:34 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.953 11:34:34 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.953 11:34:34 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.953 11:34:34 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:15.953 11:34:34 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:24:15.953 [2024-11-26 11:34:34.184394] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:24:16.213 11:34:34 -- common/autotest_common.sh@653 -- # es=22 00:24:16.213 11:34:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.213 11:34:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.213 11:34:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.213 00:24:16.213 real 0m0.116s 00:24:16.213 user 0m0.066s 00:24:16.213 sys 0m0.050s 00:24:16.213 11:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.213 11:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:16.213 ************************************ 00:24:16.213 END TEST dd_wrong_blocksize 00:24:16.213 ************************************ 00:24:16.213 11:34:34 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:24:16.213 11:34:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:16.213 11:34:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.213 11:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:16.213 ************************************ 00:24:16.213 START TEST dd_smaller_blocksize 00:24:16.213 ************************************ 00:24:16.213 11:34:34 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:24:16.213 11:34:34 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:24:16.213 11:34:34 -- common/autotest_common.sh@650 -- # local es=0 00:24:16.213 11:34:34 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:24:16.213 11:34:34 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.213 11:34:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.213 11:34:34 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.213 11:34:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.213 11:34:34 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.213 11:34:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.213 11:34:34 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.213 11:34:34 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:24:16.213 11:34:34 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:24:16.213 [2024-11-26 11:34:34.359268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:16.213 [2024-11-26 11:34:34.359469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99562 ] 00:24:16.472 [2024-11-26 11:34:34.527649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.472 [2024-11-26 11:34:34.577436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.734 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:24:17.008 [2024-11-26 11:34:34.978158] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:24:17.008 [2024-11-26 11:34:34.978227] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:17.008 [2024-11-26 11:34:35.048747] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:17.008 11:34:35 -- common/autotest_common.sh@653 -- # es=244 00:24:17.008 11:34:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.008 ************************************ 00:24:17.008 END TEST dd_smaller_blocksize 00:24:17.008 ************************************ 00:24:17.008 11:34:35 -- common/autotest_common.sh@662 -- # es=116 00:24:17.008 11:34:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:17.008 11:34:35 -- common/autotest_common.sh@670 -- # es=1 00:24:17.008 11:34:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.008 00:24:17.008 real 0m0.844s 00:24:17.008 user 0m0.296s 00:24:17.008 sys 0m0.447s 00:24:17.008 11:34:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:17.008 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.008 11:34:35 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:24:17.008 11:34:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:17.008 11:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:17.008 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.008 ************************************ 00:24:17.008 START TEST dd_invalid_count 00:24:17.008 ************************************ 00:24:17.008 11:34:35 -- common/autotest_common.sh@1114 -- # invalid_count 00:24:17.008 11:34:35 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:24:17.008 11:34:35 -- common/autotest_common.sh@650 -- # local es=0 00:24:17.008 11:34:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:24:17.008 11:34:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.008 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.008 11:34:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.008 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.008 11:34:35 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.008 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.008 11:34:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.008 11:34:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:17.009 11:34:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:24:17.293 [2024-11-26 11:34:35.243616] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:24:17.293 11:34:35 -- common/autotest_common.sh@653 -- # es=22 00:24:17.293 11:34:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.293 11:34:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.293 11:34:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.293 00:24:17.293 real 0m0.105s 00:24:17.293 user 0m0.057s 00:24:17.293 sys 0m0.048s 00:24:17.293 11:34:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:17.293 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 ************************************ 00:24:17.293 END TEST dd_invalid_count 00:24:17.293 ************************************ 00:24:17.293 11:34:35 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:24:17.293 11:34:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:17.293 11:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:17.293 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 ************************************ 00:24:17.293 START TEST dd_invalid_oflag 00:24:17.293 ************************************ 00:24:17.293 11:34:35 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:24:17.293 11:34:35 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:24:17.293 11:34:35 -- common/autotest_common.sh@650 -- # local es=0 00:24:17.293 11:34:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:24:17.293 11:34:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.293 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.293 11:34:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.293 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.293 11:34:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.293 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.293 11:34:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.293 11:34:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:17.293 11:34:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:24:17.293 [2024-11-26 11:34:35.402495] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:24:17.293 11:34:35 -- common/autotest_common.sh@653 -- # es=22 00:24:17.293 11:34:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.293 11:34:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.293 
11:34:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.293 00:24:17.293 real 0m0.109s 00:24:17.293 user 0m0.066s 00:24:17.293 sys 0m0.043s 00:24:17.293 11:34:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:17.293 ************************************ 00:24:17.293 END TEST dd_invalid_oflag 00:24:17.293 ************************************ 00:24:17.293 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 11:34:35 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:24:17.293 11:34:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:17.293 11:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:17.294 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.294 ************************************ 00:24:17.294 START TEST dd_invalid_iflag 00:24:17.294 ************************************ 00:24:17.294 11:34:35 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:24:17.294 11:34:35 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:24:17.294 11:34:35 -- common/autotest_common.sh@650 -- # local es=0 00:24:17.294 11:34:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:24:17.294 11:34:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.294 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.294 11:34:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.294 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.294 11:34:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.294 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.294 11:34:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.294 11:34:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:17.294 11:34:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:24:17.563 [2024-11-26 11:34:35.566673] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:24:17.563 11:34:35 -- common/autotest_common.sh@653 -- # es=22 00:24:17.563 11:34:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.563 11:34:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.563 11:34:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.563 00:24:17.563 real 0m0.113s 00:24:17.563 user 0m0.060s 00:24:17.563 sys 0m0.053s 00:24:17.563 11:34:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:17.563 ************************************ 00:24:17.563 END TEST dd_invalid_iflag 00:24:17.563 ************************************ 00:24:17.563 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.563 11:34:35 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:24:17.563 11:34:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:17.563 11:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:17.563 11:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.563 ************************************ 00:24:17.563 START TEST dd_unknown_flag 00:24:17.563 ************************************ 00:24:17.563 11:34:35 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:24:17.563 11:34:35 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:24:17.563 11:34:35 -- common/autotest_common.sh@650 -- # local es=0 00:24:17.563 11:34:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:24:17.563 11:34:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.563 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.563 11:34:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.563 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.563 11:34:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.563 11:34:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.563 11:34:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:17.563 11:34:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:17.563 11:34:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:24:17.563 [2024-11-26 11:34:35.735834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:17.563 [2024-11-26 11:34:35.736068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99658 ] 00:24:17.822 [2024-11-26 11:34:35.900030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.822 [2024-11-26 11:34:35.933081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.822 [2024-11-26 11:34:35.979817] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:24:17.822 [2024-11-26 11:34:35.979951] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:24:17.822 [2024-11-26 11:34:35.979978] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:24:17.822 [2024-11-26 11:34:35.979997] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:17.822 [2024-11-26 11:34:36.047313] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:18.081 11:34:36 -- common/autotest_common.sh@653 -- # es=236 00:24:18.081 11:34:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.081 11:34:36 -- common/autotest_common.sh@662 -- # es=108 00:24:18.081 11:34:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:18.081 11:34:36 -- common/autotest_common.sh@670 -- # es=1 00:24:18.081 11:34:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.081 00:24:18.081 real 0m0.480s 00:24:18.081 user 0m0.223s 00:24:18.081 sys 0m0.156s 00:24:18.081 ************************************ 00:24:18.081 END TEST dd_unknown_flag 00:24:18.081 ************************************ 00:24:18.081 11:34:36 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:24:18.081 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.081 11:34:36 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:24:18.081 11:34:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:18.081 11:34:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:18.081 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.081 ************************************ 00:24:18.081 START TEST dd_invalid_json 00:24:18.081 ************************************ 00:24:18.081 11:34:36 -- common/autotest_common.sh@1114 -- # invalid_json 00:24:18.081 11:34:36 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:24:18.081 11:34:36 -- common/autotest_common.sh@650 -- # local es=0 00:24:18.081 11:34:36 -- dd/negative_dd.sh@95 -- # : 00:24:18.081 11:34:36 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:24:18.081 11:34:36 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.081 11:34:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.081 11:34:36 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.081 11:34:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.081 11:34:36 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.081 11:34:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.081 11:34:36 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.081 11:34:36 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:18.081 11:34:36 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:24:18.081 [2024-11-26 11:34:36.263940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
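Each NOT invocation above is first vetted by the valid_exec_arg chain, whose traced steps classify the argument with type -t, resolve it with type -P when it names a file, and require the result to be executable. A condensed sketch of that check (simplified; the real helper carries more branches):

    # Condensed sketch of the valid_exec_arg chain traced before each run.
    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            builtin | function) ;;                           # shell-side commands pass
            file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;  # must resolve to an executable
            *) return 1 ;;                                   # unknown: reject
        esac
    }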
00:24:18.081 [2024-11-26 11:34:36.264125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99692 ] 00:24:18.341 [2024-11-26 11:34:36.427982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.341 [2024-11-26 11:34:36.464379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.341 [2024-11-26 11:34:36.464582] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:24:18.341 [2024-11-26 11:34:36.464622] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:18.341 [2024-11-26 11:34:36.464687] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:18.341 11:34:36 -- common/autotest_common.sh@653 -- # es=234 00:24:18.341 11:34:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.341 11:34:36 -- common/autotest_common.sh@662 -- # es=106 00:24:18.341 11:34:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:18.341 11:34:36 -- common/autotest_common.sh@670 -- # es=1 00:24:18.341 11:34:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.341 00:24:18.341 real 0m0.361s 00:24:18.341 user 0m0.157s 00:24:18.341 sys 0m0.105s 00:24:18.341 ************************************ 00:24:18.341 END TEST dd_invalid_json 00:24:18.341 ************************************ 00:24:18.341 11:34:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:18.341 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.600 00:24:18.600 real 0m3.496s 00:24:18.600 user 0m1.538s 00:24:18.600 sys 0m1.638s 00:24:18.600 11:34:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:18.600 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.600 ************************************ 00:24:18.600 END TEST spdk_dd_negative 00:24:18.600 ************************************ 00:24:18.600 00:24:18.600 real 0m56.937s 00:24:18.600 user 0m30.561s 00:24:18.600 sys 0m15.879s 00:24:18.600 11:34:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:18.600 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.600 ************************************ 00:24:18.600 END TEST spdk_dd 00:24:18.600 ************************************ 00:24:18.600 11:34:36 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:24:18.600 11:34:36 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:24:18.600 11:34:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:18.600 11:34:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:18.600 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.600 ************************************ 00:24:18.600 START TEST blockdev_nvme 00:24:18.600 ************************************ 00:24:18.600 11:34:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:24:18.600 * Looking for test storage... 
00:24:18.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:18.600 11:34:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:18.601 11:34:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:18.601 11:34:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:18.860 11:34:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:18.860 11:34:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:18.860 11:34:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:18.860 11:34:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:18.860 11:34:36 -- scripts/common.sh@335 -- # IFS=.-: 00:24:18.860 11:34:36 -- scripts/common.sh@335 -- # read -ra ver1 00:24:18.860 11:34:36 -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.860 11:34:36 -- scripts/common.sh@336 -- # read -ra ver2 00:24:18.860 11:34:36 -- scripts/common.sh@337 -- # local 'op=<' 00:24:18.860 11:34:36 -- scripts/common.sh@339 -- # ver1_l=2 00:24:18.860 11:34:36 -- scripts/common.sh@340 -- # ver2_l=1 00:24:18.860 11:34:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:18.860 11:34:36 -- scripts/common.sh@343 -- # case "$op" in 00:24:18.860 11:34:36 -- scripts/common.sh@344 -- # : 1 00:24:18.860 11:34:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:18.860 11:34:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.860 11:34:36 -- scripts/common.sh@364 -- # decimal 1 00:24:18.860 11:34:36 -- scripts/common.sh@352 -- # local d=1 00:24:18.860 11:34:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.860 11:34:36 -- scripts/common.sh@354 -- # echo 1 00:24:18.860 11:34:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:18.860 11:34:36 -- scripts/common.sh@365 -- # decimal 2 00:24:18.860 11:34:36 -- scripts/common.sh@352 -- # local d=2 00:24:18.860 11:34:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.860 11:34:36 -- scripts/common.sh@354 -- # echo 2 00:24:18.860 11:34:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:18.860 11:34:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:18.860 11:34:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:18.860 11:34:36 -- scripts/common.sh@367 -- # return 0 00:24:18.860 11:34:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.860 11:34:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.860 --rc genhtml_branch_coverage=1 00:24:18.860 --rc genhtml_function_coverage=1 00:24:18.860 --rc genhtml_legend=1 00:24:18.860 --rc geninfo_all_blocks=1 00:24:18.860 --rc geninfo_unexecuted_blocks=1 00:24:18.860 00:24:18.860 ' 00:24:18.860 11:34:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.860 --rc genhtml_branch_coverage=1 00:24:18.860 --rc genhtml_function_coverage=1 00:24:18.860 --rc genhtml_legend=1 00:24:18.860 --rc geninfo_all_blocks=1 00:24:18.860 --rc geninfo_unexecuted_blocks=1 00:24:18.860 00:24:18.860 ' 00:24:18.860 11:34:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.860 --rc genhtml_branch_coverage=1 00:24:18.860 --rc genhtml_function_coverage=1 00:24:18.860 --rc genhtml_legend=1 00:24:18.860 --rc geninfo_all_blocks=1 00:24:18.860 --rc geninfo_unexecuted_blocks=1 00:24:18.860 00:24:18.860 ' 00:24:18.860 11:34:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.860 --rc genhtml_branch_coverage=1 00:24:18.860 --rc genhtml_function_coverage=1 00:24:18.860 --rc genhtml_legend=1 00:24:18.860 --rc geninfo_all_blocks=1 00:24:18.860 --rc geninfo_unexecuted_blocks=1 00:24:18.860 00:24:18.860 ' 00:24:18.860 11:34:36 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:18.860 11:34:36 -- bdev/nbd_common.sh@6 -- # set -e 00:24:18.860 11:34:36 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:18.860 11:34:36 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:18.860 11:34:36 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:18.860 11:34:36 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:18.860 11:34:36 -- bdev/blockdev.sh@18 -- # : 00:24:18.860 11:34:36 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:24:18.860 11:34:36 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:24:18.860 11:34:36 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:24:18.860 11:34:36 -- bdev/blockdev.sh@672 -- # uname -s 00:24:18.860 11:34:36 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:24:18.860 11:34:36 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:24:18.860 11:34:36 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:24:18.860 11:34:36 -- bdev/blockdev.sh@681 -- # crypto_device= 00:24:18.860 11:34:36 -- bdev/blockdev.sh@682 -- # dek= 00:24:18.860 11:34:36 -- bdev/blockdev.sh@683 -- # env_ctx= 00:24:18.860 11:34:36 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:24:18.860 11:34:36 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:24:18.860 11:34:36 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:24:18.861 11:34:36 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:24:18.861 11:34:36 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:24:18.861 11:34:36 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=99775 00:24:18.861 11:34:36 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:18.861 11:34:36 -- bdev/blockdev.sh@47 -- # waitforlisten 99775 00:24:18.861 11:34:36 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:18.861 11:34:36 -- common/autotest_common.sh@829 -- # '[' -z 99775 ']' 00:24:18.861 11:34:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.861 11:34:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.861 11:34:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.861 11:34:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.861 11:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.861 [2024-11-26 11:34:36.945892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
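The lcov version gate traced above (lt 1.15 2, via scripts/common.sh's cmp_versions) splits each version string on '.', '-' and ':' and compares the fields numerically. A condensed sketch of that comparison, assuming purely numeric fields (the real helper additionally guards each field with its decimal check):

    # Condensed sketch of the cmp_versions '<' logic traced above.
    version_lt() {
        local IFS=.-: v          # split on '.', '-' and ':' as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions: not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"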
00:24:18.861 [2024-11-26 11:34:36.946050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99775 ] 00:24:18.861 [2024-11-26 11:34:37.099372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.120 [2024-11-26 11:34:37.133687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:19.120 [2024-11-26 11:34:37.133945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.688 11:34:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.688 11:34:37 -- common/autotest_common.sh@862 -- # return 0 00:24:19.688 11:34:37 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:24:19.688 11:34:37 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:24:19.688 11:34:37 -- bdev/blockdev.sh@79 -- # local json 00:24:19.688 11:34:37 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:24:19.688 11:34:37 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:19.947 11:34:37 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:24:19.947 11:34:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.947 11:34:37 -- common/autotest_common.sh@10 -- # set +x 00:24:19.947 11:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.947 11:34:38 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:24:19.947 11:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.947 11:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:19.947 11:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.948 11:34:38 -- bdev/blockdev.sh@738 -- # cat 00:24:19.948 11:34:38 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:24:19.948 11:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.948 11:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:19.948 11:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.948 11:34:38 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:24:19.948 11:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.948 11:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:19.948 11:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.948 11:34:38 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:19.948 11:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.948 11:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:19.948 11:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.948 11:34:38 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:24:19.948 11:34:38 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:24:19.948 11:34:38 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:24:19.948 11:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.948 11:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:19.948 11:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.948 11:34:38 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:24:19.948 11:34:38 -- bdev/blockdev.sh@747 -- # jq -r .name 00:24:19.948 11:34:38 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "8d6d0e98-11c4-47e7-86e6-d3a4097fc662"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8d6d0e98-11c4-47e7-86e6-d3a4097fc662",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:24:19.948 11:34:38 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:24:19.948 11:34:38 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:24:19.948 11:34:38 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:24:19.948 11:34:38 -- bdev/blockdev.sh@752 -- # killprocess 99775 00:24:19.948 11:34:38 -- common/autotest_common.sh@936 -- # '[' -z 99775 ']' 00:24:19.948 11:34:38 -- common/autotest_common.sh@940 -- # kill -0 99775 00:24:19.948 11:34:38 -- common/autotest_common.sh@941 -- # uname 00:24:19.948 11:34:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:19.948 11:34:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99775 00:24:19.948 11:34:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:19.948 killing process with pid 99775 00:24:19.948 11:34:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:19.948 11:34:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99775' 00:24:19.948 11:34:38 -- common/autotest_common.sh@955 -- # kill 99775 00:24:19.948 11:34:38 -- common/autotest_common.sh@960 -- # wait 99775 00:24:20.517 11:34:38 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:20.517 11:34:38 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:24:20.517 11:34:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:24:20.517 11:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:20.517 11:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:20.517 ************************************ 00:24:20.517 START TEST bdev_hello_world 00:24:20.517 ************************************ 00:24:20.517 11:34:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:24:20.517 [2024-11-26 11:34:38.531038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:20.517 [2024-11-26 11:34:38.531201] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99838 ] 00:24:20.517 [2024-11-26 11:34:38.695348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.517 [2024-11-26 11:34:38.729387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.776 [2024-11-26 11:34:38.895866] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:20.776 [2024-11-26 11:34:38.895966] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:24:20.776 [2024-11-26 11:34:38.896012] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:20.776 [2024-11-26 11:34:38.898311] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:20.776 [2024-11-26 11:34:38.898853] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:20.776 [2024-11-26 11:34:38.898904] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:20.776 [2024-11-26 11:34:38.899135] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:24:20.776 00:24:20.776 [2024-11-26 11:34:38.899176] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:21.035 00:24:21.035 real 0m0.612s 00:24:21.035 user 0m0.349s 00:24:21.035 sys 0m0.163s 00:24:21.035 11:34:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:21.035 ************************************ 00:24:21.035 END TEST bdev_hello_world 00:24:21.035 ************************************ 00:24:21.035 11:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.035 11:34:39 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:24:21.035 11:34:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:21.035 11:34:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:21.035 11:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.035 ************************************ 00:24:21.035 START TEST bdev_bounds 00:24:21.035 ************************************ 00:24:21.035 11:34:39 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:24:21.035 11:34:39 -- bdev/blockdev.sh@288 -- # bdevio_pid=99865 00:24:21.035 11:34:39 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:21.035 11:34:39 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:21.035 Process bdevio pid: 99865 00:24:21.035 11:34:39 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 99865' 00:24:21.035 11:34:39 -- bdev/blockdev.sh@291 -- # waitforlisten 99865 00:24:21.035 11:34:39 -- common/autotest_common.sh@829 -- # '[' -z 99865 ']' 00:24:21.035 11:34:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.035 11:34:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.035 11:34:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
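bdev_bounds launches the bdevio app against the same bdev.json and then drives the whole suite over its RPC socket with tests.py, as the records that follow show. A condensed sketch of that pair of commands; backgrounding with & is an assumption standing in for the harness's waitforlisten handshake:

    # Condensed sketch of the bdevio launch performed by the records below.
    SPDK=/home/vagrant/spdk_repo/spdk

    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
    bdevio_pid=$!

    # once the app is listening, run the full suite against it
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests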
00:24:21.035 11:34:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.035 11:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.035 [2024-11-26 11:34:39.194233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:21.035 [2024-11-26 11:34:39.194456] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99865 ] 00:24:21.294 [2024-11-26 11:34:39.359014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:21.294 [2024-11-26 11:34:39.390578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.294 [2024-11-26 11:34:39.390651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.294 [2024-11-26 11:34:39.390731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.862 11:34:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.862 11:34:40 -- common/autotest_common.sh@862 -- # return 0 00:24:21.862 11:34:40 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:22.122 I/O targets: 00:24:22.122 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:24:22.122 00:24:22.122 00:24:22.122 CUnit - A unit testing framework for C - Version 2.1-3 00:24:22.122 http://cunit.sourceforge.net/ 00:24:22.122 00:24:22.122 00:24:22.122 Suite: bdevio tests on: Nvme0n1 00:24:22.122 Test: blockdev write read block ...passed 00:24:22.122 Test: blockdev write zeroes read block ...passed 00:24:22.122 Test: blockdev write zeroes read no split ...passed 00:24:22.122 Test: blockdev write zeroes read split ...passed 00:24:22.122 Test: blockdev write zeroes read split partial ...passed 00:24:22.122 Test: blockdev reset ...[2024-11-26 11:34:40.166194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:24:22.123 [2024-11-26 11:34:40.168385] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:22.123 passed 00:24:22.123 Test: blockdev write read 8 blocks ...passed 00:24:22.123 Test: blockdev write read size > 128k ...passed 00:24:22.123 Test: blockdev write read invalid size ...passed 00:24:22.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:22.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:22.123 Test: blockdev write read max offset ...passed 00:24:22.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:22.123 Test: blockdev writev readv 8 blocks ...passed 00:24:22.123 Test: blockdev writev readv 30 x 1block ...passed 00:24:22.123 Test: blockdev writev readv block ...passed 00:24:22.123 Test: blockdev writev readv size > 128k ...passed 00:24:22.123 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:22.123 Test: blockdev comparev and writev ...[2024-11-26 11:34:40.176165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x32f60d000 len:0x1000 00:24:22.123 [2024-11-26 11:34:40.176218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:22.123 passed 00:24:22.123 Test: blockdev nvme passthru rw ...passed 00:24:22.123 Test: blockdev nvme passthru vendor specific ...[2024-11-26 11:34:40.177383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:22.123 [2024-11-26 11:34:40.177432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:22.123 passed 00:24:22.123 Test: blockdev nvme admin passthru ...passed 00:24:22.123 Test: blockdev copy ...passed 00:24:22.123 00:24:22.123 Run Summary: Type Total Ran Passed Failed Inactive 00:24:22.123 suites 1 1 n/a 0 0 00:24:22.123 tests 23 23 23 0 0 00:24:22.123 asserts 152 152 152 0 n/a 00:24:22.123 00:24:22.123 Elapsed time = 0.079 seconds 00:24:22.123 0 00:24:22.123 11:34:40 -- bdev/blockdev.sh@293 -- # killprocess 99865 00:24:22.123 11:34:40 -- common/autotest_common.sh@936 -- # '[' -z 99865 ']' 00:24:22.123 11:34:40 -- common/autotest_common.sh@940 -- # kill -0 99865 00:24:22.123 11:34:40 -- common/autotest_common.sh@941 -- # uname 00:24:22.123 11:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.123 11:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99865 00:24:22.123 11:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:22.123 11:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:22.123 killing process with pid 99865 00:24:22.123 11:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99865' 00:24:22.123 11:34:40 -- common/autotest_common.sh@955 -- # kill 99865 00:24:22.123 11:34:40 -- common/autotest_common.sh@960 -- # wait 99865 00:24:22.383 11:34:40 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:24:22.383 00:24:22.383 real 0m1.253s 00:24:22.383 user 0m3.191s 00:24:22.383 sys 0m0.288s 00:24:22.383 11:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:22.383 ************************************ 00:24:22.383 END TEST bdev_bounds 00:24:22.383 11:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:22.383 ************************************ 00:24:22.383 11:34:40 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:24:22.383 
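The killprocess teardown traced above (pid 99865 here, pid 99775 earlier) double-checks that the pid still names an SPDK reactor before signalling it. A simplified sketch of that flow; the real helper also handles the sudo case:

    # Simplified sketch of the killprocess teardown seen in the traces.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        # confirm the pid still belongs to an SPDK reactor before killing it
        [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }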
11:34:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:22.383 11:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:22.383 11:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:22.383 ************************************ 00:24:22.383 START TEST bdev_nbd 00:24:22.383 ************************************ 00:24:22.383 11:34:40 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:24:22.383 11:34:40 -- bdev/blockdev.sh@298 -- # uname -s 00:24:22.383 11:34:40 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:24:22.383 11:34:40 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:22.383 11:34:40 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:22.383 11:34:40 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:24:22.383 11:34:40 -- bdev/blockdev.sh@302 -- # local bdev_all 00:24:22.383 11:34:40 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:24:22.383 11:34:40 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:24:22.383 11:34:40 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:22.383 11:34:40 -- bdev/blockdev.sh@309 -- # local nbd_all 00:24:22.383 11:34:40 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:24:22.383 11:34:40 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:24:22.383 11:34:40 -- bdev/blockdev.sh@312 -- # local nbd_list 00:24:22.383 11:34:40 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:24:22.383 11:34:40 -- bdev/blockdev.sh@313 -- # local bdev_list 00:24:22.383 11:34:40 -- bdev/blockdev.sh@316 -- # nbd_pid=99908 00:24:22.383 11:34:40 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:22.383 11:34:40 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:22.383 11:34:40 -- bdev/blockdev.sh@318 -- # waitforlisten 99908 /var/tmp/spdk-nbd.sock 00:24:22.383 11:34:40 -- common/autotest_common.sh@829 -- # '[' -z 99908 ']' 00:24:22.383 11:34:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:22.383 11:34:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:22.384 11:34:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:22.384 11:34:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.384 11:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:22.384 [2024-11-26 11:34:40.493222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
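bdev_nbd talks to a bdev_svc instance over /var/tmp/spdk-nbd.sock; the records that follow attach Nvme0n1 to a kernel NBD node, wait for it to show up in /proc/partitions, push one 4 KiB direct read through it, and detach. A condensed sketch of that flow, with the device name and dd parameters taken from the trace (the temporary output path is an assumption):

    # Condensed sketch of the NBD start/verify/stop flow performed below.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    nbd_dev=$($rpc_py nbd_start_disk Nvme0n1)        # -> /dev/nbd0 in this run
    grep -q -w "${nbd_dev#/dev/}" /proc/partitions   # wait until the kernel sees it

    # one 4 KiB direct read through the NBD node, as in the trace
    dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    $rpc_py nbd_stop_disk "$nbd_dev"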
00:24:22.384 [2024-11-26 11:34:40.493395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.644 [2024-11-26 11:34:40.640923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.644 [2024-11-26 11:34:40.674691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.213 11:34:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.213 11:34:41 -- common/autotest_common.sh@862 -- # return 0 00:24:23.213 11:34:41 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@24 -- # local i 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:23.213 11:34:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:24:23.473 11:34:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:23.473 11:34:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:23.473 11:34:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:23.473 11:34:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:23.473 11:34:41 -- common/autotest_common.sh@867 -- # local i 00:24:23.473 11:34:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:23.473 11:34:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:23.473 11:34:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:23.473 11:34:41 -- common/autotest_common.sh@871 -- # break 00:24:23.473 11:34:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:23.473 11:34:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:23.473 11:34:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:23.473 1+0 records in 00:24:23.473 1+0 records out 00:24:23.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546276 s, 7.5 MB/s 00:24:23.473 11:34:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:23.473 11:34:41 -- common/autotest_common.sh@884 -- # size=4096 00:24:23.473 11:34:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:23.733 11:34:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:23.733 11:34:41 -- common/autotest_common.sh@887 -- # return 0 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:23.733 11:34:41 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:23.733 { 00:24:23.733 "nbd_device": "/dev/nbd0", 00:24:23.733 "bdev_name": "Nvme0n1" 00:24:23.733 } 00:24:23.733 ]' 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:23.733 { 00:24:23.733 "nbd_device": "/dev/nbd0", 00:24:23.733 "bdev_name": "Nvme0n1" 00:24:23.733 } 00:24:23.733 ]' 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@51 -- # local i 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.733 11:34:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@41 -- # break 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.992 11:34:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@65 -- # true 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@65 -- # count=0 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@122 -- # count=0 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@127 -- # return 0 00:24:24.252 11:34:42 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@12 -- # local i 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:24.252 11:34:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:24:24.252 /dev/nbd0 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:24.512 11:34:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:24.512 11:34:42 -- common/autotest_common.sh@867 -- # local i 00:24:24.512 11:34:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:24.512 11:34:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:24.512 11:34:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:24.512 11:34:42 -- common/autotest_common.sh@871 -- # break 00:24:24.512 11:34:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:24.512 11:34:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:24.512 11:34:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:24.512 1+0 records in 00:24:24.512 1+0 records out 00:24:24.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510739 s, 8.0 MB/s 00:24:24.512 11:34:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.512 11:34:42 -- common/autotest_common.sh@884 -- # size=4096 00:24:24.512 11:34:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.512 11:34:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:24.512 11:34:42 -- common/autotest_common.sh@887 -- # return 0 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:24.512 11:34:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:24.772 { 00:24:24.772 "nbd_device": "/dev/nbd0", 00:24:24.772 "bdev_name": "Nvme0n1" 00:24:24.772 } 00:24:24.772 ]' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:24.772 { 00:24:24.772 "nbd_device": "/dev/nbd0", 00:24:24.772 "bdev_name": "Nvme0n1" 00:24:24.772 } 00:24:24.772 ]' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@65 -- # count=1 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@66 -- # echo 1 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@95 -- # count=1 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:24:24.772 11:34:42 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:24.772 256+0 records in 00:24:24.772 256+0 records out 00:24:24.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00779423 s, 135 MB/s 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:24.772 256+0 records in 00:24:24.772 256+0 records out 00:24:24.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639558 s, 16.4 MB/s 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@51 -- # local i 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:24.772 11:34:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@41 -- # break 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@45 -- # return 0 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:25.032 11:34:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:25.292 
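The nbd_get_count helper traced above is a small jq pipeline: ask the RPC server for its exported disks, pull the nbd_device fields out of the JSON reply, and count them. A standalone sketch of that same check, using the socket and rpc.py path shown in the trace:

  # Count how many /dev/nbd* devices SPDK currently exports (sketch of nbd_get_count).
  nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  # grep -c still prints 0 when nothing matches but exits non-zero, hence the || true guard
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  echo "$count"   # 0 at this point in the log: the disk was stopped just above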
11:34:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@65 -- # true 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@65 -- # count=0 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@104 -- # count=0 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@109 -- # return 0 00:24:25.292 11:34:43 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:24:25.292 11:34:43 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:25.551 malloc_lvol_verify 00:24:25.551 11:34:43 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:25.810 3213bf97-0e7a-4979-9a22-0bcee50362c0 00:24:25.811 11:34:43 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:26.069 fdb12336-a122-4911-9cd4-d5487ca25494 00:24:26.069 11:34:44 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:26.069 /dev/nbd0 00:24:26.069 11:34:44 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:24:26.070 mke2fs 1.47.0 (5-Feb-2023) 00:24:26.070 00:24:26.070 Filesystem too small for a journal 00:24:26.070 Discarding device blocks: 0/1024 done 00:24:26.070 Creating filesystem with 1024 4k blocks and 1024 inodes 00:24:26.070 00:24:26.070 Allocating group tables: 0/1 done 00:24:26.070 Writing inode tables: 0/1 done 00:24:26.070 Writing superblocks and filesystem accounting information: 0/1 done 00:24:26.070 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@51 -- # local i 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:26.070 11:34:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@41 -- # break 00:24:26.329 11:34:44 -- 
bdev/nbd_common.sh@45 -- # return 0 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:24:26.329 11:34:44 -- bdev/nbd_common.sh@147 -- # return 0 00:24:26.329 11:34:44 -- bdev/blockdev.sh@324 -- # killprocess 99908 00:24:26.329 11:34:44 -- common/autotest_common.sh@936 -- # '[' -z 99908 ']' 00:24:26.329 11:34:44 -- common/autotest_common.sh@940 -- # kill -0 99908 00:24:26.329 11:34:44 -- common/autotest_common.sh@941 -- # uname 00:24:26.329 11:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:26.329 11:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99908 00:24:26.329 11:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:26.329 11:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:26.329 killing process with pid 99908 00:24:26.329 11:34:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99908' 00:24:26.329 11:34:44 -- common/autotest_common.sh@955 -- # kill 99908 00:24:26.329 11:34:44 -- common/autotest_common.sh@960 -- # wait 99908 00:24:26.588 11:34:44 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:24:26.588 00:24:26.588 real 0m4.301s 00:24:26.588 user 0m6.616s 00:24:26.588 sys 0m0.992s 00:24:26.588 ************************************ 00:24:26.588 END TEST bdev_nbd 00:24:26.588 ************************************ 00:24:26.588 11:34:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:26.588 11:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:26.589 11:34:44 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:24:26.589 skipping fio tests on NVMe due to multi-ns failures. 00:24:26.589 11:34:44 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:24:26.589 11:34:44 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:24:26.589 11:34:44 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:26.589 11:34:44 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:26.589 11:34:44 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:24:26.589 11:34:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:26.589 11:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:26.589 ************************************ 00:24:26.589 START TEST bdev_verify 00:24:26.589 ************************************ 00:24:26.589 11:34:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:26.848 [2024-11-26 11:34:44.838945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:26.848 [2024-11-26 11:34:44.839084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100078 ] 00:24:26.848 [2024-11-26 11:34:44.986801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:26.848 [2024-11-26 11:34:45.018390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.848 [2024-11-26 11:34:45.018475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.107 Running I/O for 5 seconds... 
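For reference, the verify job whose 5-second run output follows is the bdevperf invocation traced above; the flag annotations below come from bdevperf's usage text rather than from this log:

  # -q 128: queue depth; -o 4096: I/O size in bytes (4 KiB); -w verify: write, then
  # read back and compare; -t 5: run time in seconds; -m 0x3: cores 0 and 1, which is
  # why two reactor notices appear above. -C is passed through exactly as in the trace.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''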
00:24:32.379 00:24:32.379 Latency(us) 00:24:32.379 [2024-11-26T11:34:50.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.379 [2024-11-26T11:34:50.609Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:32.379 Verification LBA range: start 0x0 length 0xa0000 00:24:32.379 Nvme0n1 : 5.01 17744.40 69.31 0.00 0.00 7182.08 366.78 13881.72 00:24:32.379 [2024-11-26T11:34:50.609Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:32.379 Verification LBA range: start 0xa0000 length 0xa0000 00:24:32.379 Nvme0n1 : 5.01 17705.68 69.16 0.00 0.00 7196.22 588.33 15371.17 00:24:32.379 [2024-11-26T11:34:50.609Z] =================================================================================================================== 00:24:32.379 [2024-11-26T11:34:50.609Z] Total : 35450.09 138.48 0.00 0.00 7189.14 366.78 15371.17 00:24:40.500 00:24:40.500 real 0m13.253s 00:24:40.500 user 0m25.873s 00:24:40.500 sys 0m0.212s 00:24:40.500 11:34:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:40.500 11:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.500 ************************************ 00:24:40.500 END TEST bdev_verify 00:24:40.500 ************************************ 00:24:40.500 11:34:58 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:40.500 11:34:58 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:24:40.500 11:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:40.500 11:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.500 ************************************ 00:24:40.500 START TEST bdev_verify_big_io 00:24:40.500 ************************************ 00:24:40.500 11:34:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:40.500 [2024-11-26 11:34:58.153144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:40.500 [2024-11-26 11:34:58.153326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100197 ] 00:24:40.500 [2024-11-26 11:34:58.316764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:40.500 [2024-11-26 11:34:58.353400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.500 [2024-11-26 11:34:58.353487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.500 Running I/O for 5 seconds... 
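The MiB/s column in the bdev_verify table above is simply IOPS times the 4 KiB I/O size; a quick sanity check of the first job:

  awk 'BEGIN { printf "%.2f\n", 17744.40 * 4096 / (1024 * 1024) }'   # -> 69.31, as reported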
00:24:45.774 00:24:45.774 Latency(us) 00:24:45.774 [2024-11-26T11:35:04.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.774 [2024-11-26T11:35:04.004Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:45.774 Verification LBA range: start 0x0 length 0xa000 00:24:45.774 Nvme0n1 : 5.05 1900.89 118.81 0.00 0.00 66437.14 543.65 92465.34 00:24:45.774 [2024-11-26T11:35:04.004Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:45.774 Verification LBA range: start 0xa000 length 0xa000 00:24:45.774 Nvme0n1 : 5.05 1832.18 114.51 0.00 0.00 68892.93 569.72 101044.60 00:24:45.774 [2024-11-26T11:35:04.004Z] =================================================================================================================== 00:24:45.774 [2024-11-26T11:35:04.004Z] Total : 3733.08 233.32 0.00 0.00 67642.35 543.65 101044.60 00:24:46.034 00:24:46.034 real 0m6.017s 00:24:46.034 user 0m11.397s 00:24:46.034 sys 0m0.166s 00:24:46.034 11:35:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:46.034 11:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.034 ************************************ 00:24:46.034 END TEST bdev_verify_big_io 00:24:46.034 ************************************ 00:24:46.034 11:35:04 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:46.034 11:35:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:24:46.034 11:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:46.034 11:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.034 ************************************ 00:24:46.034 START TEST bdev_write_zeroes 00:24:46.034 ************************************ 00:24:46.034 11:35:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:46.034 [2024-11-26 11:35:04.207298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:46.034 [2024-11-26 11:35:04.207489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100280 ] 00:24:46.293 [2024-11-26 11:35:04.356681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.293 [2024-11-26 11:35:04.390208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.550 Running I/O for 1 seconds... 
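In the bdev_verify_big_io table above, the Total row's average latency works out to the IOPS-weighted mean of the two per-core jobs, up to rounding of the displayed figures:

  awk 'BEGIN { printf "%.2f\n", (1900.89 * 66437.14 + 1832.18 * 68892.93) / (1900.89 + 1832.18) }'
  # -> ~67642 us, in line with the reported 67642.35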
00:24:47.484 00:24:47.484 Latency(us) 00:24:47.484 [2024-11-26T11:35:05.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.484 [2024-11-26T11:35:05.714Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:47.484 Nvme0n1 : 1.00 62048.16 242.38 0.00 0.00 2057.67 897.40 12690.15 00:24:47.484 [2024-11-26T11:35:05.714Z] =================================================================================================================== 00:24:47.484 [2024-11-26T11:35:05.714Z] Total : 62048.16 242.38 0.00 0.00 2057.67 897.40 12690.15 00:24:47.743 00:24:47.743 real 0m1.618s 00:24:47.743 user 0m1.363s 00:24:47.743 sys 0m0.154s 00:24:47.743 11:35:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:47.743 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:24:47.743 ************************************ 00:24:47.743 END TEST bdev_write_zeroes 00:24:47.743 ************************************ 00:24:47.743 11:35:05 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:47.743 11:35:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:24:47.743 11:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:47.743 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:24:47.743 ************************************ 00:24:47.743 START TEST bdev_json_nonenclosed 00:24:47.743 ************************************ 00:24:47.743 11:35:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:47.743 [2024-11-26 11:35:05.901436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:47.743 [2024-11-26 11:35:05.901602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100312 ] 00:24:48.002 [2024-11-26 11:35:06.066658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.002 [2024-11-26 11:35:06.112914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.002 [2024-11-26 11:35:06.113204] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:24:48.002 [2024-11-26 11:35:06.113254] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:48.262 00:24:48.262 real 0m0.398s 00:24:48.262 user 0m0.197s 00:24:48.262 sys 0m0.100s 00:24:48.262 11:35:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:48.262 11:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.262 ************************************ 00:24:48.262 END TEST bdev_json_nonenclosed 00:24:48.262 ************************************ 00:24:48.262 11:35:06 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:48.262 11:35:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:24:48.262 11:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:48.262 11:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.262 ************************************ 00:24:48.262 START TEST bdev_json_nonarray 00:24:48.262 ************************************ 00:24:48.262 11:35:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:48.262 [2024-11-26 11:35:06.350888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:48.262 [2024-11-26 11:35:06.351052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100338 ] 00:24:48.521 [2024-11-26 11:35:06.518957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.521 [2024-11-26 11:35:06.566338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.521 [2024-11-26 11:35:06.566611] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
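Both JSON tests above are negative tests: bdevperf is handed a deliberately malformed config, and the startup error seen in the log is the expected outcome (nonenclosed.json trips "not enclosed in {}", nonarray.json trips "'subsystems' should be an array"). The malformed files themselves are not shown here, but the accepted shape can be inferred from the load_subsystem_config call later in this log; a minimal well-formed config along those lines (wellformed.json is a hypothetical name) would be:

cat <<'EOF' > wellformed.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" }
        }
      ]
    }
  ]
}
EOF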
00:24:48.521 [2024-11-26 11:35:06.566663] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:48.521 00:24:48.521 real 0m0.403s 00:24:48.521 user 0m0.186s 00:24:48.521 sys 0m0.116s 00:24:48.521 11:35:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:48.521 11:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.521 ************************************ 00:24:48.521 END TEST bdev_json_nonarray 00:24:48.521 ************************************ 00:24:48.521 11:35:06 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:24:48.521 11:35:06 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:24:48.521 11:35:06 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:24:48.521 11:35:06 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:24:48.521 11:35:06 -- bdev/blockdev.sh@809 -- # cleanup 00:24:48.521 11:35:06 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:48.521 11:35:06 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:48.521 11:35:06 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:24:48.521 11:35:06 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:24:48.521 11:35:06 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:24:48.521 11:35:06 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:24:48.521 00:24:48.521 real 0m30.050s 00:24:48.521 user 0m51.202s 00:24:48.521 sys 0m2.895s 00:24:48.521 11:35:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:48.521 11:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.521 ************************************ 00:24:48.521 END TEST blockdev_nvme 00:24:48.521 ************************************ 00:24:48.781 11:35:06 -- spdk/autotest.sh@206 -- # uname -s 00:24:48.781 11:35:06 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:24:48.781 11:35:06 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:24:48.781 11:35:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:48.781 11:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:48.781 11:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.781 ************************************ 00:24:48.781 START TEST blockdev_nvme_gpt 00:24:48.781 ************************************ 00:24:48.781 11:35:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:24:48.781 * Looking for test storage... 
00:24:48.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:48.781 11:35:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:48.781 11:35:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:48.781 11:35:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:48.781 11:35:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:48.781 11:35:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:48.781 11:35:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:48.781 11:35:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:48.781 11:35:06 -- scripts/common.sh@335 -- # IFS=.-: 00:24:48.781 11:35:06 -- scripts/common.sh@335 -- # read -ra ver1 00:24:48.781 11:35:06 -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.781 11:35:06 -- scripts/common.sh@336 -- # read -ra ver2 00:24:48.781 11:35:06 -- scripts/common.sh@337 -- # local 'op=<' 00:24:48.781 11:35:06 -- scripts/common.sh@339 -- # ver1_l=2 00:24:48.781 11:35:06 -- scripts/common.sh@340 -- # ver2_l=1 00:24:48.781 11:35:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:48.781 11:35:06 -- scripts/common.sh@343 -- # case "$op" in 00:24:48.781 11:35:06 -- scripts/common.sh@344 -- # : 1 00:24:48.781 11:35:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:48.781 11:35:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.781 11:35:06 -- scripts/common.sh@364 -- # decimal 1 00:24:48.781 11:35:06 -- scripts/common.sh@352 -- # local d=1 00:24:48.781 11:35:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.781 11:35:06 -- scripts/common.sh@354 -- # echo 1 00:24:48.781 11:35:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:48.781 11:35:06 -- scripts/common.sh@365 -- # decimal 2 00:24:48.781 11:35:06 -- scripts/common.sh@352 -- # local d=2 00:24:48.781 11:35:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.781 11:35:06 -- scripts/common.sh@354 -- # echo 2 00:24:48.781 11:35:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:48.781 11:35:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:48.781 11:35:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:48.781 11:35:06 -- scripts/common.sh@367 -- # return 0 00:24:48.781 11:35:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.781 11:35:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:48.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.781 --rc genhtml_branch_coverage=1 00:24:48.781 --rc genhtml_function_coverage=1 00:24:48.781 --rc genhtml_legend=1 00:24:48.781 --rc geninfo_all_blocks=1 00:24:48.781 --rc geninfo_unexecuted_blocks=1 00:24:48.781 00:24:48.781 ' 00:24:48.781 11:35:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:48.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.781 --rc genhtml_branch_coverage=1 00:24:48.781 --rc genhtml_function_coverage=1 00:24:48.781 --rc genhtml_legend=1 00:24:48.781 --rc geninfo_all_blocks=1 00:24:48.781 --rc geninfo_unexecuted_blocks=1 00:24:48.781 00:24:48.781 ' 00:24:48.781 11:35:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:48.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.781 --rc genhtml_branch_coverage=1 00:24:48.781 --rc genhtml_function_coverage=1 00:24:48.781 --rc genhtml_legend=1 00:24:48.781 --rc geninfo_all_blocks=1 00:24:48.781 --rc geninfo_unexecuted_blocks=1 00:24:48.781 00:24:48.781 ' 00:24:48.781 11:35:06 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:48.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.781 --rc genhtml_branch_coverage=1 00:24:48.781 --rc genhtml_function_coverage=1 00:24:48.781 --rc genhtml_legend=1 00:24:48.781 --rc geninfo_all_blocks=1 00:24:48.781 --rc geninfo_unexecuted_blocks=1 00:24:48.781 00:24:48.781 ' 00:24:48.781 11:35:06 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:48.781 11:35:06 -- bdev/nbd_common.sh@6 -- # set -e 00:24:48.781 11:35:06 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:48.781 11:35:06 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:48.781 11:35:06 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:48.781 11:35:06 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:48.781 11:35:06 -- bdev/blockdev.sh@18 -- # : 00:24:48.781 11:35:06 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:24:48.781 11:35:06 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:24:48.781 11:35:06 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:24:48.781 11:35:06 -- bdev/blockdev.sh@672 -- # uname -s 00:24:48.781 11:35:06 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:24:48.781 11:35:06 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:24:48.781 11:35:06 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:24:48.781 11:35:06 -- bdev/blockdev.sh@681 -- # crypto_device= 00:24:48.781 11:35:06 -- bdev/blockdev.sh@682 -- # dek= 00:24:48.781 11:35:06 -- bdev/blockdev.sh@683 -- # env_ctx= 00:24:48.781 11:35:06 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:24:48.781 11:35:06 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:24:48.781 11:35:06 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:24:48.781 11:35:06 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:24:48.781 11:35:06 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:24:48.781 11:35:06 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=100415 00:24:48.781 11:35:06 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:48.781 11:35:06 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:48.781 11:35:06 -- bdev/blockdev.sh@47 -- # waitforlisten 100415 00:24:48.781 11:35:06 -- common/autotest_common.sh@829 -- # '[' -z 100415 ']' 00:24:48.781 11:35:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.782 11:35:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.782 11:35:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.782 11:35:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.782 11:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.041 [2024-11-26 11:35:07.045162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
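waitforlisten, used above to gate on spdk_tgt coming up, polls until the target's RPC socket answers. A rough sketch of that loop with the pid and socket from the trace (the real helper in autotest_common.sh carries more bookkeeping):

  spdk_tgt_pid=100415
  for ((i = 0; i < 100; i++)); do
      # fail fast if the target died during startup
      kill -0 "$spdk_tgt_pid" 2> /dev/null || exit 1
      # probe the RPC server with a 1-second timeout; success means it is listening
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done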
00:24:49.041 [2024-11-26 11:35:07.045350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100415 ] 00:24:49.041 [2024-11-26 11:35:07.214624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.041 [2024-11-26 11:35:07.259254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:49.041 [2024-11-26 11:35:07.259541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.978 11:35:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.978 11:35:07 -- common/autotest_common.sh@862 -- # return 0 00:24:49.978 11:35:07 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:24:49.978 11:35:07 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:24:49.978 11:35:07 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:50.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:24:50.237 Waiting for block devices as requested 00:24:50.237 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:24:50.237 11:35:08 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:24:50.237 11:35:08 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:24:50.237 11:35:08 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:24:50.237 11:35:08 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:24:50.237 11:35:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:24:50.237 11:35:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:24:50.237 11:35:08 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:24:50.237 11:35:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:50.237 11:35:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:24:50.237 11:35:08 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:24:50.237 11:35:08 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:24:50.237 11:35:08 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:24:50.237 11:35:08 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:24:50.237 11:35:08 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:24:50.237 11:35:08 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:24:50.237 11:35:08 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:24:50.237 11:35:08 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:24:50.237 BYT; 00:24:50.237 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:24:50.237 11:35:08 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:24:50.237 BYT; 00:24:50.237 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:24:50.237 11:35:08 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:24:50.237 11:35:08 -- bdev/blockdev.sh@114 -- # break 00:24:50.237 11:35:08 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:24:50.237 11:35:08 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:24:50.237 11:35:08 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:24:50.237 11:35:08 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt 
mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:24:50.496 11:35:08 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:24:50.496 11:35:08 -- scripts/common.sh@410 -- # local spdk_guid 00:24:50.496 11:35:08 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:24:50.496 11:35:08 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:50.496 11:35:08 -- scripts/common.sh@415 -- # IFS='()' 00:24:50.496 11:35:08 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:24:50.496 11:35:08 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:50.496 11:35:08 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:24:50.496 11:35:08 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:24:50.496 11:35:08 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:24:50.496 11:35:08 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:24:50.496 11:35:08 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:24:50.496 11:35:08 -- scripts/common.sh@422 -- # local spdk_guid 00:24:50.496 11:35:08 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:24:50.496 11:35:08 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:50.496 11:35:08 -- scripts/common.sh@427 -- # IFS='()' 00:24:50.496 11:35:08 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:24:50.496 11:35:08 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:50.496 11:35:08 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:24:50.496 11:35:08 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:24:50.496 11:35:08 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:24:50.496 11:35:08 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:24:50.496 11:35:08 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:24:51.432 The operation has completed successfully. 00:24:51.432 11:35:09 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:24:52.810 The operation has completed successfully. 
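Condensed from the trace above, the whole GPT setup is three commands. The two type GUIDs are grepped out of module/bdev/gpt/gpt.h (SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD), so the gpt vbdev module will later claim both partitions and expose them as Nvme0n1p1 and Nvme0n1p2:

  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # partition 1 gets the current SPDK partition type GUID, partition 2 the legacy one
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1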
00:24:52.810 11:35:10 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:52.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:24:53.070 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:24:53.329 11:35:11 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:24:53.329 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.329 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 [] 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:24:53.589 11:35:11 -- bdev/blockdev.sh@79 -- # local json 00:24:53.589 11:35:11 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:24:53.589 11:35:11 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:53.589 11:35:11 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:24:53.589 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.589 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:24:53.589 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.589 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@738 -- # cat 00:24:53.589 11:35:11 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:24:53.589 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.589 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:24:53.589 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.589 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:53.589 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.589 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:24:53.589 11:35:11 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:24:53.589 11:35:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.589 11:35:11 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:24:53.589 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 11:35:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.589 11:35:11 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:24:53.589 11:35:11 -- bdev/blockdev.sh@747 -- # jq -r .name 00:24:53.589 11:35:11 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 
0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:24:53.589 11:35:11 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:24:53.589 11:35:11 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:24:53.589 11:35:11 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:24:53.589 11:35:11 -- bdev/blockdev.sh@752 -- # killprocess 100415 00:24:53.589 11:35:11 -- common/autotest_common.sh@936 -- # '[' -z 100415 ']' 00:24:53.589 11:35:11 -- common/autotest_common.sh@940 -- # kill -0 100415 00:24:53.589 11:35:11 -- common/autotest_common.sh@941 -- # uname 00:24:53.590 11:35:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:53.590 11:35:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100415 00:24:53.849 11:35:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:53.849 11:35:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:53.849 killing process with pid 100415 00:24:53.849 11:35:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100415' 00:24:53.849 11:35:11 -- common/autotest_common.sh@955 -- # kill 100415 00:24:53.849 11:35:11 -- common/autotest_common.sh@960 -- # wait 100415 00:24:54.109 11:35:12 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:54.109 11:35:12 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:24:54.109 11:35:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:24:54.109 11:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:54.109 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.109 ************************************ 00:24:54.109 START TEST bdev_hello_world 00:24:54.109 ************************************ 00:24:54.109 11:35:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:24:54.109 [2024-11-26 11:35:12.181691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:54.109 [2024-11-26 11:35:12.181889] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100790 ] 00:24:54.109 [2024-11-26 11:35:12.346480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.369 [2024-11-26 11:35:12.385717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.369 [2024-11-26 11:35:12.555220] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:54.369 [2024-11-26 11:35:12.555287] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:24:54.369 [2024-11-26 11:35:12.555325] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:54.369 [2024-11-26 11:35:12.557555] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:54.369 [2024-11-26 11:35:12.558219] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:54.369 [2024-11-26 11:35:12.558289] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:54.369 [2024-11-26 11:35:12.558570] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:24:54.369 00:24:54.369 [2024-11-26 11:35:12.558610] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:54.629 00:24:54.629 real 0m0.618s 00:24:54.629 user 0m0.346s 00:24:54.629 sys 0m0.172s 00:24:54.629 11:35:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:54.629 ************************************ 00:24:54.629 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.629 END TEST bdev_hello_world 00:24:54.629 ************************************ 00:24:54.629 11:35:12 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:24:54.629 11:35:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:54.629 11:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:54.629 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.629 ************************************ 00:24:54.629 START TEST bdev_bounds 00:24:54.629 ************************************ 00:24:54.629 11:35:12 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:24:54.629 11:35:12 -- bdev/blockdev.sh@288 -- # bdevio_pid=100820 00:24:54.629 11:35:12 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:54.629 Process bdevio pid: 100820 00:24:54.629 11:35:12 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 100820' 00:24:54.629 11:35:12 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:54.629 11:35:12 -- bdev/blockdev.sh@291 -- # waitforlisten 100820 00:24:54.629 11:35:12 -- common/autotest_common.sh@829 -- # '[' -z 100820 ']' 00:24:54.629 11:35:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.629 11:35:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.629 11:35:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
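bdev_bounds drives bdevio in two stages, both visible around this point in the trace: the binary is launched with -w so it brings up its app framework and then idles, and tests.py perform_tests (below) triggers the actual suite over the RPC socket. Schematically, with paths from the log and the backgrounding plumbing as an assumption:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  bdevio_pid=$!
  # once bdevio's RPC server is up (the waitforlisten above), kick off the suite
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests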
00:24:54.629 11:35:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.629 11:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.629 [2024-11-26 11:35:12.850647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:54.629 [2024-11-26 11:35:12.850853] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100820 ] 00:24:54.889 [2024-11-26 11:35:13.015283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:54.889 [2024-11-26 11:35:13.050978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.889 [2024-11-26 11:35:13.051010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.889 [2024-11-26 11:35:13.051061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.827 11:35:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.827 11:35:13 -- common/autotest_common.sh@862 -- # return 0 00:24:55.827 11:35:13 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:55.827 I/O targets: 00:24:55.827 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:24:55.827 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:24:55.827 00:24:55.827 00:24:55.827 CUnit - A unit testing framework for C - Version 2.1-3 00:24:55.827 http://cunit.sourceforge.net/ 00:24:55.827 00:24:55.827 00:24:55.827 Suite: bdevio tests on: Nvme0n1p2 00:24:55.827 Test: blockdev write read block ...passed 00:24:55.827 Test: blockdev write zeroes read block ...passed 00:24:55.827 Test: blockdev write zeroes read no split ...passed 00:24:55.827 Test: blockdev write zeroes read split ...passed 00:24:55.827 Test: blockdev write zeroes read split partial ...passed 00:24:55.827 Test: blockdev reset ...[2024-11-26 11:35:13.886562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:24:55.827 passed 00:24:55.827 Test: blockdev write read 8 blocks ...[2024-11-26 11:35:13.888774] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:55.827 passed 00:24:55.827 Test: blockdev write read size > 128k ...passed 00:24:55.827 Test: blockdev write read invalid size ...passed 00:24:55.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:55.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:55.827 Test: blockdev write read max offset ...passed 00:24:55.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:55.827 Test: blockdev writev readv 8 blocks ...passed 00:24:55.827 Test: blockdev writev readv 30 x 1block ...passed 00:24:55.827 Test: blockdev writev readv block ...passed 00:24:55.827 Test: blockdev writev readv size > 128k ...passed 00:24:55.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:55.827 Test: blockdev comparev and writev ...[2024-11-26 11:35:13.896166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x322a0b000 len:0x1000 00:24:55.827 [2024-11-26 11:35:13.896255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:55.827 passed 00:24:55.827 Test: blockdev nvme passthru rw ...passed 00:24:55.827 Test: blockdev nvme passthru vendor specific ...passed 00:24:55.827 Test: blockdev nvme admin passthru ...passed 00:24:55.827 Test: blockdev copy ...passed 00:24:55.827 Suite: bdevio tests on: Nvme0n1p1 00:24:55.827 Test: blockdev write read block ...passed 00:24:55.827 Test: blockdev write zeroes read block ...passed 00:24:55.827 Test: blockdev write zeroes read no split ...passed 00:24:55.827 Test: blockdev write zeroes read split ...passed 00:24:55.827 Test: blockdev write zeroes read split partial ...passed 00:24:55.827 Test: blockdev reset ...[2024-11-26 11:35:13.908938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:24:55.827 passed 00:24:55.827 Test: blockdev write read 8 blocks ...[2024-11-26 11:35:13.910908] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
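The COMPARE FAILURE notices in the comparev-and-writev cases above are not test failures: the case apparently submits a COMPARE that is meant to miscompare, since the suite still reports it as passed. Reading the status printed as (02/85) under the usual NVMe status-field layout:

  # sct=0x02 -> status code type: Media and Data Integrity Errors
  # sc =0x85 -> status code: Compare Failure
  # qid:1 cid:190 identify the I/O queue pair and command that miscompared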
00:24:55.827 passed 00:24:55.827 Test: blockdev write read size > 128k ...passed 00:24:55.827 Test: blockdev write read invalid size ...passed 00:24:55.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:55.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:55.827 Test: blockdev write read max offset ...passed 00:24:55.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:55.827 Test: blockdev writev readv 8 blocks ...passed 00:24:55.827 Test: blockdev writev readv 30 x 1block ...passed 00:24:55.827 Test: blockdev writev readv block ...passed 00:24:55.828 Test: blockdev writev readv size > 128k ...passed 00:24:55.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:55.828 Test: blockdev comparev and writev ...[2024-11-26 11:35:13.917419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x322a0d000 len:0x1000 00:24:55.828 [2024-11-26 11:35:13.917485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:55.828 passed 00:24:55.828 Test: blockdev nvme passthru rw ...passed 00:24:55.828 Test: blockdev nvme passthru vendor specific ...passed 00:24:55.828 Test: blockdev nvme admin passthru ...passed 00:24:55.828 Test: blockdev copy ...passed 00:24:55.828 00:24:55.828 Run Summary: Type Total Ran Passed Failed Inactive 00:24:55.828 suites 2 2 n/a 0 0 00:24:55.828 tests 46 46 46 0 0 00:24:55.828 asserts 284 284 284 0 n/a 00:24:55.828 00:24:55.828 Elapsed time = 0.108 seconds 00:24:55.828 0 00:24:55.828 11:35:13 -- bdev/blockdev.sh@293 -- # killprocess 100820 00:24:55.828 11:35:13 -- common/autotest_common.sh@936 -- # '[' -z 100820 ']' 00:24:55.828 11:35:13 -- common/autotest_common.sh@940 -- # kill -0 100820 00:24:55.828 11:35:13 -- common/autotest_common.sh@941 -- # uname 00:24:55.828 11:35:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.828 11:35:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100820 00:24:55.828 11:35:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:55.828 11:35:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:55.828 11:35:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100820' 00:24:55.828 killing process with pid 100820 00:24:55.828 11:35:13 -- common/autotest_common.sh@955 -- # kill 100820 00:24:55.828 11:35:13 -- common/autotest_common.sh@960 -- # wait 100820 00:24:56.088 11:35:14 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:24:56.088 00:24:56.088 real 0m1.354s 00:24:56.088 user 0m3.522s 00:24:56.088 sys 0m0.274s 00:24:56.088 11:35:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:56.088 11:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:56.088 ************************************ 00:24:56.088 END TEST bdev_bounds 00:24:56.088 ************************************ 00:24:56.088 11:35:14 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:24:56.088 11:35:14 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:56.088 11:35:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.088 11:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:56.088 ************************************ 00:24:56.088 START TEST bdev_nbd 00:24:56.088 ************************************ 
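The bdev_nbd test starting here exports each GPT bdev as a kernel /dev/nbdX device over the spdk-nbd RPC socket and exercises it with dd. Stripped of the harness, one attach/probe/detach cycle looks like the sketch below; the paths and RPC names are the ones invoked in this log, while the single smoke-test read is an assumption standing in for the fuller checks that follow.

    # Sketch: export a bdev as /dev/nbd0 through the spdk-nbd socket, probe it, detach.
    # Assumes a bdev_svc app is already listening on the socket (as started below).
    sock=/var/tmp/spdk-nbd.sock
    rpc=./scripts/rpc.py
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0       # attach the bdev to an NBD device
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct  # one O_DIRECT read as a probe
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                  # detach again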
00:24:56.088 11:35:14 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:24:56.088 11:35:14 -- bdev/blockdev.sh@298 -- # uname -s 00:24:56.088 11:35:14 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:24:56.088 11:35:14 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:56.088 11:35:14 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:56.088 11:35:14 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:24:56.088 11:35:14 -- bdev/blockdev.sh@302 -- # local bdev_all 00:24:56.088 11:35:14 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:24:56.088 11:35:14 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:24:56.088 11:35:14 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:56.088 11:35:14 -- bdev/blockdev.sh@309 -- # local nbd_all 00:24:56.088 11:35:14 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:24:56.088 11:35:14 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:56.088 11:35:14 -- bdev/blockdev.sh@312 -- # local nbd_list 00:24:56.088 11:35:14 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:24:56.088 11:35:14 -- bdev/blockdev.sh@313 -- # local bdev_list 00:24:56.088 11:35:14 -- bdev/blockdev.sh@316 -- # nbd_pid=100870 00:24:56.088 11:35:14 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:56.088 11:35:14 -- bdev/blockdev.sh@318 -- # waitforlisten 100870 /var/tmp/spdk-nbd.sock 00:24:56.088 11:35:14 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:56.088 11:35:14 -- common/autotest_common.sh@829 -- # '[' -z 100870 ']' 00:24:56.088 11:35:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:56.088 11:35:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.088 11:35:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:56.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:56.088 11:35:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.088 11:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:56.088 [2024-11-26 11:35:14.261072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:56.088 [2024-11-26 11:35:14.261257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.348 [2024-11-26 11:35:14.422782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.348 [2024-11-26 11:35:14.456917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.293 11:35:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.293 11:35:15 -- common/autotest_common.sh@862 -- # return 0 00:24:57.293 11:35:15 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@24 -- # local i 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:57.293 11:35:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:57.293 11:35:15 -- common/autotest_common.sh@867 -- # local i 00:24:57.293 11:35:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:57.293 11:35:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:57.293 11:35:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:57.293 11:35:15 -- common/autotest_common.sh@871 -- # break 00:24:57.293 11:35:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:57.293 11:35:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:57.293 11:35:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.293 1+0 records in 00:24:57.293 1+0 records out 00:24:57.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517093 s, 7.9 MB/s 00:24:57.293 11:35:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.293 11:35:15 -- common/autotest_common.sh@884 -- # size=4096 00:24:57.293 11:35:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.293 11:35:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:57.293 11:35:15 -- common/autotest_common.sh@887 -- # return 0 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:24:57.293 11:35:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:24:57.570 11:35:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:57.570 11:35:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:57.570 11:35:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:57.570 11:35:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:57.570 11:35:15 -- common/autotest_common.sh@867 -- # local i 00:24:57.570 11:35:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:57.570 11:35:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:57.570 11:35:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:57.570 11:35:15 -- common/autotest_common.sh@871 -- # break 00:24:57.570 11:35:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:57.570 11:35:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:57.570 11:35:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.570 1+0 records in 00:24:57.570 1+0 records out 00:24:57.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576573 s, 7.1 MB/s 00:24:57.570 11:35:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.570 11:35:15 -- common/autotest_common.sh@884 -- # size=4096 00:24:57.570 11:35:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.570 11:35:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:57.570 11:35:15 -- common/autotest_common.sh@887 -- # return 0 00:24:57.570 11:35:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:57.570 11:35:15 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:24:57.570 11:35:15 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:57.855 11:35:15 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:57.855 { 00:24:57.855 "nbd_device": "/dev/nbd0", 00:24:57.855 "bdev_name": "Nvme0n1p1" 00:24:57.855 }, 00:24:57.855 { 00:24:57.855 "nbd_device": "/dev/nbd1", 00:24:57.855 "bdev_name": "Nvme0n1p2" 00:24:57.855 } 00:24:57.855 ]' 00:24:57.855 11:35:15 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:57.855 11:35:15 -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:57.855 { 00:24:57.855 "nbd_device": "/dev/nbd0", 00:24:57.855 "bdev_name": "Nvme0n1p1" 00:24:57.855 }, 00:24:57.856 { 00:24:57.856 "nbd_device": "/dev/nbd1", 00:24:57.856 "bdev_name": "Nvme0n1p2" 00:24:57.856 } 00:24:57.856 ]' 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@51 -- # local i 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:57.856 11:35:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:58.126 11:35:16 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@41 -- # break 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.126 11:35:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@41 -- # break 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:58.386 11:35:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:58.645 11:35:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@65 -- # true 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@65 -- # count=0 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@122 -- # count=0 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@127 -- # return 0 00:24:58.646 11:35:16 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@12 -- # local i 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:58.646 11:35:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:24:58.905 /dev/nbd0 00:24:58.905 11:35:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:58.905 11:35:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:58.905 11:35:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:58.905 11:35:16 -- common/autotest_common.sh@867 -- # local i 00:24:58.905 11:35:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:58.905 11:35:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:58.905 11:35:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:58.905 11:35:16 -- common/autotest_common.sh@871 -- # break 00:24:58.905 11:35:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:58.905 11:35:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:58.905 11:35:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:58.905 1+0 records in 00:24:58.905 1+0 records out 00:24:58.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067891 s, 6.0 MB/s 00:24:58.905 11:35:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:58.905 11:35:16 -- common/autotest_common.sh@884 -- # size=4096 00:24:58.905 11:35:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:58.905 11:35:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:58.905 11:35:16 -- common/autotest_common.sh@887 -- # return 0 00:24:58.905 11:35:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:58.905 11:35:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:58.905 11:35:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:24:58.905 /dev/nbd1 00:24:58.905 11:35:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:59.165 11:35:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:59.165 11:35:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:59.165 11:35:17 -- common/autotest_common.sh@867 -- # local i 00:24:59.165 11:35:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:59.165 11:35:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:59.165 11:35:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:59.165 11:35:17 -- common/autotest_common.sh@871 -- # break 00:24:59.165 11:35:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:59.165 11:35:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:59.165 11:35:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:59.165 1+0 records in 00:24:59.165 1+0 records out 00:24:59.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765807 s, 5.3 MB/s 00:24:59.165 11:35:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.165 11:35:17 -- common/autotest_common.sh@884 -- # size=4096 00:24:59.165 11:35:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.165 11:35:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:59.165 11:35:17 -- common/autotest_common.sh@887 -- # return 0 00:24:59.165 11:35:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:59.165 11:35:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:59.165 11:35:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
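Every attach above is gated by the waitfornbd helper: it polls /proc/partitions until the kernel exposes the device, then proves the device readable with a single O_DIRECT block and checks the copied size. A condensed form of that loop follows; the retry bound of 20 matches the (( i <= 20 )) trace, while the sleep pacing is an assumption since the trace elides it.

    # Condensed waitfornbd: wait for /dev/$1 to appear, then read one 4 KiB block O_DIRECT.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing between retries
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]   # the read must yield a full block
    }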
00:24:59.165 11:35:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:59.165 11:35:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:59.425 { 00:24:59.425 "nbd_device": "/dev/nbd0", 00:24:59.425 "bdev_name": "Nvme0n1p1" 00:24:59.425 }, 00:24:59.425 { 00:24:59.425 "nbd_device": "/dev/nbd1", 00:24:59.425 "bdev_name": "Nvme0n1p2" 00:24:59.425 } 00:24:59.425 ]' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:59.425 { 00:24:59.425 "nbd_device": "/dev/nbd0", 00:24:59.425 "bdev_name": "Nvme0n1p1" 00:24:59.425 }, 00:24:59.425 { 00:24:59.425 "nbd_device": "/dev/nbd1", 00:24:59.425 "bdev_name": "Nvme0n1p2" 00:24:59.425 } 00:24:59.425 ]' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:59.425 /dev/nbd1' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:59.425 /dev/nbd1' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@65 -- # count=2 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@95 -- # count=2 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:59.425 256+0 records in 00:24:59.425 256+0 records out 00:24:59.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748691 s, 140 MB/s 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:59.425 256+0 records in 00:24:59.425 256+0 records out 00:24:59.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0935616 s, 11.2 MB/s 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:59.425 256+0 records in 00:24:59.425 256+0 records out 00:24:59.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0839426 s, 12.5 MB/s 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
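The verify half of nbd_dd_data_verify now replays the data written above: the harness seeded ./test/bdev/nbdrandtest with 1 MiB from /dev/urandom, pushed it through both NBD devices with O_DIRECT, and will byte-compare each device against the source file. The whole round trip reduces to the following sketch, with block sizes, counts, and cmp flags taken from the trace:

    # Write 1 MiB of random data through each NBD device, then verify byte-for-byte.
    tmp=./test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"   # -b prints differing bytes; -n 1M bounds the compare
    done
    rm "$tmp"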
00:24:59.425 11:35:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@51 -- # local i 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.425 11:35:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@41 -- # break 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.684 11:35:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@41 -- # break 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:59.944 11:35:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@65 -- # true 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@65 -- # count=0 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@104 -- # count=0 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:00.203 11:35:18 -- 
bdev/nbd_common.sh@109 -- # return 0 00:25:00.203 11:35:18 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:00.203 11:35:18 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:00.463 malloc_lvol_verify 00:25:00.463 11:35:18 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:00.722 e83329de-3631-44b5-869c-1c7454c27de8 00:25:00.722 11:35:18 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:00.980 7e20a6f7-ad76-407e-a4b5-15c19ea00495 00:25:00.980 11:35:19 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:01.239 /dev/nbd0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:01.239 mke2fs 1.47.0 (5-Feb-2023) 00:25:01.239 00:25:01.239 Filesystem too small for a journal 00:25:01.239 Discarding device blocks: 0/1024 done 00:25:01.239 Creating filesystem with 1024 4k blocks and 1024 inodes 00:25:01.239 00:25:01.239 Allocating group tables: 0/1 done 00:25:01.239 Writing inode tables: 0/1 done 00:25:01.239 Writing superblocks and filesystem accounting information: 0/1 done 00:25:01.239 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@51 -- # local i 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@41 -- # break 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:01.239 11:35:19 -- bdev/nbd_common.sh@147 -- # return 0 00:25:01.239 11:35:19 -- bdev/blockdev.sh@324 -- # killprocess 100870 00:25:01.239 11:35:19 -- common/autotest_common.sh@936 -- # '[' -z 100870 ']' 00:25:01.239 11:35:19 -- common/autotest_common.sh@940 -- # kill -0 100870 00:25:01.239 11:35:19 -- common/autotest_common.sh@941 -- # uname 00:25:01.239 11:35:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:01.239 11:35:19 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100870 00:25:01.498 11:35:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:01.498 11:35:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:01.498 killing process with pid 100870 00:25:01.498 11:35:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100870' 00:25:01.498 11:35:19 -- common/autotest_common.sh@955 -- # kill 100870 00:25:01.498 11:35:19 -- common/autotest_common.sh@960 -- # wait 100870 00:25:01.498 11:35:19 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:25:01.498 00:25:01.498 real 0m5.476s 00:25:01.498 user 0m8.374s 00:25:01.498 sys 0m1.445s 00:25:01.498 11:35:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:01.498 11:35:19 -- common/autotest_common.sh@10 -- # set +x 00:25:01.498 ************************************ 00:25:01.498 END TEST bdev_nbd 00:25:01.498 ************************************ 00:25:01.498 11:35:19 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:25:01.498 11:35:19 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:25:01.498 skipping fio tests on NVMe due to multi-ns failures. 00:25:01.498 11:35:19 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:25:01.498 11:35:19 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:25:01.498 11:35:19 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:01.498 11:35:19 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:01.498 11:35:19 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:01.498 11:35:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:01.498 11:35:19 -- common/autotest_common.sh@10 -- # set +x 00:25:01.498 ************************************ 00:25:01.498 START TEST bdev_verify 00:25:01.498 ************************************ 00:25:01.498 11:35:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:01.755 [2024-11-26 11:35:19.768782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:01.755 [2024-11-26 11:35:19.768937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101091 ] 00:25:01.755 [2024-11-26 11:35:19.923818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:01.755 [2024-11-26 11:35:19.969312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.755 [2024-11-26 11:35:19.969388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.013 Running I/O for 5 seconds... 
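While the 5-second workload runs: bdev_verify is a thin wrapper around the bdevperf example, pointed at the same bdev.json as every test above. The flags below are copied verbatim from this invocation; -m 0x3 pins the job to cores 0 and 1, and -w verify makes bdevperf read back and check every block it writes.

    # The verify workload driven here: queue depth 128, 4 KiB I/Os, 5 seconds,
    # spread across cores 0 and 1 exactly as in the run_test line above.
    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3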
00:25:07.283 00:25:07.283 Latency(us) 00:25:07.283 [2024-11-26T11:35:25.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.283 [2024-11-26T11:35:25.513Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:07.283 Verification LBA range: start 0x0 length 0x4ff80 00:25:07.283 Nvme0n1p1 : 5.01 7581.88 29.62 0.00 0.00 16838.96 1638.40 21328.99 00:25:07.283 [2024-11-26T11:35:25.513Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:07.283 Verification LBA range: start 0x4ff80 length 0x4ff80 00:25:07.283 Nvme0n1p1 : 5.02 7625.44 29.79 0.00 0.00 16731.00 389.12 19899.11 00:25:07.283 [2024-11-26T11:35:25.513Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:07.283 Verification LBA range: start 0x0 length 0x4ff7f 00:25:07.283 Nvme0n1p2 : 5.02 7585.44 29.63 0.00 0.00 16816.51 897.40 20852.36 00:25:07.283 [2024-11-26T11:35:25.513Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:07.283 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:25:07.283 Nvme0n1p2 : 5.01 7620.73 29.77 0.00 0.00 16753.78 1392.64 21448.15 00:25:07.283 [2024-11-26T11:35:25.513Z] =================================================================================================================== 00:25:07.283 [2024-11-26T11:35:25.513Z] Total : 30413.49 118.80 0.00 0.00 16784.94 389.12 21448.15 00:25:11.474 ************************************ 00:25:11.474 END TEST bdev_verify 00:25:11.474 ************************************ 00:25:11.474 00:25:11.474 real 0m9.117s 00:25:11.474 user 0m17.563s 00:25:11.474 sys 0m0.208s 00:25:11.474 11:35:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:11.474 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:25:11.474 11:35:28 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:11.474 11:35:28 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:11.474 11:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:11.474 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:25:11.474 ************************************ 00:25:11.474 START TEST bdev_verify_big_io 00:25:11.474 ************************************ 00:25:11.474 11:35:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:11.474 [2024-11-26 11:35:28.957573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:11.474 [2024-11-26 11:35:28.957740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101205 ] 00:25:11.474 [2024-11-26 11:35:29.122758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.474 [2024-11-26 11:35:29.161362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.474 [2024-11-26 11:35:29.161436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.474 Running I/O for 5 seconds... 
00:25:16.747 00:25:16.747 Latency(us) 00:25:16.747 [2024-11-26T11:35:34.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.747 [2024-11-26T11:35:34.977Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:16.747 Verification LBA range: start 0x0 length 0x4ff8 00:25:16.747 Nvme0n1p1 : 5.10 1013.30 63.33 0.00 0.00 124990.40 2695.91 187790.43 00:25:16.747 [2024-11-26T11:35:34.977Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:16.747 Verification LBA range: start 0x4ff8 length 0x4ff8 00:25:16.747 Nvme0n1p1 : 5.10 970.37 60.65 0.00 0.00 130524.71 2308.65 183024.17 00:25:16.747 [2024-11-26T11:35:34.977Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:16.747 Verification LBA range: start 0x0 length 0x4ff7 00:25:16.747 Nvme0n1p2 : 5.10 1020.64 63.79 0.00 0.00 122787.52 666.53 144894.14 00:25:16.747 [2024-11-26T11:35:34.977Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:16.747 Verification LBA range: start 0x4ff7 length 0x4ff7 00:25:16.747 Nvme0n1p2 : 5.10 978.18 61.14 0.00 0.00 128213.83 707.49 186837.18 00:25:16.747 [2024-11-26T11:35:34.978Z] =================================================================================================================== 00:25:16.748 [2024-11-26T11:35:34.978Z] Total : 3982.50 248.91 0.00 0.00 126566.15 666.53 187790.43 00:25:16.748 00:25:16.748 real 0m5.767s 00:25:16.748 user 0m10.892s 00:25:16.748 sys 0m0.183s 00:25:16.748 11:35:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:16.748 ************************************ 00:25:16.748 END TEST bdev_verify_big_io 00:25:16.748 ************************************ 00:25:16.748 11:35:34 -- common/autotest_common.sh@10 -- # set +x 00:25:16.748 11:35:34 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:16.748 11:35:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:16.748 11:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:16.748 11:35:34 -- common/autotest_common.sh@10 -- # set +x 00:25:16.748 ************************************ 00:25:16.748 START TEST bdev_write_zeroes 00:25:16.748 ************************************ 00:25:16.748 11:35:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:16.748 [2024-11-26 11:35:34.759479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:16.748 [2024-11-26 11:35:34.759624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101283 ] 00:25:16.748 [2024-11-26 11:35:34.905198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.748 [2024-11-26 11:35:34.935717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.007 Running I/O for 1 seconds... 
00:25:17.941 00:25:17.941 Latency(us) 00:25:17.941 [2024-11-26T11:35:36.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.941 [2024-11-26T11:35:36.171Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:17.941 Nvme0n1p1 : 1.01 23327.48 91.12 0.00 0.00 5475.44 3038.49 16681.89 00:25:17.941 [2024-11-26T11:35:36.171Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:17.941 Nvme0n1p2 : 1.01 23303.76 91.03 0.00 0.00 5473.90 2636.33 16920.20 00:25:17.941 [2024-11-26T11:35:36.171Z] =================================================================================================================== 00:25:17.941 [2024-11-26T11:35:36.171Z] Total : 46631.24 182.15 0.00 0.00 5474.67 2636.33 16920.20 00:25:18.199 00:25:18.199 real 0m1.576s 00:25:18.199 user 0m1.330s 00:25:18.199 sys 0m0.146s 00:25:18.199 ************************************ 00:25:18.199 END TEST bdev_write_zeroes 00:25:18.199 ************************************ 00:25:18.199 11:35:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:18.199 11:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:18.199 11:35:36 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:18.199 11:35:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:18.199 11:35:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:18.199 11:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:18.199 ************************************ 00:25:18.199 START TEST bdev_json_nonenclosed 00:25:18.199 ************************************ 00:25:18.199 11:35:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:18.199 [2024-11-26 11:35:36.394424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.199 [2024-11-26 11:35:36.394594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101318 ] 00:25:18.458 [2024-11-26 11:35:36.560020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.458 [2024-11-26 11:35:36.594067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.458 [2024-11-26 11:35:36.594307] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:25:18.458 [2024-11-26 11:35:36.594359] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:18.458 00:25:18.458 real 0m0.346s 00:25:18.458 user 0m0.146s 00:25:18.458 sys 0m0.099s 00:25:18.458 ************************************ 00:25:18.458 END TEST bdev_json_nonenclosed 00:25:18.458 ************************************ 00:25:18.458 11:35:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:18.459 11:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:18.718 11:35:36 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:18.718 11:35:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:18.718 11:35:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:18.718 11:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:18.718 ************************************ 00:25:18.718 START TEST bdev_json_nonarray 00:25:18.718 ************************************ 00:25:18.718 11:35:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:18.718 [2024-11-26 11:35:36.773179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.718 [2024-11-26 11:35:36.773363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101345 ] 00:25:18.718 [2024-11-26 11:35:36.919543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.718 [2024-11-26 11:35:36.950952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.719 [2024-11-26 11:35:36.951165] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:25:18.719 [2024-11-26 11:35:36.951208] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:18.978 00:25:18.978 real 0m0.303s 00:25:18.978 user 0m0.127s 00:25:18.978 sys 0m0.076s 00:25:18.978 ************************************ 00:25:18.978 END TEST bdev_json_nonarray 00:25:18.978 ************************************ 00:25:18.978 11:35:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:18.978 11:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:18.978 11:35:37 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:25:18.978 11:35:37 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:25:18.978 11:35:37 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:25:18.978 11:35:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:18.978 11:35:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:18.978 11:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:18.978 ************************************ 00:25:18.978 START TEST bdev_gpt_uuid 00:25:18.978 ************************************ 00:25:18.978 11:35:37 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:25:18.978 11:35:37 -- bdev/blockdev.sh@612 -- # local bdev 00:25:18.978 11:35:37 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:25:18.978 11:35:37 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=101365 00:25:18.978 11:35:37 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:18.978 11:35:37 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:18.978 11:35:37 -- bdev/blockdev.sh@47 -- # waitforlisten 101365 00:25:18.978 11:35:37 -- common/autotest_common.sh@829 -- # '[' -z 101365 ']' 00:25:18.978 11:35:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.978 11:35:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.978 11:35:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.978 11:35:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.978 11:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:18.978 [2024-11-26 11:35:37.149612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.978 [2024-11-26 11:35:37.149796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101365 ] 00:25:19.237 [2024-11-26 11:35:37.305067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.237 [2024-11-26 11:35:37.334176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:19.237 [2024-11-26 11:35:37.334421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.803 11:35:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.803 11:35:38 -- common/autotest_common.sh@862 -- # return 0 00:25:19.803 11:35:38 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:19.803 11:35:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.803 11:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.062 Some configs were skipped because the RPC state that can call them passed over. 
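The assertions that follow are the heart of bdev_gpt_uuid: fetch one bdev by its GPT unique partition GUID and confirm the GUID round-trips through both the alias and the gpt driver data. Outside the harness the same check is one RPC and three jq filters; the filters below are the ones traced in this log, and the default /var/tmp/spdk.sock socket is assumed.

    # Look up the first GPT partition by its unique partition GUID and assert on it.
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(./scripts/rpc.py bdev_get_bdevs -b "$uuid")
    [ "$(echo "$bdev" | jq -r length)" = 1 ]
    [ "$(echo "$bdev" | jq -r '.[0].aliases[0]')" = "$uuid" ]
    [ "$(echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid')" = "$uuid" ]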
00:25:20.062 11:35:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.062 11:35:38 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:25:20.062 11:35:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.062 11:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.062 11:35:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.062 11:35:38 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:25:20.062 11:35:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.062 11:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.062 11:35:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.062 11:35:38 -- bdev/blockdev.sh@619 -- # bdev='[ 00:25:20.062 { 00:25:20.062 "name": "Nvme0n1p1", 00:25:20.062 "aliases": [ 00:25:20.062 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:25:20.062 ], 00:25:20.062 "product_name": "GPT Disk", 00:25:20.062 "block_size": 4096, 00:25:20.062 "num_blocks": 655104, 00:25:20.062 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:25:20.062 "assigned_rate_limits": { 00:25:20.062 "rw_ios_per_sec": 0, 00:25:20.062 "rw_mbytes_per_sec": 0, 00:25:20.062 "r_mbytes_per_sec": 0, 00:25:20.062 "w_mbytes_per_sec": 0 00:25:20.062 }, 00:25:20.062 "claimed": false, 00:25:20.062 "zoned": false, 00:25:20.062 "supported_io_types": { 00:25:20.062 "read": true, 00:25:20.062 "write": true, 00:25:20.062 "unmap": true, 00:25:20.062 "write_zeroes": true, 00:25:20.062 "flush": true, 00:25:20.062 "reset": true, 00:25:20.062 "compare": true, 00:25:20.062 "compare_and_write": false, 00:25:20.062 "abort": true, 00:25:20.062 "nvme_admin": false, 00:25:20.062 "nvme_io": false 00:25:20.062 }, 00:25:20.062 "driver_specific": { 00:25:20.062 "gpt": { 00:25:20.062 "base_bdev": "Nvme0n1", 00:25:20.062 "offset_blocks": 256, 00:25:20.063 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:25:20.063 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:25:20.063 "partition_name": "SPDK_TEST_first" 00:25:20.063 } 00:25:20.063 } 00:25:20.063 } 00:25:20.063 ]' 00:25:20.063 11:35:38 -- bdev/blockdev.sh@620 -- # jq -r length 00:25:20.063 11:35:38 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:25:20.063 11:35:38 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:25:20.063 11:35:38 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:25:20.063 11:35:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.063 11:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.063 11:35:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@624 -- # bdev='[ 00:25:20.063 { 00:25:20.063 "name": "Nvme0n1p2", 00:25:20.063 "aliases": [ 00:25:20.063 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:25:20.063 ], 00:25:20.063 "product_name": "GPT Disk", 00:25:20.063 "block_size": 4096, 00:25:20.063 "num_blocks": 655103, 00:25:20.063 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:25:20.063 "assigned_rate_limits": { 00:25:20.063 "rw_ios_per_sec": 0, 00:25:20.063 
"rw_mbytes_per_sec": 0, 00:25:20.063 "r_mbytes_per_sec": 0, 00:25:20.063 "w_mbytes_per_sec": 0 00:25:20.063 }, 00:25:20.063 "claimed": false, 00:25:20.063 "zoned": false, 00:25:20.063 "supported_io_types": { 00:25:20.063 "read": true, 00:25:20.063 "write": true, 00:25:20.063 "unmap": true, 00:25:20.063 "write_zeroes": true, 00:25:20.063 "flush": true, 00:25:20.063 "reset": true, 00:25:20.063 "compare": true, 00:25:20.063 "compare_and_write": false, 00:25:20.063 "abort": true, 00:25:20.063 "nvme_admin": false, 00:25:20.063 "nvme_io": false 00:25:20.063 }, 00:25:20.063 "driver_specific": { 00:25:20.063 "gpt": { 00:25:20.063 "base_bdev": "Nvme0n1", 00:25:20.063 "offset_blocks": 655360, 00:25:20.063 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:25:20.063 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:25:20.063 "partition_name": "SPDK_TEST_second" 00:25:20.063 } 00:25:20.063 } 00:25:20.063 } 00:25:20.063 ]' 00:25:20.063 11:35:38 -- bdev/blockdev.sh@625 -- # jq -r length 00:25:20.063 11:35:38 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:25:20.063 11:35:38 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:25:20.063 11:35:38 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:25:20.063 11:35:38 -- bdev/blockdev.sh@629 -- # killprocess 101365 00:25:20.063 11:35:38 -- common/autotest_common.sh@936 -- # '[' -z 101365 ']' 00:25:20.063 11:35:38 -- common/autotest_common.sh@940 -- # kill -0 101365 00:25:20.063 11:35:38 -- common/autotest_common.sh@941 -- # uname 00:25:20.063 11:35:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.063 11:35:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101365 00:25:20.063 11:35:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:20.063 killing process with pid 101365 00:25:20.063 11:35:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:20.063 11:35:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101365' 00:25:20.063 11:35:38 -- common/autotest_common.sh@955 -- # kill 101365 00:25:20.063 11:35:38 -- common/autotest_common.sh@960 -- # wait 101365 00:25:20.322 00:25:20.322 real 0m1.433s 00:25:20.322 user 0m1.522s 00:25:20.322 sys 0m0.322s 00:25:20.322 ************************************ 00:25:20.322 END TEST bdev_gpt_uuid 00:25:20.322 ************************************ 00:25:20.322 11:35:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:20.322 11:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 11:35:38 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:25:20.322 11:35:38 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:25:20.322 11:35:38 -- bdev/blockdev.sh@809 -- # cleanup 00:25:20.322 11:35:38 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:20.322 11:35:38 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:20.580 11:35:38 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:25:20.580 11:35:38 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:25:20.580 11:35:38 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:25:20.580 11:35:38 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:20.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:25:20.839 Waiting for block devices as requested 00:25:20.839 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:20.839 11:35:38 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:25:20.839 11:35:38 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:25:21.098 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:25:21.098 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:25:21.098 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:25:21.098 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:25:21.098 11:35:39 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:25:21.098 00:25:21.098 real 0m32.455s 00:25:21.098 user 0m49.844s 00:25:21.098 sys 0m5.196s 00:25:21.098 11:35:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:21.098 ************************************ 00:25:21.098 END TEST blockdev_nvme_gpt 00:25:21.098 ************************************ 00:25:21.098 11:35:39 -- common/autotest_common.sh@10 -- # set +x 00:25:21.098 11:35:39 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:25:21.098 11:35:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:21.098 11:35:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.098 11:35:39 -- common/autotest_common.sh@10 -- # set +x 00:25:21.098 ************************************ 00:25:21.098 START TEST nvme 00:25:21.098 ************************************ 00:25:21.098 11:35:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:25:21.357 * Looking for test storage... 00:25:21.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:21.357 11:35:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:21.357 11:35:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:21.357 11:35:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:21.357 11:35:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:21.357 11:35:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:21.357 11:35:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:21.357 11:35:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:21.357 11:35:39 -- scripts/common.sh@335 -- # IFS=.-: 00:25:21.357 11:35:39 -- scripts/common.sh@335 -- # read -ra ver1 00:25:21.357 11:35:39 -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.357 11:35:39 -- scripts/common.sh@336 -- # read -ra ver2 00:25:21.357 11:35:39 -- scripts/common.sh@337 -- # local 'op=<' 00:25:21.357 11:35:39 -- scripts/common.sh@339 -- # ver1_l=2 00:25:21.357 11:35:39 -- scripts/common.sh@340 -- # ver2_l=1 00:25:21.357 11:35:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:21.357 11:35:39 -- scripts/common.sh@343 -- # case "$op" in 00:25:21.357 11:35:39 -- scripts/common.sh@344 -- # : 1 00:25:21.357 11:35:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:21.357 11:35:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.357 11:35:39 -- scripts/common.sh@364 -- # decimal 1 00:25:21.357 11:35:39 -- scripts/common.sh@352 -- # local d=1 00:25:21.357 11:35:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.357 11:35:39 -- scripts/common.sh@354 -- # echo 1 00:25:21.357 11:35:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:21.357 11:35:39 -- scripts/common.sh@365 -- # decimal 2 00:25:21.357 11:35:39 -- scripts/common.sh@352 -- # local d=2 00:25:21.357 11:35:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.357 11:35:39 -- scripts/common.sh@354 -- # echo 2 00:25:21.357 11:35:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:21.357 11:35:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:21.357 11:35:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:21.357 11:35:39 -- scripts/common.sh@367 -- # return 0 00:25:21.357 11:35:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.357 11:35:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.357 --rc genhtml_branch_coverage=1 00:25:21.357 --rc genhtml_function_coverage=1 00:25:21.357 --rc genhtml_legend=1 00:25:21.357 --rc geninfo_all_blocks=1 00:25:21.357 --rc geninfo_unexecuted_blocks=1 00:25:21.357 00:25:21.357 ' 00:25:21.357 11:35:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.357 --rc genhtml_branch_coverage=1 00:25:21.357 --rc genhtml_function_coverage=1 00:25:21.357 --rc genhtml_legend=1 00:25:21.357 --rc geninfo_all_blocks=1 00:25:21.357 --rc geninfo_unexecuted_blocks=1 00:25:21.357 00:25:21.357 ' 00:25:21.357 11:35:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.357 --rc genhtml_branch_coverage=1 00:25:21.357 --rc genhtml_function_coverage=1 00:25:21.357 --rc genhtml_legend=1 00:25:21.357 --rc geninfo_all_blocks=1 00:25:21.357 --rc geninfo_unexecuted_blocks=1 00:25:21.357 00:25:21.357 ' 00:25:21.357 11:35:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.357 --rc genhtml_branch_coverage=1 00:25:21.357 --rc genhtml_function_coverage=1 00:25:21.357 --rc genhtml_legend=1 00:25:21.357 --rc geninfo_all_blocks=1 00:25:21.357 --rc geninfo_unexecuted_blocks=1 00:25:21.357 00:25:21.357 ' 00:25:21.357 11:35:39 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:21.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:25:21.876 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:25:22.444 11:35:40 -- nvme/nvme.sh@79 -- # uname 00:25:22.444 11:35:40 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:25:22.444 11:35:40 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:25:22.444 11:35:40 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:25:22.444 11:35:40 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:25:22.444 11:35:40 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:25:22.444 11:35:40 -- common/autotest_common.sh@1055 -- # echo 0 00:25:22.445 11:35:40 -- common/autotest_common.sh@1057 -- # stubpid=101723 00:25:22.445 Waiting for stub to ready for secondary processes... 
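The xtrace above steps through the lt/cmp_versions helpers in scripts/common.sh, which split both version strings on '.', '-' and ':' and compare them element by element (here deciding that lcov 1.15 is older than 2). A minimal standalone sketch of the same idea, assuming purely numeric components; the function name and the absent-field-defaults-to-zero behavior are choices of this sketch, not the scripts/common.sh API:

    #!/usr/bin/env bash
    # Element-wise "less than" over dotted version strings, mirroring the
    # comparison traced above for "lt 1.15 2". Returns 0 (true) if $1 < $2.
    # Assumes numeric components only (no "rc1"-style suffixes).
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}    # absent fields count as 0
            (( 10#$x < 10#$y )) && return 0    # force base 10 (leading zeros)
            (( 10#$x > 10#$y )) && return 1
        done
        return 1    # versions are equal, so not strictly less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2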
00:25:22.445 11:35:40 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:25:22.445 11:35:40 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:25:22.445 11:35:40 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:25:22.445 11:35:40 -- common/autotest_common.sh@1061 -- # [[ -e /proc/101723 ]] 00:25:22.445 11:35:40 -- common/autotest_common.sh@1062 -- # sleep 1s 00:25:22.445 [2024-11-26 11:35:40.569152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:22.445 [2024-11-26 11:35:40.569325] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.382 [2024-11-26 11:35:41.338663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:23.382 [2024-11-26 11:35:41.361693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.382 [2024-11-26 11:35:41.361766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.382 [2024-11-26 11:35:41.361837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.382 [2024-11-26 11:35:41.370069] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:25:23.382 [2024-11-26 11:35:41.381291] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:25:23.382 [2024-11-26 11:35:41.381981] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:25:23.382 done. 00:25:23.382 11:35:41 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:25:23.382 11:35:41 -- common/autotest_common.sh@1064 -- # echo done. 
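Condensing the stub startup traced above: the test launches the stub app in the background with '-s 4096 -i 0 -m 0xE', prints the waiting banner, then polls once per second until /var/run/spdk_stub0 exists, giving up if the stub's /proc entry disappears first. A self-contained sketch of that handshake; the stub path and the 60-iteration cap are assumptions of this sketch, not values from the test:

    #!/usr/bin/env bash
    # Poll-until-ready loop, as in the trace above: wait for the stub's
    # readiness file while confirming the process is still alive.
    STUB_BIN=./test/app/stub/stub            # illustrative path
    "$STUB_BIN" -s 4096 -i 0 -m 0xE &
    stubpid=$!

    echo "Waiting for stub to be ready for secondary processes..."
    for (( i = 0; i < 60; i++ )); do         # 60 s cap is this sketch's choice
        [[ -e /var/run/spdk_stub0 ]] && break
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1
    done
    [[ -e /var/run/spdk_stub0 ]] || { echo "stub never became ready" >&2; exit 1; }
    echo done.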
00:25:23.382 11:35:41 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:25:23.382 11:35:41 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:25:23.382 11:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.382 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:25:23.382 ************************************ 00:25:23.382 START TEST nvme_reset 00:25:23.382 ************************************ 00:25:23.382 11:35:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:25:23.642 Initializing NVMe Controllers 00:25:23.642 Skipping QEMU NVMe SSD at 0000:00:06.0 00:25:23.642 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:25:23.642 ************************************ 00:25:23.642 END TEST nvme_reset 00:25:23.642 ************************************ 00:25:23.642 00:25:23.642 real 0m0.284s 00:25:23.642 user 0m0.099s 00:25:23.642 sys 0m0.144s 00:25:23.642 11:35:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:23.642 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:25:23.642 11:35:41 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:25:23.642 11:35:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:23.642 11:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.642 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:25:23.642 ************************************ 00:25:23.642 START TEST nvme_identify 00:25:23.642 ************************************ 00:25:23.642 11:35:41 -- common/autotest_common.sh@1114 -- # nvme_identify 00:25:23.642 11:35:41 -- nvme/nvme.sh@12 -- # bdfs=() 00:25:23.642 11:35:41 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:25:23.642 11:35:41 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:25:23.642 11:35:41 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:25:23.642 11:35:41 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:23.642 11:35:41 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:23.642 11:35:41 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:23.902 11:35:41 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:23.902 11:35:41 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:23.902 11:35:41 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:25:23.902 11:35:41 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:25:23.902 11:35:41 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:25:24.163 ===================================================== 00:25:24.163 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:25:24.163 ===================================================== 00:25:24.163 Controller Capabilities/Features 00:25:24.163 ================================ 00:25:24.163 Vendor ID: 1b36 00:25:24.163 Subsystem Vendor ID: 1af4 00:25:24.163 Serial Number: 12340 00:25:24.163 Model Number: QEMU NVMe Ctrl 00:25:24.163 Firmware Version: 8.0.0 00:25:24.163 Recommended Arb Burst: 6 00:25:24.163 IEEE OUI Identifier: 00 54 52 00:25:24.163 Multi-path I/O 00:25:24.163 May have multiple subsystem ports: No 00:25:24.163 May have multiple controllers: No 00:25:24.163 Associated with SR-IOV VF: No 00:25:24.163 Max Data Transfer Size: 524288 00:25:24.163 Max Number of Namespaces: 256 00:25:24.163 Max Number of I/O Queues: 64 
00:25:24.163 NVMe Specification Version (VS): 1.4 00:25:24.163 NVMe Specification Version (Identify): 1.4 00:25:24.163 Maximum Queue Entries: 2048 00:25:24.163 Contiguous Queues Required: Yes 00:25:24.163 Arbitration Mechanisms Supported 00:25:24.163 Weighted Round Robin: Not Supported 00:25:24.163 Vendor Specific: Not Supported 00:25:24.163 Reset Timeout: 7500 ms 00:25:24.163 Doorbell Stride: 4 bytes 00:25:24.163 NVM Subsystem Reset: Not Supported 00:25:24.163 Command Sets Supported 00:25:24.163 NVM Command Set: Supported 00:25:24.163 Boot Partition: Not Supported 00:25:24.163 Memory Page Size Minimum: 4096 bytes 00:25:24.163 Memory Page Size Maximum: 65536 bytes 00:25:24.163 Persistent Memory Region: Not Supported 00:25:24.163 Optional Asynchronous Events Supported 00:25:24.163 Namespace Attribute Notices: Supported 00:25:24.163 Firmware Activation Notices: Not Supported 00:25:24.163 ANA Change Notices: Not Supported 00:25:24.163 PLE Aggregate Log Change Notices: Not Supported 00:25:24.163 LBA Status Info Alert Notices: Not Supported 00:25:24.163 EGE Aggregate Log Change Notices: Not Supported 00:25:24.163 Normal NVM Subsystem Shutdown event: Not Supported 00:25:24.163 Zone Descriptor Change Notices: Not Supported 00:25:24.163 Discovery Log Change Notices: Not Supported 00:25:24.163 Controller Attributes 00:25:24.163 128-bit Host Identifier: Not Supported 00:25:24.163 Non-Operational Permissive Mode: Not Supported 00:25:24.163 NVM Sets: Not Supported 00:25:24.163 Read Recovery Levels: Not Supported 00:25:24.163 Endurance Groups: Not Supported 00:25:24.163 Predictable Latency Mode: Not Supported 00:25:24.163 Traffic Based Keep ALive: Not Supported 00:25:24.163 Namespace Granularity: Not Supported 00:25:24.163 SQ Associations: Not Supported 00:25:24.163 UUID List: Not Supported 00:25:24.163 Multi-Domain Subsystem: Not Supported 00:25:24.163 Fixed Capacity Management: Not Supported 00:25:24.163 Variable Capacity Management: Not Supported 00:25:24.163 Delete Endurance Group: Not Supported 00:25:24.163 Delete NVM Set: Not Supported 00:25:24.163 Extended LBA Formats Supported: Supported 00:25:24.163 Flexible Data Placement Supported: Not Supported 00:25:24.163 00:25:24.163 Controller Memory Buffer Support 00:25:24.163 ================================ 00:25:24.163 Supported: No 00:25:24.163 00:25:24.163 Persistent Memory Region Support 00:25:24.163 ================================ 00:25:24.163 Supported: No 00:25:24.163 00:25:24.163 Admin Command Set Attributes 00:25:24.163 ============================ 00:25:24.163 Security Send/Receive: Not Supported 00:25:24.163 Format NVM: Supported 00:25:24.163 Firmware Activate/Download: Not Supported 00:25:24.163 Namespace Management: Supported 00:25:24.163 Device Self-Test: Not Supported 00:25:24.163 Directives: Supported 00:25:24.163 NVMe-MI: Not Supported 00:25:24.163 Virtualization Management: Not Supported 00:25:24.163 Doorbell Buffer Config: Supported 00:25:24.163 Get LBA Status Capability: Not Supported 00:25:24.163 Command & Feature Lockdown Capability: Not Supported 00:25:24.163 Abort Command Limit: 4 00:25:24.163 Async Event Request Limit: 4 00:25:24.163 Number of Firmware Slots: N/A 00:25:24.163 Firmware Slot 1 Read-Only: N/A 00:25:24.163 Firmware Activation Without Reset: N/A 00:25:24.163 Multiple Update Detection Support: N/A 00:25:24.163 Firmware Update Gr[2024-11-26 11:35:42.171021] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 101745 terminated unexpected 00:25:24.163 anularity: No Information Provided 
00:25:24.163 Per-Namespace SMART Log: Yes 00:25:24.163 Asymmetric Namespace Access Log Page: Not Supported 00:25:24.163 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:25:24.163 Command Effects Log Page: Supported 00:25:24.163 Get Log Page Extended Data: Supported 00:25:24.163 Telemetry Log Pages: Not Supported 00:25:24.163 Persistent Event Log Pages: Not Supported 00:25:24.163 Supported Log Pages Log Page: May Support 00:25:24.163 Commands Supported & Effects Log Page: Not Supported 00:25:24.163 Feature Identifiers & Effects Log Page:May Support 00:25:24.163 NVMe-MI Commands & Effects Log Page: May Support 00:25:24.163 Data Area 4 for Telemetry Log: Not Supported 00:25:24.163 Error Log Page Entries Supported: 1 00:25:24.163 Keep Alive: Not Supported 00:25:24.163 00:25:24.163 NVM Command Set Attributes 00:25:24.163 ========================== 00:25:24.163 Submission Queue Entry Size 00:25:24.163 Max: 64 00:25:24.163 Min: 64 00:25:24.163 Completion Queue Entry Size 00:25:24.163 Max: 16 00:25:24.163 Min: 16 00:25:24.163 Number of Namespaces: 256 00:25:24.163 Compare Command: Supported 00:25:24.163 Write Uncorrectable Command: Not Supported 00:25:24.163 Dataset Management Command: Supported 00:25:24.163 Write Zeroes Command: Supported 00:25:24.163 Set Features Save Field: Supported 00:25:24.163 Reservations: Not Supported 00:25:24.163 Timestamp: Supported 00:25:24.163 Copy: Supported 00:25:24.163 Volatile Write Cache: Present 00:25:24.163 Atomic Write Unit (Normal): 1 00:25:24.163 Atomic Write Unit (PFail): 1 00:25:24.163 Atomic Compare & Write Unit: 1 00:25:24.163 Fused Compare & Write: Not Supported 00:25:24.163 Scatter-Gather List 00:25:24.163 SGL Command Set: Supported 00:25:24.163 SGL Keyed: Not Supported 00:25:24.163 SGL Bit Bucket Descriptor: Not Supported 00:25:24.163 SGL Metadata Pointer: Not Supported 00:25:24.163 Oversized SGL: Not Supported 00:25:24.163 SGL Metadata Address: Not Supported 00:25:24.163 SGL Offset: Not Supported 00:25:24.163 Transport SGL Data Block: Not Supported 00:25:24.163 Replay Protected Memory Block: Not Supported 00:25:24.163 00:25:24.163 Firmware Slot Information 00:25:24.163 ========================= 00:25:24.163 Active slot: 1 00:25:24.163 Slot 1 Firmware Revision: 1.0 00:25:24.163 00:25:24.163 00:25:24.163 Commands Supported and Effects 00:25:24.163 ============================== 00:25:24.163 Admin Commands 00:25:24.163 -------------- 00:25:24.163 Delete I/O Submission Queue (00h): Supported 00:25:24.163 Create I/O Submission Queue (01h): Supported 00:25:24.163 Get Log Page (02h): Supported 00:25:24.163 Delete I/O Completion Queue (04h): Supported 00:25:24.163 Create I/O Completion Queue (05h): Supported 00:25:24.163 Identify (06h): Supported 00:25:24.163 Abort (08h): Supported 00:25:24.163 Set Features (09h): Supported 00:25:24.163 Get Features (0Ah): Supported 00:25:24.163 Asynchronous Event Request (0Ch): Supported 00:25:24.163 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:24.163 Directive Send (19h): Supported 00:25:24.163 Directive Receive (1Ah): Supported 00:25:24.163 Virtualization Management (1Ch): Supported 00:25:24.163 Doorbell Buffer Config (7Ch): Supported 00:25:24.163 Format NVM (80h): Supported LBA-Change 00:25:24.163 I/O Commands 00:25:24.163 ------------ 00:25:24.163 Flush (00h): Supported LBA-Change 00:25:24.164 Write (01h): Supported LBA-Change 00:25:24.164 Read (02h): Supported 00:25:24.164 Compare (05h): Supported 00:25:24.164 Write Zeroes (08h): Supported LBA-Change 00:25:24.164 Dataset Management (09h): Supported 
LBA-Change 00:25:24.164 Unknown (0Ch): Supported 00:25:24.164 Unknown (12h): Supported 00:25:24.164 Copy (19h): Supported LBA-Change 00:25:24.164 Unknown (1Dh): Supported LBA-Change 00:25:24.164 00:25:24.164 Error Log 00:25:24.164 ========= 00:25:24.164 00:25:24.164 Arbitration 00:25:24.164 =========== 00:25:24.164 Arbitration Burst: no limit 00:25:24.164 00:25:24.164 Power Management 00:25:24.164 ================ 00:25:24.164 Number of Power States: 1 00:25:24.164 Current Power State: Power State #0 00:25:24.164 Power State #0: 00:25:24.164 Max Power: 25.00 W 00:25:24.164 Non-Operational State: Operational 00:25:24.164 Entry Latency: 16 microseconds 00:25:24.164 Exit Latency: 4 microseconds 00:25:24.164 Relative Read Throughput: 0 00:25:24.164 Relative Read Latency: 0 00:25:24.164 Relative Write Throughput: 0 00:25:24.164 Relative Write Latency: 0 00:25:24.164 Idle Power: Not Reported 00:25:24.164 Active Power: Not Reported 00:25:24.164 Non-Operational Permissive Mode: Not Supported 00:25:24.164 00:25:24.164 Health Information 00:25:24.164 ================== 00:25:24.164 Critical Warnings: 00:25:24.164 Available Spare Space: OK 00:25:24.164 Temperature: OK 00:25:24.164 Device Reliability: OK 00:25:24.164 Read Only: No 00:25:24.164 Volatile Memory Backup: OK 00:25:24.164 Current Temperature: 323 Kelvin (50 Celsius) 00:25:24.164 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:24.164 Available Spare: 0% 00:25:24.164 Available Spare Threshold: 0% 00:25:24.164 Life Percentage Used: 0% 00:25:24.164 Data Units Read: 8174 00:25:24.164 Data Units Written: 3970 00:25:24.164 Host Read Commands: 373673 00:25:24.164 Host Write Commands: 201787 00:25:24.164 Controller Busy Time: 0 minutes 00:25:24.164 Power Cycles: 0 00:25:24.164 Power On Hours: 0 hours 00:25:24.164 Unsafe Shutdowns: 0 00:25:24.164 Unrecoverable Media Errors: 0 00:25:24.164 Lifetime Error Log Entries: 0 00:25:24.164 Warning Temperature Time: 0 minutes 00:25:24.164 Critical Temperature Time: 0 minutes 00:25:24.164 00:25:24.164 Number of Queues 00:25:24.164 ================ 00:25:24.164 Number of I/O Submission Queues: 64 00:25:24.164 Number of I/O Completion Queues: 64 00:25:24.164 00:25:24.164 ZNS Specific Controller Data 00:25:24.164 ============================ 00:25:24.164 Zone Append Size Limit: 0 00:25:24.164 00:25:24.164 00:25:24.164 Active Namespaces 00:25:24.164 ================= 00:25:24.164 Namespace ID:1 00:25:24.164 Error Recovery Timeout: Unlimited 00:25:24.164 Command Set Identifier: NVM (00h) 00:25:24.164 Deallocate: Supported 00:25:24.164 Deallocated/Unwritten Error: Supported 00:25:24.164 Deallocated Read Value: All 0x00 00:25:24.164 Deallocate in Write Zeroes: Not Supported 00:25:24.164 Deallocated Guard Field: 0xFFFF 00:25:24.164 Flush: Supported 00:25:24.164 Reservation: Not Supported 00:25:24.164 Namespace Sharing Capabilities: Private 00:25:24.164 Size (in LBAs): 1310720 (5GiB) 00:25:24.164 Capacity (in LBAs): 1310720 (5GiB) 00:25:24.164 Utilization (in LBAs): 1310720 (5GiB) 00:25:24.164 Thin Provisioning: Not Supported 00:25:24.164 Per-NS Atomic Units: No 00:25:24.164 Maximum Single Source Range Length: 128 00:25:24.164 Maximum Copy Length: 128 00:25:24.164 Maximum Source Range Count: 128 00:25:24.164 NGUID/EUI64 Never Reused: No 00:25:24.164 Namespace Write Protected: No 00:25:24.164 Number of LBA Formats: 8 00:25:24.164 Current LBA Format: LBA Format #04 00:25:24.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:24.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:24.164 LBA Format #02: 
Data Size: 512 Metadata Size: 16 00:25:24.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:24.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:24.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:24.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:24.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:24.164 00:25:24.164 11:35:42 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:25:24.164 11:35:42 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:24.424 ===================================================== 00:25:24.424 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:25:24.424 ===================================================== 00:25:24.424 Controller Capabilities/Features 00:25:24.424 ================================ 00:25:24.424 Vendor ID: 1b36 00:25:24.424 Subsystem Vendor ID: 1af4 00:25:24.424 Serial Number: 12340 00:25:24.424 Model Number: QEMU NVMe Ctrl 00:25:24.424 Firmware Version: 8.0.0 00:25:24.425 Recommended Arb Burst: 6 00:25:24.425 IEEE OUI Identifier: 00 54 52 00:25:24.425 Multi-path I/O 00:25:24.425 May have multiple subsystem ports: No 00:25:24.425 May have multiple controllers: No 00:25:24.425 Associated with SR-IOV VF: No 00:25:24.425 Max Data Transfer Size: 524288 00:25:24.425 Max Number of Namespaces: 256 00:25:24.425 Max Number of I/O Queues: 64 00:25:24.425 NVMe Specification Version (VS): 1.4 00:25:24.425 NVMe Specification Version (Identify): 1.4 00:25:24.425 Maximum Queue Entries: 2048 00:25:24.425 Contiguous Queues Required: Yes 00:25:24.425 Arbitration Mechanisms Supported 00:25:24.425 Weighted Round Robin: Not Supported 00:25:24.425 Vendor Specific: Not Supported 00:25:24.425 Reset Timeout: 7500 ms 00:25:24.425 Doorbell Stride: 4 bytes 00:25:24.425 NVM Subsystem Reset: Not Supported 00:25:24.425 Command Sets Supported 00:25:24.425 NVM Command Set: Supported 00:25:24.425 Boot Partition: Not Supported 00:25:24.425 Memory Page Size Minimum: 4096 bytes 00:25:24.425 Memory Page Size Maximum: 65536 bytes 00:25:24.425 Persistent Memory Region: Not Supported 00:25:24.425 Optional Asynchronous Events Supported 00:25:24.425 Namespace Attribute Notices: Supported 00:25:24.425 Firmware Activation Notices: Not Supported 00:25:24.425 ANA Change Notices: Not Supported 00:25:24.425 PLE Aggregate Log Change Notices: Not Supported 00:25:24.425 LBA Status Info Alert Notices: Not Supported 00:25:24.425 EGE Aggregate Log Change Notices: Not Supported 00:25:24.425 Normal NVM Subsystem Shutdown event: Not Supported 00:25:24.425 Zone Descriptor Change Notices: Not Supported 00:25:24.425 Discovery Log Change Notices: Not Supported 00:25:24.425 Controller Attributes 00:25:24.425 128-bit Host Identifier: Not Supported 00:25:24.425 Non-Operational Permissive Mode: Not Supported 00:25:24.425 NVM Sets: Not Supported 00:25:24.425 Read Recovery Levels: Not Supported 00:25:24.425 Endurance Groups: Not Supported 00:25:24.425 Predictable Latency Mode: Not Supported 00:25:24.425 Traffic Based Keep ALive: Not Supported 00:25:24.425 Namespace Granularity: Not Supported 00:25:24.425 SQ Associations: Not Supported 00:25:24.425 UUID List: Not Supported 00:25:24.425 Multi-Domain Subsystem: Not Supported 00:25:24.425 Fixed Capacity Management: Not Supported 00:25:24.425 Variable Capacity Management: Not Supported 00:25:24.425 Delete Endurance Group: Not Supported 00:25:24.425 Delete NVM Set: Not Supported 00:25:24.425 Extended LBA Formats Supported: Supported 00:25:24.425 
Flexible Data Placement Supported: Not Supported 00:25:24.425 00:25:24.425 Controller Memory Buffer Support 00:25:24.425 ================================ 00:25:24.425 Supported: No 00:25:24.425 00:25:24.425 Persistent Memory Region Support 00:25:24.425 ================================ 00:25:24.425 Supported: No 00:25:24.425 00:25:24.425 Admin Command Set Attributes 00:25:24.425 ============================ 00:25:24.425 Security Send/Receive: Not Supported 00:25:24.425 Format NVM: Supported 00:25:24.425 Firmware Activate/Download: Not Supported 00:25:24.425 Namespace Management: Supported 00:25:24.425 Device Self-Test: Not Supported 00:25:24.425 Directives: Supported 00:25:24.425 NVMe-MI: Not Supported 00:25:24.425 Virtualization Management: Not Supported 00:25:24.425 Doorbell Buffer Config: Supported 00:25:24.425 Get LBA Status Capability: Not Supported 00:25:24.425 Command & Feature Lockdown Capability: Not Supported 00:25:24.425 Abort Command Limit: 4 00:25:24.425 Async Event Request Limit: 4 00:25:24.425 Number of Firmware Slots: N/A 00:25:24.425 Firmware Slot 1 Read-Only: N/A 00:25:24.425 Firmware Activation Without Reset: N/A 00:25:24.425 Multiple Update Detection Support: N/A 00:25:24.425 Firmware Update Granularity: No Information Provided 00:25:24.425 Per-Namespace SMART Log: Yes 00:25:24.425 Asymmetric Namespace Access Log Page: Not Supported 00:25:24.425 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:25:24.425 Command Effects Log Page: Supported 00:25:24.425 Get Log Page Extended Data: Supported 00:25:24.425 Telemetry Log Pages: Not Supported 00:25:24.425 Persistent Event Log Pages: Not Supported 00:25:24.425 Supported Log Pages Log Page: May Support 00:25:24.425 Commands Supported & Effects Log Page: Not Supported 00:25:24.425 Feature Identifiers & Effects Log Page:May Support 00:25:24.425 NVMe-MI Commands & Effects Log Page: May Support 00:25:24.425 Data Area 4 for Telemetry Log: Not Supported 00:25:24.425 Error Log Page Entries Supported: 1 00:25:24.425 Keep Alive: Not Supported 00:25:24.425 00:25:24.425 NVM Command Set Attributes 00:25:24.425 ========================== 00:25:24.425 Submission Queue Entry Size 00:25:24.425 Max: 64 00:25:24.425 Min: 64 00:25:24.425 Completion Queue Entry Size 00:25:24.425 Max: 16 00:25:24.425 Min: 16 00:25:24.425 Number of Namespaces: 256 00:25:24.425 Compare Command: Supported 00:25:24.425 Write Uncorrectable Command: Not Supported 00:25:24.425 Dataset Management Command: Supported 00:25:24.425 Write Zeroes Command: Supported 00:25:24.425 Set Features Save Field: Supported 00:25:24.425 Reservations: Not Supported 00:25:24.425 Timestamp: Supported 00:25:24.425 Copy: Supported 00:25:24.425 Volatile Write Cache: Present 00:25:24.425 Atomic Write Unit (Normal): 1 00:25:24.425 Atomic Write Unit (PFail): 1 00:25:24.425 Atomic Compare & Write Unit: 1 00:25:24.425 Fused Compare & Write: Not Supported 00:25:24.425 Scatter-Gather List 00:25:24.425 SGL Command Set: Supported 00:25:24.425 SGL Keyed: Not Supported 00:25:24.425 SGL Bit Bucket Descriptor: Not Supported 00:25:24.425 SGL Metadata Pointer: Not Supported 00:25:24.425 Oversized SGL: Not Supported 00:25:24.425 SGL Metadata Address: Not Supported 00:25:24.425 SGL Offset: Not Supported 00:25:24.425 Transport SGL Data Block: Not Supported 00:25:24.425 Replay Protected Memory Block: Not Supported 00:25:24.425 00:25:24.425 Firmware Slot Information 00:25:24.425 ========================= 00:25:24.425 Active slot: 1 00:25:24.425 Slot 1 Firmware Revision: 1.0 00:25:24.425 00:25:24.425 00:25:24.425 Commands 
Supported and Effects 00:25:24.425 ============================== 00:25:24.425 Admin Commands 00:25:24.425 -------------- 00:25:24.425 Delete I/O Submission Queue (00h): Supported 00:25:24.425 Create I/O Submission Queue (01h): Supported 00:25:24.425 Get Log Page (02h): Supported 00:25:24.425 Delete I/O Completion Queue (04h): Supported 00:25:24.425 Create I/O Completion Queue (05h): Supported 00:25:24.425 Identify (06h): Supported 00:25:24.425 Abort (08h): Supported 00:25:24.425 Set Features (09h): Supported 00:25:24.425 Get Features (0Ah): Supported 00:25:24.425 Asynchronous Event Request (0Ch): Supported 00:25:24.425 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:24.425 Directive Send (19h): Supported 00:25:24.425 Directive Receive (1Ah): Supported 00:25:24.425 Virtualization Management (1Ch): Supported 00:25:24.425 Doorbell Buffer Config (7Ch): Supported 00:25:24.425 Format NVM (80h): Supported LBA-Change 00:25:24.425 I/O Commands 00:25:24.425 ------------ 00:25:24.425 Flush (00h): Supported LBA-Change 00:25:24.425 Write (01h): Supported LBA-Change 00:25:24.425 Read (02h): Supported 00:25:24.425 Compare (05h): Supported 00:25:24.425 Write Zeroes (08h): Supported LBA-Change 00:25:24.425 Dataset Management (09h): Supported LBA-Change 00:25:24.425 Unknown (0Ch): Supported 00:25:24.425 Unknown (12h): Supported 00:25:24.425 Copy (19h): Supported LBA-Change 00:25:24.425 Unknown (1Dh): Supported LBA-Change 00:25:24.425 00:25:24.425 Error Log 00:25:24.425 ========= 00:25:24.425 00:25:24.425 Arbitration 00:25:24.425 =========== 00:25:24.425 Arbitration Burst: no limit 00:25:24.425 00:25:24.425 Power Management 00:25:24.425 ================ 00:25:24.425 Number of Power States: 1 00:25:24.425 Current Power State: Power State #0 00:25:24.425 Power State #0: 00:25:24.425 Max Power: 25.00 W 00:25:24.425 Non-Operational State: Operational 00:25:24.425 Entry Latency: 16 microseconds 00:25:24.425 Exit Latency: 4 microseconds 00:25:24.425 Relative Read Throughput: 0 00:25:24.425 Relative Read Latency: 0 00:25:24.425 Relative Write Throughput: 0 00:25:24.425 Relative Write Latency: 0 00:25:24.425 Idle Power: Not Reported 00:25:24.425 Active Power: Not Reported 00:25:24.425 Non-Operational Permissive Mode: Not Supported 00:25:24.425 00:25:24.425 Health Information 00:25:24.425 ================== 00:25:24.425 Critical Warnings: 00:25:24.425 Available Spare Space: OK 00:25:24.425 Temperature: OK 00:25:24.425 Device Reliability: OK 00:25:24.425 Read Only: No 00:25:24.426 Volatile Memory Backup: OK 00:25:24.426 Current Temperature: 323 Kelvin (50 Celsius) 00:25:24.426 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:24.426 Available Spare: 0% 00:25:24.426 Available Spare Threshold: 0% 00:25:24.426 Life Percentage Used: 0% 00:25:24.426 Data Units Read: 8174 00:25:24.426 Data Units Written: 3970 00:25:24.426 Host Read Commands: 373673 00:25:24.426 Host Write Commands: 201787 00:25:24.426 Controller Busy Time: 0 minutes 00:25:24.426 Power Cycles: 0 00:25:24.426 Power On Hours: 0 hours 00:25:24.426 Unsafe Shutdowns: 0 00:25:24.426 Unrecoverable Media Errors: 0 00:25:24.426 Lifetime Error Log Entries: 0 00:25:24.426 Warning Temperature Time: 0 minutes 00:25:24.426 Critical Temperature Time: 0 minutes 00:25:24.426 00:25:24.426 Number of Queues 00:25:24.426 ================ 00:25:24.426 Number of I/O Submission Queues: 64 00:25:24.426 Number of I/O Completion Queues: 64 00:25:24.426 00:25:24.426 ZNS Specific Controller Data 00:25:24.426 ============================ 00:25:24.426 Zone Append Size 
Limit: 0 00:25:24.426 00:25:24.426 00:25:24.426 Active Namespaces 00:25:24.426 ================= 00:25:24.426 Namespace ID:1 00:25:24.426 Error Recovery Timeout: Unlimited 00:25:24.426 Command Set Identifier: NVM (00h) 00:25:24.426 Deallocate: Supported 00:25:24.426 Deallocated/Unwritten Error: Supported 00:25:24.426 Deallocated Read Value: All 0x00 00:25:24.426 Deallocate in Write Zeroes: Not Supported 00:25:24.426 Deallocated Guard Field: 0xFFFF 00:25:24.426 Flush: Supported 00:25:24.426 Reservation: Not Supported 00:25:24.426 Namespace Sharing Capabilities: Private 00:25:24.426 Size (in LBAs): 1310720 (5GiB) 00:25:24.426 Capacity (in LBAs): 1310720 (5GiB) 00:25:24.426 Utilization (in LBAs): 1310720 (5GiB) 00:25:24.426 Thin Provisioning: Not Supported 00:25:24.426 Per-NS Atomic Units: No 00:25:24.426 Maximum Single Source Range Length: 128 00:25:24.426 Maximum Copy Length: 128 00:25:24.426 Maximum Source Range Count: 128 00:25:24.426 NGUID/EUI64 Never Reused: No 00:25:24.426 Namespace Write Protected: No 00:25:24.426 Number of LBA Formats: 8 00:25:24.426 Current LBA Format: LBA Format #04 00:25:24.426 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:24.426 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:24.426 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:24.426 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:24.426 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:24.426 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:24.426 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:24.426 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:24.426 00:25:24.426 ************************************ 00:25:24.426 END TEST nvme_identify 00:25:24.426 ************************************ 00:25:24.426 00:25:24.426 real 0m0.631s 00:25:24.426 user 0m0.229s 00:25:24.426 sys 0m0.324s 00:25:24.426 11:35:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:24.426 11:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:24.426 11:35:42 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:25:24.426 11:35:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:24.426 11:35:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:24.426 11:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:24.426 ************************************ 00:25:24.426 START TEST nvme_perf 00:25:24.426 ************************************ 00:25:24.426 11:35:42 -- common/autotest_common.sh@1114 -- # nvme_perf 00:25:24.426 11:35:42 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:25:25.805 Initializing NVMe Controllers 00:25:25.805 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:25:25.805 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:25:25.805 Initialization complete. Launching workers. 
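The perf run above was started with -q 128 (queue depth), -w read, -o 12288 (12 KiB I/O), -t 1 (one second) and -LL for the latency summary and histogram; its results follow below. One way to pull the headline numbers back out of a saved run, assuming the 'IOPS MiB/s Average min max' column order shown in this log; the binary path, log name and awk field math are this sketch's assumptions:

    #!/usr/bin/env bash
    # Re-run the workload traced above and extract the per-device summary row.
    PERF_BIN=./build/bin/spdk_nvme_perf      # illustrative path
    "$PERF_BIN" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N | tee perf.log

    # The summary row ends in five numbers: IOPS, MiB/s, average, min, max
    # (latencies in microseconds). Header lines ending in ":" are skipped
    # by the numeric test on the last field.
    awk '/NSID 1 from core 0:/ && $NF ~ /^[0-9.]+$/ {
        printf "iops=%s mib_s=%s avg_us=%s min_us=%s max_us=%s\n",
               $(NF-4), $(NF-3), $(NF-2), $(NF-1), $NF
    }' perf.log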
00:25:25.805 ======================================================== 00:25:25.805 Latency(us) 00:25:25.805 Device Information : IOPS MiB/s Average min max 00:25:25.805 PCIE (0000:00:06.0) NSID 1 from core 0: 58834.52 689.47 2176.97 1167.81 5689.52 00:25:25.805 ======================================================== 00:25:25.805 Total : 58834.52 689.47 2176.97 1167.81 5689.52 00:25:25.805 00:25:25.805 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:25:25.805 ================================================================================= 00:25:25.805 1.00000% : 1303.273us 00:25:25.805 10.00000% : 1496.902us 00:25:25.805 25.00000% : 1742.662us 00:25:25.805 50.00000% : 2159.709us 00:25:25.805 75.00000% : 2561.862us 00:25:25.805 90.00000% : 2829.964us 00:25:25.805 95.00000% : 3112.960us 00:25:25.805 98.00000% : 3381.062us 00:25:25.805 99.00000% : 3485.324us 00:25:25.805 99.50000% : 3574.691us 00:25:25.805 99.90000% : 4587.520us 00:25:25.805 99.99000% : 5570.560us 00:25:25.805 99.99900% : 5689.716us 00:25:25.805 99.99990% : 5689.716us 00:25:25.805 99.99999% : 5689.716us 00:25:25.805 00:25:25.805 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:25:25.805 ============================================================================== 00:25:25.805 Range in us Cumulative IO count 00:25:25.805 1161.775 - 1169.222: 0.0017% ( 1) 00:25:25.805 1169.222 - 1176.669: 0.0085% ( 4) 00:25:25.805 1176.669 - 1184.116: 0.0136% ( 3) 00:25:25.805 1184.116 - 1191.564: 0.0187% ( 3) 00:25:25.805 1191.564 - 1199.011: 0.0323% ( 8) 00:25:25.805 1199.011 - 1206.458: 0.0425% ( 6) 00:25:25.805 1206.458 - 1213.905: 0.0696% ( 16) 00:25:25.805 1213.905 - 1221.353: 0.0883% ( 11) 00:25:25.805 1221.353 - 1228.800: 0.1155% ( 16) 00:25:25.805 1228.800 - 1236.247: 0.1444% ( 17) 00:25:25.805 1236.247 - 1243.695: 0.1868% ( 25) 00:25:25.805 1243.695 - 1251.142: 0.2412% ( 32) 00:25:25.805 1251.142 - 1258.589: 0.2989% ( 34) 00:25:25.805 1258.589 - 1266.036: 0.3889% ( 53) 00:25:25.805 1266.036 - 1273.484: 0.4755% ( 51) 00:25:25.805 1273.484 - 1280.931: 0.5757% ( 59) 00:25:25.805 1280.931 - 1288.378: 0.7303% ( 91) 00:25:25.805 1288.378 - 1295.825: 0.8662% ( 80) 00:25:25.805 1295.825 - 1303.273: 1.0598% ( 114) 00:25:25.805 1303.273 - 1310.720: 1.2551% ( 115) 00:25:25.805 1310.720 - 1318.167: 1.4844% ( 135) 00:25:25.805 1318.167 - 1325.615: 1.6780% ( 114) 00:25:25.805 1325.615 - 1333.062: 1.9412% ( 155) 00:25:25.805 1333.062 - 1340.509: 2.1943% ( 149) 00:25:25.805 1340.509 - 1347.956: 2.4626% ( 158) 00:25:25.805 1347.956 - 1355.404: 2.7751% ( 184) 00:25:25.805 1355.404 - 1362.851: 3.0774% ( 178) 00:25:25.805 1362.851 - 1370.298: 3.3679% ( 171) 00:25:25.805 1370.298 - 1377.745: 3.7126% ( 203) 00:25:25.805 1377.745 - 1385.193: 4.0608% ( 205) 00:25:25.805 1385.193 - 1392.640: 4.4005% ( 200) 00:25:25.805 1392.640 - 1400.087: 4.7622% ( 213) 00:25:25.806 1400.087 - 1407.535: 5.1410% ( 223) 00:25:25.806 1407.535 - 1414.982: 5.5197% ( 223) 00:25:25.806 1414.982 - 1422.429: 5.9069% ( 228) 00:25:25.806 1422.429 - 1429.876: 6.3077% ( 236) 00:25:25.806 1429.876 - 1437.324: 6.7086% ( 236) 00:25:25.806 1437.324 - 1444.771: 7.1298% ( 248) 00:25:25.806 1444.771 - 1452.218: 7.5459% ( 245) 00:25:25.806 1452.218 - 1459.665: 7.9518% ( 239) 00:25:25.806 1459.665 - 1467.113: 8.3849% ( 255) 00:25:25.806 1467.113 - 1474.560: 8.8060% ( 248) 00:25:25.806 1474.560 - 1482.007: 9.2408% ( 256) 00:25:25.806 1482.007 - 1489.455: 9.6722% ( 254) 00:25:25.806 1489.455 - 1496.902: 10.1223% ( 265) 00:25:25.806 1496.902 - 1504.349: 10.5656% 
( 261) 00:25:25.806 1504.349 - 1511.796: 10.9969% ( 254) 00:25:25.806 1511.796 - 1519.244: 11.4725% ( 280) 00:25:25.806 1519.244 - 1526.691: 11.8886% ( 245) 00:25:25.806 1526.691 - 1534.138: 12.3709% ( 284) 00:25:25.806 1534.138 - 1541.585: 12.7853% ( 244) 00:25:25.806 1541.585 - 1549.033: 13.2507% ( 274) 00:25:25.806 1549.033 - 1556.480: 13.7075% ( 269) 00:25:25.806 1556.480 - 1563.927: 14.1542% ( 263) 00:25:25.806 1563.927 - 1571.375: 14.5992% ( 262) 00:25:25.806 1571.375 - 1578.822: 15.0476% ( 264) 00:25:25.806 1578.822 - 1586.269: 15.4908% ( 261) 00:25:25.806 1586.269 - 1593.716: 15.9732% ( 284) 00:25:25.806 1593.716 - 1601.164: 16.4096% ( 257) 00:25:25.806 1601.164 - 1608.611: 16.8818% ( 278) 00:25:25.806 1608.611 - 1616.058: 17.3183% ( 257) 00:25:25.806 1616.058 - 1623.505: 17.7955% ( 281) 00:25:25.806 1623.505 - 1630.953: 18.2541% ( 270) 00:25:25.806 1630.953 - 1638.400: 18.6838% ( 253) 00:25:25.806 1638.400 - 1645.847: 19.1610% ( 281) 00:25:25.806 1645.847 - 1653.295: 19.6060% ( 262) 00:25:25.806 1653.295 - 1660.742: 20.0560% ( 265) 00:25:25.806 1660.742 - 1668.189: 20.5214% ( 274) 00:25:25.806 1668.189 - 1675.636: 20.9647% ( 261) 00:25:25.806 1675.636 - 1683.084: 21.4300% ( 274) 00:25:25.806 1683.084 - 1690.531: 21.9073% ( 281) 00:25:25.806 1690.531 - 1697.978: 22.3454% ( 258) 00:25:25.806 1697.978 - 1705.425: 22.8142% ( 276) 00:25:25.806 1705.425 - 1712.873: 23.2609% ( 263) 00:25:25.806 1712.873 - 1720.320: 23.7245% ( 273) 00:25:25.806 1720.320 - 1727.767: 24.1950% ( 277) 00:25:25.806 1727.767 - 1735.215: 24.6281% ( 255) 00:25:25.806 1735.215 - 1742.662: 25.0798% ( 266) 00:25:25.806 1742.662 - 1750.109: 25.5435% ( 273) 00:25:25.806 1750.109 - 1757.556: 26.0105% ( 275) 00:25:25.806 1757.556 - 1765.004: 26.4657% ( 268) 00:25:25.806 1765.004 - 1772.451: 26.9327% ( 275) 00:25:25.806 1772.451 - 1779.898: 27.3879% ( 268) 00:25:25.806 1779.898 - 1787.345: 27.8448% ( 269) 00:25:25.806 1787.345 - 1794.793: 28.2914% ( 263) 00:25:25.806 1794.793 - 1802.240: 28.7636% ( 278) 00:25:25.806 1802.240 - 1809.687: 29.2035% ( 259) 00:25:25.806 1809.687 - 1817.135: 29.6773% ( 279) 00:25:25.806 1817.135 - 1824.582: 30.1223% ( 262) 00:25:25.806 1824.582 - 1832.029: 30.5520% ( 253) 00:25:25.806 1832.029 - 1839.476: 31.0343% ( 284) 00:25:25.806 1839.476 - 1846.924: 31.4827% ( 264) 00:25:25.806 1846.924 - 1854.371: 31.9327% ( 265) 00:25:25.806 1854.371 - 1861.818: 32.4270% ( 291) 00:25:25.806 1861.818 - 1869.265: 32.8312% ( 238) 00:25:25.806 1869.265 - 1876.713: 33.3084% ( 281) 00:25:25.806 1876.713 - 1884.160: 33.7619% ( 267) 00:25:25.806 1884.160 - 1891.607: 34.1882% ( 251) 00:25:25.806 1891.607 - 1899.055: 34.6501% ( 272) 00:25:25.806 1899.055 - 1906.502: 35.1053% ( 268) 00:25:25.806 1906.502 - 1921.396: 35.9986% ( 526) 00:25:25.806 1921.396 - 1936.291: 36.9243% ( 545) 00:25:25.806 1936.291 - 1951.185: 37.8057% ( 519) 00:25:25.806 1951.185 - 1966.080: 38.6906% ( 521) 00:25:25.806 1966.080 - 1980.975: 39.5873% ( 528) 00:25:25.806 1980.975 - 1995.869: 40.4959% ( 535) 00:25:25.806 1995.869 - 2010.764: 41.4012% ( 533) 00:25:25.806 2010.764 - 2025.658: 42.3149% ( 538) 00:25:25.806 2025.658 - 2040.553: 43.2490% ( 550) 00:25:25.806 2040.553 - 2055.447: 44.1661% ( 540) 00:25:25.806 2055.447 - 2070.342: 45.0645% ( 529) 00:25:25.806 2070.342 - 2085.236: 45.9969% ( 549) 00:25:25.806 2085.236 - 2100.131: 46.9073% ( 536) 00:25:25.806 2100.131 - 2115.025: 47.8363% ( 547) 00:25:25.806 2115.025 - 2129.920: 48.7432% ( 534) 00:25:25.806 2129.920 - 2144.815: 49.6671% ( 544) 00:25:25.806 2144.815 - 2159.709: 50.5639% ( 
528) 00:25:25.806 2159.709 - 2174.604: 51.4521% ( 523) 00:25:25.806 2174.604 - 2189.498: 52.3709% ( 541) 00:25:25.806 2189.498 - 2204.393: 53.2643% ( 526) 00:25:25.806 2204.393 - 2219.287: 54.1661% ( 531) 00:25:25.806 2219.287 - 2234.182: 55.0832% ( 540) 00:25:25.806 2234.182 - 2249.076: 55.9579% ( 515) 00:25:25.806 2249.076 - 2263.971: 56.8716% ( 538) 00:25:25.806 2263.971 - 2278.865: 57.7649% ( 526) 00:25:25.806 2278.865 - 2293.760: 58.6464% ( 519) 00:25:25.806 2293.760 - 2308.655: 59.5550% ( 535) 00:25:25.806 2308.655 - 2323.549: 60.4806% ( 545) 00:25:25.806 2323.549 - 2338.444: 61.3587% ( 517) 00:25:25.806 2338.444 - 2353.338: 62.2894% ( 548) 00:25:25.806 2353.338 - 2368.233: 63.2286% ( 553) 00:25:25.806 2368.233 - 2383.127: 64.1406% ( 537) 00:25:25.806 2383.127 - 2398.022: 65.0577% ( 540) 00:25:25.806 2398.022 - 2412.916: 65.9851% ( 546) 00:25:25.806 2412.916 - 2427.811: 66.9056% ( 542) 00:25:25.806 2427.811 - 2442.705: 67.8312% ( 545) 00:25:25.806 2442.705 - 2457.600: 68.7500% ( 541) 00:25:25.806 2457.600 - 2472.495: 69.6637% ( 538) 00:25:25.806 2472.495 - 2487.389: 70.5808% ( 540) 00:25:25.806 2487.389 - 2502.284: 71.5065% ( 545) 00:25:25.806 2502.284 - 2517.178: 72.4168% ( 536) 00:25:25.806 2517.178 - 2532.073: 73.3322% ( 539) 00:25:25.806 2532.073 - 2546.967: 74.2459% ( 538) 00:25:25.806 2546.967 - 2561.862: 75.1783% ( 549) 00:25:25.806 2561.862 - 2576.756: 76.0734% ( 527) 00:25:25.806 2576.756 - 2591.651: 76.9854% ( 537) 00:25:25.806 2591.651 - 2606.545: 77.9195% ( 550) 00:25:25.806 2606.545 - 2621.440: 78.8298% ( 536) 00:25:25.806 2621.440 - 2636.335: 79.7198% ( 524) 00:25:25.806 2636.335 - 2651.229: 80.6539% ( 550) 00:25:25.806 2651.229 - 2666.124: 81.5846% ( 548) 00:25:25.806 2666.124 - 2681.018: 82.4983% ( 538) 00:25:25.806 2681.018 - 2695.913: 83.3849% ( 522) 00:25:25.806 2695.913 - 2710.807: 84.2952% ( 536) 00:25:25.806 2710.807 - 2725.702: 85.1613% ( 510) 00:25:25.806 2725.702 - 2740.596: 86.0003% ( 494) 00:25:25.806 2740.596 - 2755.491: 86.7952% ( 468) 00:25:25.806 2755.491 - 2770.385: 87.5713% ( 457) 00:25:25.806 2770.385 - 2785.280: 88.2643% ( 408) 00:25:25.806 2785.280 - 2800.175: 88.9300% ( 392) 00:25:25.806 2800.175 - 2815.069: 89.5109% ( 342) 00:25:25.806 2815.069 - 2829.964: 90.0526% ( 319) 00:25:25.806 2829.964 - 2844.858: 90.5503% ( 293) 00:25:25.806 2844.858 - 2859.753: 90.9935% ( 261) 00:25:25.806 2859.753 - 2874.647: 91.3876% ( 232) 00:25:25.806 2874.647 - 2889.542: 91.7255% ( 199) 00:25:25.806 2889.542 - 2904.436: 92.0329% ( 181) 00:25:25.806 2904.436 - 2919.331: 92.3166% ( 167) 00:25:25.806 2919.331 - 2934.225: 92.5934% ( 163) 00:25:25.806 2934.225 - 2949.120: 92.8584% ( 156) 00:25:25.806 2949.120 - 2964.015: 93.1012% ( 143) 00:25:25.806 2964.015 - 2978.909: 93.3220% ( 130) 00:25:25.806 2978.909 - 2993.804: 93.5309% ( 123) 00:25:25.806 2993.804 - 3008.698: 93.7381% ( 122) 00:25:25.806 3008.698 - 3023.593: 93.9215% ( 108) 00:25:25.806 3023.593 - 3038.487: 94.1050% ( 108) 00:25:25.806 3038.487 - 3053.382: 94.2986% ( 114) 00:25:25.806 3053.382 - 3068.276: 94.4718% ( 102) 00:25:25.806 3068.276 - 3083.171: 94.6501% ( 105) 00:25:25.806 3083.171 - 3098.065: 94.8404% ( 112) 00:25:25.806 3098.065 - 3112.960: 95.0221% ( 107) 00:25:25.806 3112.960 - 3127.855: 95.2089% ( 110) 00:25:25.806 3127.855 - 3142.749: 95.3906% ( 107) 00:25:25.806 3142.749 - 3157.644: 95.5757% ( 109) 00:25:25.806 3157.644 - 3172.538: 95.7558% ( 106) 00:25:25.806 3172.538 - 3187.433: 95.9358% ( 106) 00:25:25.806 3187.433 - 3202.327: 96.1073% ( 101) 00:25:25.806 3202.327 - 3217.222: 96.2687% ( 95) 
00:25:25.806 3217.222 - 3232.116: 96.4470% ( 105) 00:25:25.806 3232.116 - 3247.011: 96.6202% ( 102) 00:25:25.806 3247.011 - 3261.905: 96.8037% ( 108) 00:25:25.806 3261.905 - 3276.800: 96.9803% ( 104) 00:25:25.806 3276.800 - 3291.695: 97.1569% ( 104) 00:25:25.806 3291.695 - 3306.589: 97.3336% ( 104) 00:25:25.806 3306.589 - 3321.484: 97.5119% ( 105) 00:25:25.806 3321.484 - 3336.378: 97.6800% ( 99) 00:25:25.806 3336.378 - 3351.273: 97.8414% ( 95) 00:25:25.806 3351.273 - 3366.167: 97.9976% ( 92) 00:25:25.806 3366.167 - 3381.062: 98.1539% ( 92) 00:25:25.806 3381.062 - 3395.956: 98.3118% ( 93) 00:25:25.806 3395.956 - 3410.851: 98.4664% ( 91) 00:25:25.806 3410.851 - 3425.745: 98.6090% ( 84) 00:25:25.806 3425.745 - 3440.640: 98.7432% ( 79) 00:25:25.806 3440.640 - 3455.535: 98.8638% ( 71) 00:25:25.806 3455.535 - 3470.429: 98.9742% ( 65) 00:25:25.806 3470.429 - 3485.324: 99.0829% ( 64) 00:25:25.806 3485.324 - 3500.218: 99.1814% ( 58) 00:25:25.806 3500.218 - 3515.113: 99.2714% ( 53) 00:25:25.806 3515.113 - 3530.007: 99.3529% ( 48) 00:25:25.806 3530.007 - 3544.902: 99.4175% ( 38) 00:25:25.806 3544.902 - 3559.796: 99.4735% ( 33) 00:25:25.806 3559.796 - 3574.691: 99.5143% ( 24) 00:25:25.806 3574.691 - 3589.585: 99.5516% ( 22) 00:25:25.806 3589.585 - 3604.480: 99.5839% ( 19) 00:25:25.806 3604.480 - 3619.375: 99.6077% ( 14) 00:25:25.806 3619.375 - 3634.269: 99.6230% ( 9) 00:25:25.806 3634.269 - 3649.164: 99.6399% ( 10) 00:25:25.806 3649.164 - 3664.058: 99.6501% ( 6) 00:25:25.806 3664.058 - 3678.953: 99.6586% ( 5) 00:25:25.806 3678.953 - 3693.847: 99.6654% ( 4) 00:25:25.807 3693.847 - 3708.742: 99.6688% ( 2) 00:25:25.807 3708.742 - 3723.636: 99.6756% ( 4) 00:25:25.807 3723.636 - 3738.531: 99.6807% ( 3) 00:25:25.807 3738.531 - 3753.425: 99.6875% ( 4) 00:25:25.807 3753.425 - 3768.320: 99.6926% ( 3) 00:25:25.807 3768.320 - 3783.215: 99.6994% ( 4) 00:25:25.807 3783.215 - 3798.109: 99.7045% ( 3) 00:25:25.807 3798.109 - 3813.004: 99.7113% ( 4) 00:25:25.807 3813.004 - 3842.793: 99.7249% ( 8) 00:25:25.807 3842.793 - 3872.582: 99.7334% ( 5) 00:25:25.807 3872.582 - 3902.371: 99.7452% ( 7) 00:25:25.807 3902.371 - 3932.160: 99.7588% ( 8) 00:25:25.807 3932.160 - 3961.949: 99.7690% ( 6) 00:25:25.807 3961.949 - 3991.738: 99.7758% ( 4) 00:25:25.807 3991.738 - 4021.527: 99.7843% ( 5) 00:25:25.807 4021.527 - 4051.316: 99.7911% ( 4) 00:25:25.807 4051.316 - 4081.105: 99.7996% ( 5) 00:25:25.807 4081.105 - 4110.895: 99.8098% ( 6) 00:25:25.807 4110.895 - 4140.684: 99.8183% ( 5) 00:25:25.807 4140.684 - 4170.473: 99.8268% ( 5) 00:25:25.807 4170.473 - 4200.262: 99.8336% ( 4) 00:25:25.807 4200.262 - 4230.051: 99.8421% ( 5) 00:25:25.807 4230.051 - 4259.840: 99.8471% ( 3) 00:25:25.807 4259.840 - 4289.629: 99.8539% ( 4) 00:25:25.807 4289.629 - 4319.418: 99.8607% ( 4) 00:25:25.807 4319.418 - 4349.207: 99.8658% ( 3) 00:25:25.807 4349.207 - 4378.996: 99.8709% ( 3) 00:25:25.807 4378.996 - 4408.785: 99.8777% ( 4) 00:25:25.807 4408.785 - 4438.575: 99.8828% ( 3) 00:25:25.807 4438.575 - 4468.364: 99.8896% ( 4) 00:25:25.807 4468.364 - 4498.153: 99.8930% ( 2) 00:25:25.807 4498.153 - 4527.942: 99.8964% ( 2) 00:25:25.807 4527.942 - 4557.731: 99.8998% ( 2) 00:25:25.807 4557.731 - 4587.520: 99.9032% ( 2) 00:25:25.807 4587.520 - 4617.309: 99.9049% ( 1) 00:25:25.807 4617.309 - 4647.098: 99.9083% ( 2) 00:25:25.807 4647.098 - 4676.887: 99.9100% ( 1) 00:25:25.807 4676.887 - 4706.676: 99.9134% ( 2) 00:25:25.807 4706.676 - 4736.465: 99.9168% ( 2) 00:25:25.807 4736.465 - 4766.255: 99.9185% ( 1) 00:25:25.807 4766.255 - 4796.044: 99.9219% ( 2) 00:25:25.807 
4796.044 - 4825.833: 99.9236% ( 1) 00:25:25.807 4825.833 - 4855.622: 99.9253% ( 1) 00:25:25.807 4855.622 - 4885.411: 99.9270% ( 1) 00:25:25.807 4885.411 - 4915.200: 99.9304% ( 2) 00:25:25.807 4915.200 - 4944.989: 99.9338% ( 2) 00:25:25.807 4944.989 - 4974.778: 99.9355% ( 1) 00:25:25.807 4974.778 - 5004.567: 99.9389% ( 2) 00:25:25.807 5004.567 - 5034.356: 99.9423% ( 2) 00:25:25.807 5034.356 - 5064.145: 99.9457% ( 2) 00:25:25.807 5064.145 - 5093.935: 99.9474% ( 1) 00:25:25.807 5093.935 - 5123.724: 99.9507% ( 2) 00:25:25.807 5123.724 - 5153.513: 99.9541% ( 2) 00:25:25.807 5153.513 - 5183.302: 99.9575% ( 2) 00:25:25.807 5213.091 - 5242.880: 99.9609% ( 2) 00:25:25.807 5242.880 - 5272.669: 99.9643% ( 2) 00:25:25.807 5272.669 - 5302.458: 99.9660% ( 1) 00:25:25.807 5302.458 - 5332.247: 99.9694% ( 2) 00:25:25.807 5332.247 - 5362.036: 99.9728% ( 2) 00:25:25.807 5362.036 - 5391.825: 99.9745% ( 1) 00:25:25.807 5391.825 - 5421.615: 99.9779% ( 2) 00:25:25.807 5421.615 - 5451.404: 99.9796% ( 1) 00:25:25.807 5451.404 - 5481.193: 99.9830% ( 2) 00:25:25.807 5481.193 - 5510.982: 99.9864% ( 2) 00:25:25.807 5510.982 - 5540.771: 99.9898% ( 2) 00:25:25.807 5540.771 - 5570.560: 99.9915% ( 1) 00:25:25.807 5570.560 - 5600.349: 99.9949% ( 2) 00:25:25.807 5600.349 - 5630.138: 99.9983% ( 2) 00:25:25.807 5659.927 - 5689.716: 100.0000% ( 1) 00:25:25.807 00:25:25.807 11:35:43 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:25:27.189 Initializing NVMe Controllers 00:25:27.189 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:25:27.189 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:25:27.189 Initialization complete. Launching workers. 00:25:27.189 ======================================================== 00:25:27.189 Latency(us) 00:25:27.189 Device Information : IOPS MiB/s Average min max 00:25:27.189 PCIE (0000:00:06.0) NSID 1 from core 0: 48250.86 565.44 2653.27 1364.15 7401.16 00:25:27.189 ======================================================== 00:25:27.189 Total : 48250.86 565.44 2653.27 1364.15 7401.16 00:25:27.189 00:25:27.189 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:25:27.189 ================================================================================= 00:25:27.189 1.00000% : 1742.662us 00:25:27.189 10.00000% : 1966.080us 00:25:27.189 25.00000% : 2189.498us 00:25:27.189 50.00000% : 2591.651us 00:25:27.189 75.00000% : 2993.804us 00:25:27.189 90.00000% : 3276.800us 00:25:27.189 95.00000% : 3470.429us 00:25:27.189 98.00000% : 4676.887us 00:25:27.189 99.00000% : 5719.505us 00:25:27.189 99.50000% : 6315.287us 00:25:27.189 99.90000% : 6881.280us 00:25:27.189 99.99000% : 7268.538us 00:25:27.189 99.99900% : 7417.484us 00:25:27.189 99.99990% : 7417.484us 00:25:27.189 99.99999% : 7417.484us 00:25:27.189 00:25:27.189 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:25:27.189 ============================================================================== 00:25:27.189 Range in us Cumulative IO count 00:25:27.189 1362.851 - 1370.298: 0.0021% ( 1) 00:25:27.189 1422.429 - 1429.876: 0.0041% ( 1) 00:25:27.189 1467.113 - 1474.560: 0.0062% ( 1) 00:25:27.189 1504.349 - 1511.796: 0.0124% ( 3) 00:25:27.189 1511.796 - 1519.244: 0.0145% ( 1) 00:25:27.189 1519.244 - 1526.691: 0.0166% ( 1) 00:25:27.189 1526.691 - 1534.138: 0.0249% ( 4) 00:25:27.189 1534.138 - 1541.585: 0.0311% ( 3) 00:25:27.189 1541.585 - 1549.033: 0.0394% ( 4) 00:25:27.189 1549.033 - 1556.480: 0.0456% ( 3) 00:25:27.189 1556.480 - 1563.927: 
00:25:27.189 [Latency histogram condensed for readability: the original dump listed one "lower - upper: cumulative% ( count)" bucket per line.]
00:25:27.189 [It picks up at 0.0477% cumulative below 1563.927 us; first full bucket shown: 1563.927 - 1571.375: 0.0580% ( 5).]
00:25:27.190 [The cumulative count passes 50% in the 2576.756 - 2591.651 us bucket (50.0528%) and 99% in the 5689.716 - 5719.505 us bucket (99.0176%).]
00:25:27.191 [Tail: 100.0000% is reached in the 7387.695 - 7417.484 us bucket.]
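[Editor's note] The latency histogram condensed above was produced by the harness's nvme_perf test. For a by-hand rerun against the same controller, a minimal sketch follows; the binary path and every flag are copied from spdk_nvme_perf invocations that appear verbatim later in this log, so treat this as mirroring the harness's usage rather than the exact command that produced these numbers.

    # -i 0    shared-memory group id, shared with the other SPDK processes in this run
    # -q 16   queue depth: 16 outstanding I/Os
    # -w read 100% read workload
    # -o 4096 4096-byte I/O size
    # -t 3    run time in seconds
    # -c 0x1  core mask: pin the worker to core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1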
00:25:27.191 ************************************
00:25:27.191 END TEST nvme_perf
00:25:27.191 ************************************
00:25:27.191 11:35:45 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:25:27.191 
00:25:27.191 real	0m2.577s
00:25:27.191 user	0m2.201s
00:25:27.191 sys	0m0.301s
00:25:27.191 11:35:45 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:27.191 11:35:45 -- common/autotest_common.sh@10 -- # set +x
00:25:27.191 11:35:45 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:25:27.191 11:35:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:25:27.191 11:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:27.191 11:35:45 -- common/autotest_common.sh@10 -- # set +x
00:25:27.191 ************************************
00:25:27.191 START TEST nvme_hello_world
00:25:27.191 ************************************
00:25:27.191 11:35:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:25:27.451 Initializing NVMe Controllers
00:25:27.451 Attached to 0000:00:06.0
00:25:27.451 Namespace ID: 1 size: 5GB
00:25:27.451 Initialization complete.
00:25:27.451 INFO: using host memory buffer for IO
00:25:27.451 Hello world!
00:25:27.451 
00:25:27.451 real	0m0.291s
00:25:27.451 user	0m0.108s
00:25:27.451 sys	0m0.145s
00:25:27.451 11:35:45 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:27.451 11:35:45 -- common/autotest_common.sh@10 -- # set +x
00:25:27.451 ************************************
00:25:27.451 END TEST nvme_hello_world
00:25:27.451 ************************************
00:25:27.451 11:35:45 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:25:27.451 11:35:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:27.451 11:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:27.451 11:35:45 -- common/autotest_common.sh@10 -- # set +x
00:25:27.451 ************************************
00:25:27.451 START TEST nvme_sgl
00:25:27.451 ************************************
00:25:27.451 11:35:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:25:27.711 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:25:27.711 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:25:27.711 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:25:27.711 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:25:27.711 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:25:27.711 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:25:27.711 NVMe Readv/Writev Request test
00:25:27.711 Attached to 0000:00:06.0
00:25:27.711 0000:00:06.0: build_io_request_2 test passed
00:25:27.711 0000:00:06.0: build_io_request_4 test passed
00:25:27.711 0000:00:06.0: build_io_request_5 test passed
00:25:27.711 0000:00:06.0: build_io_request_6 test passed
00:25:27.711 0000:00:06.0: build_io_request_7 test passed
00:25:27.711 0000:00:06.0: build_io_request_10 test passed
00:25:27.711 Cleaning up...
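[Editor's note] Before the banner below closes out nvme_sgl: every test in this log is bracketed by the same START/END banners and a real/user/sys triplet. That plumbing comes from the harness's run_test helper in autotest_common.sh, whose source is not part of this log; the sketch below reconstructs only the behavior visible here and is an assumption about its internals, not SPDK's actual implementation.

    run_test_sketch() {
        # print the banner pair and time the wrapped command, as seen throughout this log
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys lines after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # usage, matching an invocation from this log:
    # run_test_sketch nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl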
00:25:27.711 ************************************ 00:25:27.711 END TEST nvme_sgl 00:25:27.711 ************************************ 00:25:27.711 00:25:27.711 real 0m0.333s 00:25:27.711 user 0m0.145s 00:25:27.711 sys 0m0.138s 00:25:27.711 11:35:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:27.711 11:35:45 -- common/autotest_common.sh@10 -- # set +x 00:25:27.711 11:35:45 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:25:27.711 11:35:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:27.711 11:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:27.711 11:35:45 -- common/autotest_common.sh@10 -- # set +x 00:25:27.711 ************************************ 00:25:27.711 START TEST nvme_e2edp 00:25:27.711 ************************************ 00:25:27.711 11:35:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:25:27.971 NVMe Write/Read with End-to-End data protection test 00:25:27.971 Attached to 0000:00:06.0 00:25:27.971 Cleaning up... 00:25:27.971 00:25:27.971 real 0m0.289s 00:25:27.971 user 0m0.093s 00:25:27.971 sys 0m0.151s 00:25:27.971 11:35:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:27.971 11:35:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.971 ************************************ 00:25:27.971 END TEST nvme_e2edp 00:25:27.971 ************************************ 00:25:28.230 11:35:46 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:25:28.230 11:35:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:28.230 11:35:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.230 11:35:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.230 ************************************ 00:25:28.230 START TEST nvme_reserve 00:25:28.230 ************************************ 00:25:28.230 11:35:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:25:28.490 ===================================================== 00:25:28.490 NVMe Controller at PCI bus 0, device 6, function 0 00:25:28.490 ===================================================== 00:25:28.490 Reservations: Not Supported 00:25:28.490 Reservation test passed 00:25:28.490 ************************************ 00:25:28.490 END TEST nvme_reserve 00:25:28.490 ************************************ 00:25:28.490 00:25:28.490 real 0m0.295s 00:25:28.490 user 0m0.090s 00:25:28.490 sys 0m0.157s 00:25:28.490 11:35:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:28.490 11:35:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.490 11:35:46 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:25:28.490 11:35:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:28.490 11:35:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.490 11:35:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.490 ************************************ 00:25:28.490 START TEST nvme_err_injection 00:25:28.490 ************************************ 00:25:28.490 11:35:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:25:28.749 NVMe Error Injection test 00:25:28.749 Attached to 0000:00:06.0 00:25:28.749 0000:00:06.0: get features failed as expected 00:25:28.749 0000:00:06.0: get features successfully as expected 00:25:28.749 0000:00:06.0: 
read failed as expected
00:25:28.749 0000:00:06.0: read successfully as expected
00:25:28.749 Cleaning up...
00:25:28.749 ************************************
00:25:28.749 END TEST nvme_err_injection
00:25:28.749 ************************************
00:25:28.749 
00:25:28.749 real	0m0.298s
00:25:28.749 user	0m0.106s
00:25:28.749 sys	0m0.139s
00:25:28.749 11:35:46 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:28.749 11:35:46 -- common/autotest_common.sh@10 -- # set +x
00:25:28.749 11:35:46 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:25:28.749 11:35:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:25:28.749 11:35:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:28.749 11:35:46 -- common/autotest_common.sh@10 -- # set +x
00:25:28.749 ************************************
00:25:28.749 START TEST nvme_overhead
00:25:28.749 ************************************
00:25:28.749 11:35:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:25:30.130 Initializing NVMe Controllers
00:25:30.130 Attached to 0000:00:06.0
00:25:30.130 Initialization complete. Launching workers.
00:25:30.130 submit (in ns)   avg, min, max = 16844.7, 12720.9, 69570.9
00:25:30.130 complete (in ns) avg, min, max = 11630.7,  8313.6, 91802.7
00:25:30.130 
00:25:30.130 Submit histogram
00:25:30.130 ================
00:25:30.130 Range in us Cumulative Count
00:25:30.130 [Condensed: buckets run from 12.684 - 12.742 us (0.0117% cumulative) up to 69.353 - 69.818 us, where the count reaches 100.0000%.]
00:25:30.131 
00:25:30.131 Complete histogram
00:25:30.131 ==================
00:25:30.131 Range in us Cumulative Count
00:25:30.131 [Condensed: buckets run from 8.262 - 8.320 us (0.0117% cumulative) up to 91.695 - 92.160 us, where the count reaches 100.0000%.]
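[Editor's note] The two condensed histograms above come from the overhead microbenchmark started a few lines earlier. To rerun it by hand, the sketch below reuses the path and flags exactly as the run_test line passes them; the per-flag readings are inferences from the output above, not documentation.

    # -o 4096 : 4096-byte I/Os
    # -t 1    : one-second run (matches the ~1.2 s real time reported below)
    # -H      : passed by the harness; presumably requests the submit/complete histograms
    # -i 0    : shared-memory group id 0, as used by the other SPDK tools in this log
    /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0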
00:25:30.133 
00:25:30.133 ************************************
00:25:30.133 END TEST nvme_overhead
00:25:30.133 ************************************
00:25:30.133 
00:25:30.133 real	0m1.248s
00:25:30.133 user	0m1.071s
00:25:30.133 sys	0m0.132s
00:25:30.133 11:35:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:30.133 11:35:48 -- common/autotest_common.sh@10 -- # set +x
00:25:30.133 11:35:48 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:25:30.133 11:35:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:25:30.133 11:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:30.133 11:35:48 -- common/autotest_common.sh@10 -- # set +x
00:25:30.133 ************************************
00:25:30.133 START TEST nvme_arbitration
00:25:30.133 ************************************
00:25:30.133 11:35:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:25:33.416 Initializing NVMe Controllers
00:25:33.416 Attached to 0000:00:06.0
00:25:33.416 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:25:33.416 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:25:33.416 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:25:33.416 Associating QEMU NVMe Ctrl (12340 ) with lcore 3
00:25:33.416
/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:25:33.416 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:25:33.416 Initialization complete. Launching workers. 00:25:33.416 Starting thread on core 1 with urgent priority queue 00:25:33.416 Starting thread on core 2 with urgent priority queue 00:25:33.416 Starting thread on core 3 with urgent priority queue 00:25:33.416 Starting thread on core 0 with urgent priority queue 00:25:33.416 QEMU NVMe Ctrl (12340 ) core 0: 10069.67 IO/s 9.93 secs/100000 ios 00:25:33.416 QEMU NVMe Ctrl (12340 ) core 1: 9984.33 IO/s 10.02 secs/100000 ios 00:25:33.416 QEMU NVMe Ctrl (12340 ) core 2: 4513.00 IO/s 22.16 secs/100000 ios 00:25:33.416 QEMU NVMe Ctrl (12340 ) core 3: 4746.00 IO/s 21.07 secs/100000 ios 00:25:33.416 ======================================================== 00:25:33.416 00:25:33.416 ************************************ 00:25:33.416 END TEST nvme_arbitration 00:25:33.416 ************************************ 00:25:33.416 00:25:33.416 real 0m3.310s 00:25:33.416 user 0m9.092s 00:25:33.416 sys 0m0.191s 00:25:33.416 11:35:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:33.416 11:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:33.416 11:35:51 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:25:33.417 11:35:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:25:33.417 11:35:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.417 11:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:33.417 ************************************ 00:25:33.417 START TEST nvme_single_aen 00:25:33.417 ************************************ 00:25:33.417 11:35:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:25:33.417 [2024-11-26 11:35:51.655424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:33.417 [2024-11-26 11:35:51.655606] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.675 [2024-11-26 11:35:51.896434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:33.934 Asynchronous Event Request test 00:25:33.934 Attached to 0000:00:06.0 00:25:33.934 Reset controller to setup AER completions for this process 00:25:33.934 Registering asynchronous event callbacks... 00:25:33.935 Getting orig temperature thresholds of all controllers 00:25:33.935 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:33.935 Setting all controllers temperature threshold low to trigger AER 00:25:33.935 Waiting for all controllers temperature threshold to be set lower 00:25:33.935 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:33.935 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:25:33.935 Waiting for all controllers to trigger AER and reset threshold 00:25:33.935 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:33.935 Cleaning up... 
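[Editor's note] Before the banner below closes out nvme_single_aen: the Asynchronous Event Request output above comes from the aer test binary. A standalone-rerun sketch, with the path and flags copied from the run_test lines in this log; '-L log' follows the SPDK convention of enabling a named log flag, and '-m' is the extra flag the later nvme_multi_aen run adds, so the flag readings here are inferences.

    AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    "$AER" -T -i 0 -L log      # single-process AER test, as run above
    "$AER" -m -T -i 0 -L log   # multi-process variant, as in nvme_multi_aen further down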
00:25:33.935 ************************************ 00:25:33.935 END TEST nvme_single_aen 00:25:33.935 ************************************ 00:25:33.935 00:25:33.935 real 0m0.327s 00:25:33.935 user 0m0.129s 00:25:33.935 sys 0m0.155s 00:25:33.935 11:35:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:33.935 11:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:33.935 11:35:51 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:25:33.935 11:35:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:33.935 11:35:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.935 11:35:51 -- common/autotest_common.sh@10 -- # set +x 00:25:33.935 ************************************ 00:25:33.935 START TEST nvme_doorbell_aers 00:25:33.935 ************************************ 00:25:33.935 11:35:51 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:25:33.935 11:35:51 -- nvme/nvme.sh@70 -- # bdfs=() 00:25:33.935 11:35:51 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:25:33.935 11:35:51 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:25:33.935 11:35:51 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:25:33.935 11:35:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:33.935 11:35:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:33.935 11:35:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:33.935 11:35:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:33.935 11:35:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:33.935 11:35:52 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:25:33.935 11:35:52 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:25:33.935 11:35:52 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:25:33.935 11:35:52 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:25:34.194 [2024-11-26 11:35:52.274337] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102089) is not found. Dropping the request. 00:25:44.245 Executing: test_write_invalid_db 00:25:44.245 Waiting for AER completion... 00:25:44.245 Failure: test_write_invalid_db 00:25:44.245 00:25:44.245 Executing: test_invalid_db_write_overflow_sq 00:25:44.245 Waiting for AER completion... 00:25:44.245 Failure: test_invalid_db_write_overflow_sq 00:25:44.245 00:25:44.245 Executing: test_invalid_db_write_overflow_cq 00:25:44.245 Waiting for AER completion... 
00:25:44.245 Failure: test_invalid_db_write_overflow_cq 00:25:44.245 00:25:44.245 ************************************ 00:25:44.245 END TEST nvme_doorbell_aers 00:25:44.245 ************************************ 00:25:44.245 00:25:44.245 real 0m10.090s 00:25:44.245 user 0m8.653s 00:25:44.245 sys 0m1.382s 00:25:44.245 11:36:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:44.245 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:25:44.245 11:36:02 -- nvme/nvme.sh@97 -- # uname 00:25:44.245 11:36:02 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:25:44.245 11:36:02 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:25:44.245 11:36:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:25:44.245 11:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.245 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:25:44.245 ************************************ 00:25:44.245 START TEST nvme_multi_aen 00:25:44.245 ************************************ 00:25:44.245 11:36:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:25:44.245 [2024-11-26 11:36:02.168920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:44.245 [2024-11-26 11:36:02.169068] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.245 [2024-11-26 11:36:02.374691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:44.245 [2024-11-26 11:36:02.374762] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102089) is not found. Dropping the request. 00:25:44.245 [2024-11-26 11:36:02.374811] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102089) is not found. Dropping the request. 00:25:44.245 [2024-11-26 11:36:02.374831] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102089) is not found. Dropping the request. 00:25:44.245 [2024-11-26 11:36:02.388511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:44.245 Child process pid: 102263 00:25:44.245 [2024-11-26 11:36:02.388784] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.505 [Child] Asynchronous Event Request test 00:25:44.505 [Child] Attached to 0000:00:06.0 00:25:44.505 [Child] Registering asynchronous event callbacks... 00:25:44.505 [Child] Getting orig temperature thresholds of all controllers 00:25:44.505 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:44.505 [Child] Waiting for all controllers to trigger AER and reset threshold 00:25:44.505 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:44.505 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:44.505 [Child] Cleaning up... 00:25:44.505 Asynchronous Event Request test 00:25:44.505 Attached to 0000:00:06.0 00:25:44.505 Reset controller to setup AER completions for this process 00:25:44.505 Registering asynchronous event callbacks... 
00:25:44.505 Getting orig temperature thresholds of all controllers 00:25:44.505 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:44.505 Setting all controllers temperature threshold low to trigger AER 00:25:44.505 Waiting for all controllers temperature threshold to be set lower 00:25:44.505 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:44.505 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:25:44.505 Waiting for all controllers to trigger AER and reset threshold 00:25:44.505 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:44.505 Cleaning up... 00:25:44.505 ************************************ 00:25:44.505 END TEST nvme_multi_aen 00:25:44.505 ************************************ 00:25:44.505 00:25:44.505 real 0m0.583s 00:25:44.505 user 0m0.207s 00:25:44.505 sys 0m0.266s 00:25:44.505 11:36:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:44.505 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:25:44.765 11:36:02 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:25:44.765 11:36:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:44.765 11:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.765 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:25:44.765 ************************************ 00:25:44.765 START TEST nvme_startup 00:25:44.765 ************************************ 00:25:44.765 11:36:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:25:45.024 Initializing NVMe Controllers 00:25:45.024 Attached to 0000:00:06.0 00:25:45.024 Initialization complete. 00:25:45.024 Time used:211754.188 (us). 00:25:45.024 ************************************ 00:25:45.024 END TEST nvme_startup 00:25:45.024 ************************************ 00:25:45.024 00:25:45.024 real 0m0.281s 00:25:45.024 user 0m0.101s 00:25:45.024 sys 0m0.138s 00:25:45.024 11:36:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:45.024 11:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:45.024 11:36:03 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:25:45.024 11:36:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:45.024 11:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.024 11:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:45.024 ************************************ 00:25:45.024 START TEST nvme_multi_secondary 00:25:45.024 ************************************ 00:25:45.024 11:36:03 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:25:45.024 11:36:03 -- nvme/nvme.sh@52 -- # pid0=102314 00:25:45.024 11:36:03 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:25:45.024 11:36:03 -- nvme/nvme.sh@54 -- # pid1=102315 00:25:45.024 11:36:03 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:25:45.024 11:36:03 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:25:48.312 Initializing NVMe Controllers 00:25:48.312 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:25:48.312 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:25:48.312 Initialization complete. Launching workers. 
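[Editor's note] The latency tables below come from three spdk_nvme_perf processes sharing shared-memory group 0, launched concurrently by nvme_multi_secondary: one long-lived run plus two shorter ones on disjoint core masks. A sketch of that orchestration follows; the command lines are copied from this log, while the backgrounding order and the wait bookkeeping are inferred from the pid0/pid1 assignments and wait calls around them (the second round later in the log swaps which worker gets the 5-second run).

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # 5 s run on core 0 (pid0, e.g. 102314)
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!   # 3 s run on core 2 (pid1, e.g. 102315)
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2             # 3 s run on core 1, foreground
    wait "$pid0"                                               # cf. 'wait 102314' below
    wait "$pid1"                                               # cf. 'wait 102315' below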
00:25:48.312 ========================================================
00:25:48.312                            Latency(us)
00:25:48.312 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:48.312 PCIE (0000:00:06.0) NSID 1 from core 1 : 34678.04     135.46     461.06     148.52    1384.51
00:25:48.312 ========================================================
00:25:48.312 Total                                  : 34678.04     135.46     461.06     148.52    1384.51
00:25:48.312 
00:25:48.571 Initializing NVMe Controllers
00:25:48.571 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:25:48.571 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:25:48.571 Initialization complete. Launching workers.
00:25:48.571 ========================================================
00:25:48.571                            Latency(us)
00:25:48.571 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:48.571 PCIE (0000:00:06.0) NSID 1 from core 2 : 16009.35      62.54     997.78     155.23    8502.08
00:25:48.571 ========================================================
00:25:48.571 Total                                  : 16009.35      62.54     997.78     155.23    8502.08
00:25:48.571 
00:25:48.571 11:36:06 -- nvme/nvme.sh@56 -- # wait 102314
00:25:50.475 Initializing NVMe Controllers
00:25:50.475 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:25:50.475 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:25:50.476 Initialization complete. Launching workers.
00:25:50.476 ========================================================
00:25:50.476                            Latency(us)
00:25:50.476 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:50.476 PCIE (0000:00:06.0) NSID 1 from core 0 : 44167.19     172.53     361.91     104.55    1333.02
00:25:50.476 ========================================================
00:25:50.476 Total                                  : 44167.19     172.53     361.91     104.55    1333.02
00:25:50.476 
00:25:50.476 11:36:08 -- nvme/nvme.sh@57 -- # wait 102315
00:25:50.476 11:36:08 -- nvme/nvme.sh@61 -- # pid0=102384
00:25:50.476 11:36:08 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:25:50.476 11:36:08 -- nvme/nvme.sh@63 -- # pid1=102385
00:25:50.476 11:36:08 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:25:50.476 11:36:08 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:25:53.761 Initializing NVMe Controllers
00:25:53.761 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:25:53.761 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:25:53.761 Initialization complete. Launching workers.
00:25:53.761 ========================================================
00:25:53.761                            Latency(us)
00:25:53.761 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:53.761 PCIE (0000:00:06.0) NSID 1 from core 0 : 37049.10     144.72     431.49     130.58    1355.56
00:25:53.761 ========================================================
00:25:53.761 Total                                  : 37049.10     144.72     431.49     130.58    1355.56
00:25:53.761 
00:25:53.761 Initializing NVMe Controllers
00:25:53.761 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:25:53.761 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:25:53.761 Initialization complete. Launching workers.
00:25:53.761 ========================================================
00:25:53.761                            Latency(us)
00:25:53.761 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:53.761 PCIE (0000:00:06.0) NSID 1 from core 1 : 37168.80     145.19     430.14     123.60    1258.42
00:25:53.761 ========================================================
00:25:53.761 Total                                  : 37168.80     145.19     430.14     123.60    1258.42
00:25:53.761 
00:25:56.296 Initializing NVMe Controllers
00:25:56.296 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:25:56.296 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:25:56.296 Initialization complete. Launching workers.
00:25:56.296 ========================================================
00:25:56.296                            Latency(us)
00:25:56.296 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:56.296 PCIE (0000:00:06.0) NSID 1 from core 2 : 18552.45      72.47     861.83     149.03    8897.03
00:25:56.296 ========================================================
00:25:56.296 Total                                  : 18552.45      72.47     861.83     149.03    8897.03
00:25:56.296 
00:25:56.296 ************************************
00:25:56.296 END TEST nvme_multi_secondary
00:25:56.296 ************************************
00:25:56.296 11:36:14 -- nvme/nvme.sh@65 -- # wait 102384
00:25:56.296 11:36:14 -- nvme/nvme.sh@66 -- # wait 102385
00:25:56.296 
00:25:56.296 real	0m10.920s
00:25:56.296 user	0m18.544s
00:25:56.296 sys	0m0.970s
00:25:56.296 11:36:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:56.296 11:36:14 -- common/autotest_common.sh@10 -- # set +x
00:25:56.296 11:36:14 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:25:56.296 11:36:14 -- nvme/nvme.sh@102 -- # kill_stub
00:25:56.296 11:36:14 -- common/autotest_common.sh@1075 -- # [[ -e /proc/101723 ]]
00:25:56.296 11:36:14 -- common/autotest_common.sh@1076 -- # kill 101723
00:25:56.864 11:36:14 -- common/autotest_common.sh@1077 -- # wait 101723
00:25:56.864 [2024-11-26 11:36:14.929401] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102262) is not found. Dropping the request.
00:25:56.864 [2024-11-26 11:36:14.929534] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102262) is not found. Dropping the request.
00:25:56.864 [2024-11-26 11:36:14.929598] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102262) is not found. Dropping the request.
00:25:56.864 [2024-11-26 11:36:14.929633] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 102262) is not found. Dropping the request.
00:25:56.864 11:36:15 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:25:56.864 11:36:15 -- common/autotest_common.sh@1083 -- # echo 2
00:25:56.864 11:36:15 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:25:56.864 11:36:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:56.864 11:36:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:56.864 11:36:15 -- common/autotest_common.sh@10 -- # set +x
00:25:57.124 ************************************
00:25:57.124 START TEST bdev_nvme_reset_stuck_adm_cmd
00:25:57.124 ************************************
00:25:57.124 11:36:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:25:57.124 * Looking for test storage...
00:25:57.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:57.124 11:36:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:57.124 11:36:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:57.124 11:36:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:57.124 11:36:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:57.124 11:36:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:57.124 11:36:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:57.124 11:36:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:57.124 11:36:15 -- scripts/common.sh@335 -- # IFS=.-: 00:25:57.124 11:36:15 -- scripts/common.sh@335 -- # read -ra ver1 00:25:57.124 11:36:15 -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.124 11:36:15 -- scripts/common.sh@336 -- # read -ra ver2 00:25:57.124 11:36:15 -- scripts/common.sh@337 -- # local 'op=<' 00:25:57.124 11:36:15 -- scripts/common.sh@339 -- # ver1_l=2 00:25:57.124 11:36:15 -- scripts/common.sh@340 -- # ver2_l=1 00:25:57.124 11:36:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:57.124 11:36:15 -- scripts/common.sh@343 -- # case "$op" in 00:25:57.124 11:36:15 -- scripts/common.sh@344 -- # : 1 00:25:57.124 11:36:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:57.124 11:36:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:57.124 11:36:15 -- scripts/common.sh@364 -- # decimal 1 00:25:57.124 11:36:15 -- scripts/common.sh@352 -- # local d=1 00:25:57.124 11:36:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.124 11:36:15 -- scripts/common.sh@354 -- # echo 1 00:25:57.124 11:36:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:57.124 11:36:15 -- scripts/common.sh@365 -- # decimal 2 00:25:57.124 11:36:15 -- scripts/common.sh@352 -- # local d=2 00:25:57.124 11:36:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.124 11:36:15 -- scripts/common.sh@354 -- # echo 2 00:25:57.124 11:36:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:57.124 11:36:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:57.124 11:36:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:57.124 11:36:15 -- scripts/common.sh@367 -- # return 0 00:25:57.124 11:36:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.124 11:36:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.124 --rc genhtml_branch_coverage=1 00:25:57.124 --rc genhtml_function_coverage=1 00:25:57.124 --rc genhtml_legend=1 00:25:57.124 --rc geninfo_all_blocks=1 00:25:57.124 --rc geninfo_unexecuted_blocks=1 00:25:57.124 00:25:57.124 ' 00:25:57.124 11:36:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.124 --rc genhtml_branch_coverage=1 00:25:57.124 --rc genhtml_function_coverage=1 00:25:57.124 --rc genhtml_legend=1 00:25:57.124 --rc geninfo_all_blocks=1 00:25:57.124 --rc geninfo_unexecuted_blocks=1 00:25:57.124 00:25:57.124 ' 00:25:57.124 11:36:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.124 --rc genhtml_branch_coverage=1 00:25:57.124 --rc genhtml_function_coverage=1 00:25:57.124 --rc genhtml_legend=1 00:25:57.124 --rc geninfo_all_blocks=1 00:25:57.124 --rc geninfo_unexecuted_blocks=1 00:25:57.124 00:25:57.124 ' 00:25:57.124 11:36:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:57.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.124 --rc genhtml_branch_coverage=1 00:25:57.124 --rc genhtml_function_coverage=1 00:25:57.124 --rc genhtml_legend=1 00:25:57.124 --rc geninfo_all_blocks=1 00:25:57.124 --rc geninfo_unexecuted_blocks=1 00:25:57.124 00:25:57.124 ' 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:25:57.124 11:36:15 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:57.124 11:36:15 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:57.124 11:36:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:57.124 11:36:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:57.124 11:36:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:57.124 11:36:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:57.124 11:36:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:57.124 11:36:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:57.124 11:36:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:57.124 11:36:15 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:25:57.124 11:36:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:25:57.124 11:36:15 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=102538 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:57.124 11:36:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 102538 00:25:57.124 11:36:15 -- common/autotest_common.sh@829 -- # '[' -z 102538 ']' 00:25:57.124 11:36:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.124 11:36:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.124 11:36:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.124 11:36:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.124 11:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:57.124 [2024-11-26 11:36:15.354518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
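The get_first_nvme_bdf helper traced above reduces to a jq query over the JSON that gen_nvme.sh emits: each controller entry carries its PCI address in .params.traddr. A hedged condensation, using only the paths and checks visible in the trace:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1      # the (( 1 == 0 )) guard above, generalized
  bdf=${bdfs[0]}                       # here: 0000:00:06.0
  [[ -z $bdf ]] && exit 1              # nvme_reset_stuck_adm_cmd.sh@30's emptiness check

With the BDF in hand, the test starts spdk_tgt on four cores (-m 0xF) and waits for its RPC socket before attaching the controller.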
00:25:57.124 [2024-11-26 11:36:15.354696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102538 ] 00:25:57.383 [2024-11-26 11:36:15.545172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.383 [2024-11-26 11:36:15.590534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:57.383 [2024-11-26 11:36:15.591255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.383 [2024-11-26 11:36:15.591610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.383 [2024-11-26 11:36:15.591755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.383 [2024-11-26 11:36:15.591992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.320 11:36:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.320 11:36:16 -- common/autotest_common.sh@862 -- # return 0 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:25:58.320 11:36:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.320 11:36:16 -- common/autotest_common.sh@10 -- # set +x 00:25:58.320 nvme0n1 00:25:58.320 11:36:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_96NaH.txt 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:25:58.320 11:36:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.320 11:36:16 -- common/autotest_common.sh@10 -- # set +x 00:25:58.320 true 00:25:58.320 11:36:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732620976 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=102566 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:25:58.320 11:36:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:00.222 11:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.222 11:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:00.222 [2024-11-26 11:36:18.362790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:00.222 [2024-11-26 11:36:18.363594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:00.222 [2024-11-26 11:36:18.363754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:00.222 [2024-11-26 11:36:18.363875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.222 [2024-11-26 11:36:18.365939] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:00.222 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 102566 00:26:00.222 11:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 102566 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 102566 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.222 11:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.222 11:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:00.222 11:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_96NaH.txt 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_96NaH.txt 00:26:00.222 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 102538 00:26:00.222 11:36:18 -- common/autotest_common.sh@936 -- # '[' -z 102538 ']' 00:26:00.222 11:36:18 -- common/autotest_common.sh@940 -- # kill -0 102538 00:26:00.222 11:36:18 -- common/autotest_common.sh@941 -- # uname 00:26:00.222 
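The decode above is worth unpacking. The .cpl field pulled out of /tmp/err_inj_96NaH.txt is the raw 16-byte completion queue entry, base64-encoded; its status word lives in bytes 14-15 and here equals 0x0002. Bit 0 of that word is the phase tag, bits 1-8 the status code, bits 9-11 the status code type, so 0x0002 yields SC=0x1 and SCT=0x0, exactly the values planted by bdev_nvme_add_error_injection (--sct 0 --sc 1). A hedged re-creation of base64_decode_bits (the trace elides its arithmetic, so the bit math below is reconstructed from the NVMe CQE layout):

  cpl=AAAAAAAAAAAAAAAAAAACAA==                  # 16-byte CQE, base64
  bytes=($(printf '%s' "$cpl" | base64 -d | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( bytes[14] | (bytes[15] << 8) ))    # status word -> 0x0002
  sc=$((  (status >> 1) & 0xff ))               # bits 1-8  -> 0x1
  sct=$(( (status >> 9) & 0x7 ))                # bits 9-11 -> 0x0
  printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"

The checks logged just below then pass: the returned status matches the injection, and diff_time=2 stays under the 5-second test_timeout, so the stuck admin command really was completed by the controller reset rather than by the 15-second injection timeout expiring.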
11:36:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:00.222 11:36:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102538 00:26:00.482 killing process with pid 102538 00:26:00.482 11:36:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:00.482 11:36:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:00.482 11:36:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102538' 00:26:00.482 11:36:18 -- common/autotest_common.sh@955 -- # kill 102538 00:26:00.482 11:36:18 -- common/autotest_common.sh@960 -- # wait 102538 00:26:00.742 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:26:00.742 11:36:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:26:00.742 00:26:00.742 real 0m3.696s 00:26:00.742 user 0m12.966s 00:26:00.742 sys 0m0.585s 00:26:00.742 11:36:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:00.742 11:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:00.742 ************************************ 00:26:00.742 END TEST bdev_nvme_reset_stuck_adm_cmd 00:26:00.742 ************************************ 00:26:00.742 11:36:18 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:26:00.742 11:36:18 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:26:00.742 11:36:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:00.742 11:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:00.742 11:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:00.742 ************************************ 00:26:00.742 START TEST nvme_fio 00:26:00.742 ************************************ 00:26:00.742 11:36:18 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:26:00.742 11:36:18 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:00.742 11:36:18 -- nvme/nvme.sh@32 -- # ran_fio=false 00:26:00.742 11:36:18 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:26:00.742 11:36:18 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:00.742 11:36:18 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:00.742 11:36:18 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:00.742 11:36:18 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:00.742 11:36:18 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:00.742 11:36:18 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:00.742 11:36:18 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:26:00.742 11:36:18 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:26:00.742 11:36:18 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:26:00.742 11:36:18 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:26:00.742 11:36:18 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:00.742 11:36:18 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:26:01.000 11:36:19 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:01.000 11:36:19 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:26:01.258 11:36:19 -- nvme/nvme.sh@41 -- # bs=4096 00:26:01.258 11:36:19 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:26:01.258 
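The fio_plugin expansion that follows shows how the SPDK external ioengine gets loaded: the helper runs ldd on the plugin, and if it links an ASan runtime, that runtime generally must come first in LD_PRELOAD so the instrumented plugin can be dlopen'ed by a non-instrumented fio. A condensation using only the paths visible in the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /lib/x86_64-linux-gnu/libasan.so.8 here
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

Note the dots in traddr=0000.00.06.0: fio treats ':' as a filename separator, so the BDF's colons are rewritten before being handed over. The results below are self-consistent, too: about 13.3k IOPS at 4096-byte blocks is 13.3k x 4096 B/s, i.e. the 54.6 MB/s (52.0 MiB/s) on the READ line.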
11:36:19 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:26:01.258 11:36:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:01.258 11:36:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.258 11:36:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:01.258 11:36:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:01.258 11:36:19 -- common/autotest_common.sh@1330 -- # shift 00:26:01.258 11:36:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:01.258 11:36:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.258 11:36:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:01.258 11:36:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:01.258 11:36:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:01.258 11:36:19 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:26:01.258 11:36:19 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:26:01.258 11:36:19 -- common/autotest_common.sh@1336 -- # break 00:26:01.259 11:36:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:01.259 11:36:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:26:01.259 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:01.259 fio-3.35 00:26:01.259 Starting 1 thread 00:26:04.549 00:26:04.549 test: (groupid=0, jobs=1): err= 0: pid=102681: Tue Nov 26 11:36:22 2024 00:26:04.549 read: IOPS=13.3k, BW=52.0MiB/s (54.6MB/s)(104MiB/2001msec) 00:26:04.549 slat (nsec): min=3913, max=88741, avg=6863.73, stdev=4217.80 00:26:04.549 clat (usec): min=271, max=10631, avg=4785.29, stdev=547.36 00:26:04.549 lat (usec): min=276, max=10720, avg=4792.15, stdev=548.01 00:26:04.549 clat percentiles (usec): 00:26:04.549 | 1.00th=[ 3523], 5.00th=[ 4047], 10.00th=[ 4293], 20.00th=[ 4424], 00:26:04.549 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 00:26:04.549 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5473], 95.00th=[ 5669], 00:26:04.549 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 8848], 99.95th=[ 9503], 00:26:04.549 | 99.99th=[10421] 00:26:04.549 bw ( KiB/s): min=52976, max=55400, per=100.00%, avg=53890.67, stdev=1316.88, samples=3 00:26:04.549 iops : min=13244, max=13850, avg=13472.67, stdev=329.22, samples=3 00:26:04.549 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(104MiB/2001msec); 0 zone resets 00:26:04.549 slat (usec): min=3, max=111, avg= 7.12, stdev= 4.41 00:26:04.549 clat (usec): min=239, max=10390, avg=4798.63, stdev=542.45 00:26:04.549 lat (usec): min=244, max=10403, avg=4805.75, stdev=543.10 00:26:04.549 clat percentiles (usec): 00:26:04.549 | 1.00th=[ 3589], 5.00th=[ 4080], 10.00th=[ 4293], 20.00th=[ 4490], 00:26:04.549 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 00:26:04.549 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5473], 95.00th=[ 5669], 00:26:04.549 | 99.00th=[ 5997], 99.50th=[ 6915], 99.90th=[ 8979], 99.95th=[ 9503], 
00:26:04.549 | 99.99th=[10159] 00:26:04.549 bw ( KiB/s): min=53264, max=55096, per=100.00%, avg=53904.00, stdev=1033.24, samples=3 00:26:04.549 iops : min=13316, max=13774, avg=13476.00, stdev=258.31, samples=3 00:26:04.549 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:26:04.549 lat (msec) : 2=0.05%, 4=4.20%, 10=95.68%, 20=0.03% 00:26:04.549 cpu : usr=99.90%, sys=0.00%, ctx=20, majf=0, minf=629 00:26:04.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:04.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:04.549 issued rwts: total=26655,26631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:04.549 00:26:04.549 Run status group 0 (all jobs): 00:26:04.549 READ: bw=52.0MiB/s (54.6MB/s), 52.0MiB/s-52.0MiB/s (54.6MB/s-54.6MB/s), io=104MiB (109MB), run=2001-2001msec 00:26:04.549 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=104MiB (109MB), run=2001-2001msec 00:26:04.549 ----------------------------------------------------- 00:26:04.549 Suppressions used: 00:26:04.549 count bytes template 00:26:04.549 1 32 /usr/src/fio/parse.c 00:26:04.549 ----------------------------------------------------- 00:26:04.549 00:26:04.549 11:36:22 -- nvme/nvme.sh@44 -- # ran_fio=true 00:26:04.549 11:36:22 -- nvme/nvme.sh@46 -- # true 00:26:04.549 ************************************ 00:26:04.549 END TEST nvme_fio 00:26:04.549 ************************************ 00:26:04.549 00:26:04.549 real 0m3.760s 00:26:04.549 user 0m3.055s 00:26:04.549 sys 0m0.351s 00:26:04.549 11:36:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:04.549 11:36:22 -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 00:26:04.549 real 0m43.311s 00:26:04.549 user 1m55.628s 00:26:04.549 sys 0m8.020s 00:26:04.549 11:36:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:04.549 ************************************ 00:26:04.549 END TEST nvme 00:26:04.549 ************************************ 00:26:04.549 11:36:22 -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 11:36:22 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:26:04.549 11:36:22 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:26:04.549 11:36:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:04.549 11:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:04.549 11:36:22 -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 ************************************ 00:26:04.549 START TEST nvme_scc 00:26:04.549 ************************************ 00:26:04.549 11:36:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:26:04.549 * Looking for test storage... 
00:26:04.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:04.549 11:36:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:04.549 11:36:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:04.549 11:36:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:04.809 11:36:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:04.809 11:36:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:04.809 11:36:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:04.809 11:36:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:04.809 11:36:22 -- scripts/common.sh@335 -- # IFS=.-: 00:26:04.809 11:36:22 -- scripts/common.sh@335 -- # read -ra ver1 00:26:04.809 11:36:22 -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.809 11:36:22 -- scripts/common.sh@336 -- # read -ra ver2 00:26:04.809 11:36:22 -- scripts/common.sh@337 -- # local 'op=<' 00:26:04.809 11:36:22 -- scripts/common.sh@339 -- # ver1_l=2 00:26:04.809 11:36:22 -- scripts/common.sh@340 -- # ver2_l=1 00:26:04.809 11:36:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:04.809 11:36:22 -- scripts/common.sh@343 -- # case "$op" in 00:26:04.809 11:36:22 -- scripts/common.sh@344 -- # : 1 00:26:04.809 11:36:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:04.809 11:36:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.809 11:36:22 -- scripts/common.sh@364 -- # decimal 1 00:26:04.809 11:36:22 -- scripts/common.sh@352 -- # local d=1 00:26:04.809 11:36:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.809 11:36:22 -- scripts/common.sh@354 -- # echo 1 00:26:04.809 11:36:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:04.809 11:36:22 -- scripts/common.sh@365 -- # decimal 2 00:26:04.809 11:36:22 -- scripts/common.sh@352 -- # local d=2 00:26:04.809 11:36:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.809 11:36:22 -- scripts/common.sh@354 -- # echo 2 00:26:04.809 11:36:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:04.809 11:36:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:04.809 11:36:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:04.809 11:36:22 -- scripts/common.sh@367 -- # return 0 00:26:04.809 11:36:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.809 11:36:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:04.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.809 --rc genhtml_branch_coverage=1 00:26:04.809 --rc genhtml_function_coverage=1 00:26:04.809 --rc genhtml_legend=1 00:26:04.809 --rc geninfo_all_blocks=1 00:26:04.809 --rc geninfo_unexecuted_blocks=1 00:26:04.809 00:26:04.809 ' 00:26:04.809 11:36:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:04.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.809 --rc genhtml_branch_coverage=1 00:26:04.809 --rc genhtml_function_coverage=1 00:26:04.809 --rc genhtml_legend=1 00:26:04.809 --rc geninfo_all_blocks=1 00:26:04.809 --rc geninfo_unexecuted_blocks=1 00:26:04.809 00:26:04.809 ' 00:26:04.809 11:36:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:04.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.809 --rc genhtml_branch_coverage=1 00:26:04.809 --rc genhtml_function_coverage=1 00:26:04.809 --rc genhtml_legend=1 00:26:04.809 --rc geninfo_all_blocks=1 00:26:04.809 --rc geninfo_unexecuted_blocks=1 00:26:04.809 00:26:04.809 ' 00:26:04.809 11:36:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:04.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.809 --rc genhtml_branch_coverage=1 00:26:04.809 --rc genhtml_function_coverage=1 00:26:04.809 --rc genhtml_legend=1 00:26:04.809 --rc geninfo_all_blocks=1 00:26:04.809 --rc geninfo_unexecuted_blocks=1 00:26:04.809 00:26:04.809 ' 00:26:04.809 11:36:22 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:26:04.809 11:36:22 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:26:04.809 11:36:22 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:26:04.809 11:36:22 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:04.809 11:36:22 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:04.809 11:36:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.809 11:36:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.809 11:36:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.809 11:36:22 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.809 11:36:22 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.809 11:36:22 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.809 11:36:22 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.809 11:36:22 -- paths/export.sh@6 -- # export PATH 00:26:04.809 11:36:22 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:04.809 11:36:22 -- nvme/functions.sh@10 -- # ctrls=() 00:26:04.809 11:36:22 -- nvme/functions.sh@10 -- # declare -A ctrls 00:26:04.809 11:36:22 -- nvme/functions.sh@11 -- # nvmes=() 00:26:04.809 11:36:22 -- nvme/functions.sh@11 -- # declare -A nvmes 00:26:04.809 11:36:22 -- nvme/functions.sh@12 -- # bdfs=() 00:26:04.809 11:36:22 -- nvme/functions.sh@12 -- # declare -A bdfs 00:26:04.809 11:36:22 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:26:04.809 11:36:22 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:26:04.809 11:36:22 -- nvme/functions.sh@14 -- # nvme_name= 00:26:04.809 11:36:22 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:04.809 11:36:22 -- nvme/nvme_scc.sh@12 -- # uname 00:26:04.809 11:36:22 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:26:04.809 11:36:22 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:26:04.809 11:36:22 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:05.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:26:05.069 Waiting for block devices as requested 00:26:05.069 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.069 11:36:23 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:26:05.069 11:36:23 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:26:05.069 11:36:23 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:26:05.069 11:36:23 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:26:05.069 11:36:23 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:26:05.069 11:36:23 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:26:05.069 11:36:23 -- scripts/common.sh@15 -- # local i 00:26:05.069 11:36:23 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:05.069 11:36:23 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:05.069 11:36:23 -- scripts/common.sh@24 -- # return 0 00:26:05.069 11:36:23 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:26:05.069 11:36:23 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:26:05.069 11:36:23 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:26:05.069 11:36:23 -- nvme/functions.sh@18 -- # shift 00:26:05.069 11:36:23 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:26:05.069 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.069 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.069 11:36:23 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read 
-r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.331 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:26:05.331 11:36:23 -- 
nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.331 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.331 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- 
# read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 
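Each eval pair in this dump is one turn of the same loop: nvme_get runs the nvme-cli id-ctrl command (and, further down, id-ns against the namespace), splits every output line on the first ':' into a register name and value, and stores non-empty pairs in a global associative array named after the controller. A hedged condensation (nvme-cli path as in the trace; the whitespace trimming is approximated, the real helper leans on read and IFS):

  declare -gA nvme0=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}             # 'vid       ' -> 'vid'
      val=${val# }                         # drop the leading space
      [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

Afterwards the framework can ask questions like [[ -n ${nvme0[oncs]} ]] or read back ${nvme0[sn]} (here '12340 ') without re-running the tool.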
00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # 
nvme0[fwug]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.332 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.332 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.332 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 
'nvme0[anacap]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 
-- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.333 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:26:05.333 11:36:23 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:26:05.333 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- 
nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:26:05.334 11:36:23 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:26:05.334 11:36:23 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:26:05.334 11:36:23 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:26:05.334 11:36:23 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@18 -- # shift 00:26:05.334 11:36:23 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- 
# [[ -n 0x4 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 
-- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.334 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:26:05.334 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:26:05.334 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # 
nvme0n1[mssrl]=128 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 
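The register dump above is produced by one small loop: nvme/functions.sh pipes nvme-cli's "field : value" output through IFS-split reads and evals each pair into a bash associative array. A condensed sketch of that pattern (simplified for illustration; the real helper's shift/ref plumbing and whitespace trimming are omitted):

    # Condensed sketch of the nvme_get loop traced above. Assumes nvme-cli
    # prints "nsze : 0x140000"-style lines; splitting on ': ' here is a
    # simplification of the trace's IFS=: handling.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                    # e.g. declare -gA nvme0n1=()
        while IFS=': ' read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"         # nvme0n1[nsze]=0x140000, ...
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
    }
    # Usage: nvme_get_sketch nvme0n1 /dev/nvme0n1; afterwards ${nvme0n1[nsze]}
    # yields 0x140000, matching the assignments in the trace.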
00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:26:05.335 11:36:23 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # IFS=: 00:26:05.335 11:36:23 -- nvme/functions.sh@21 -- # read -r reg val 00:26:05.335 11:36:23 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:26:05.335 11:36:23 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:26:05.335 11:36:23 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:26:05.335 11:36:23 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:26:05.335 11:36:23 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:26:05.335 11:36:23 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:26:05.335 11:36:23 -- nvme/functions.sh@204 -- # 
_ctrls=($(get_ctrls_with_feature "$feature")) 00:26:05.335 11:36:23 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:26:05.335 11:36:23 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:26:05.335 11:36:23 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:26:05.335 11:36:23 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:26:05.335 11:36:23 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:26:05.335 11:36:23 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:26:05.335 11:36:23 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:26:05.335 11:36:23 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:26:05.335 11:36:23 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:26:05.335 11:36:23 -- nvme/functions.sh@76 -- # echo 0x15d 00:26:05.335 11:36:23 -- nvme/functions.sh@184 -- # oncs=0x15d 00:26:05.335 11:36:23 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:26:05.335 11:36:23 -- nvme/functions.sh@197 -- # echo nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:26:05.335 11:36:23 -- nvme/functions.sh@206 -- # echo nvme0 00:26:05.335 11:36:23 -- nvme/functions.sh@207 -- # return 0 00:26:05.335 11:36:23 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:26:05.335 11:36:23 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:26:05.335 11:36:23 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:05.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:26:05.854 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:06.422 11:36:24 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:26:06.422 11:36:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:06.422 11:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.422 11:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:06.422 ************************************ 00:26:06.422 START TEST nvme_simple_copy 00:26:06.422 ************************************ 00:26:06.422 11:36:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:26:06.682 Initializing NVMe Controllers 00:26:06.682 Attaching to 0000:00:06.0 00:26:06.682 Controller supports SCC. Attached to 0000:00:06.0 00:26:06.682 Namespace ID: 1 size: 5GB 00:26:06.682 Initialization complete. 
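Everything get_ctrls_with_feature needs from that dump is one register: ONCS came back as 0x15d, and bit 8 of ONCS advertises the Copy command, so nvme0 is selected as the SCC-capable controller. The test, reduced to its bit math:

    # The SCC capability test traced above: ONCS bit 8 is the Copy command,
    # and 0x15d (binary 101011101) has it set.
    ctrl_has_scc_sketch() {
        local oncs=$1
        (( oncs & 1 << 8 ))
    }
    ctrl_has_scc_sketch 0x15d && echo "nvme0 supports Simple Copy"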
00:26:06.682 00:26:06.682 Controller QEMU NVMe Ctrl (12340 ) 00:26:06.682 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:26:06.682 Namespace Block Size:4096 00:26:06.682 Writing LBAs 0 to 63 with Random Data 00:26:06.682 Copied LBAs from 0 - 63 to the Destination LBA 256 00:26:06.682 LBAs matching Written Data: 64 00:26:06.682 00:26:06.682 real 0m0.279s 00:26:06.682 user 0m0.104s 00:26:06.682 sys 0m0.076s 00:26:06.682 11:36:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:06.682 11:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:06.682 ************************************ 00:26:06.682 END TEST nvme_simple_copy 00:26:06.682 ************************************ 00:26:06.682 00:26:06.682 real 0m2.184s 00:26:06.682 user 0m0.673s 00:26:06.682 sys 0m1.455s 00:26:06.682 11:36:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:06.682 11:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:06.682 ************************************ 00:26:06.682 END TEST nvme_scc 00:26:06.682 ************************************ 00:26:06.682 11:36:24 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:26:06.682 11:36:24 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:26:06.682 11:36:24 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:26:06.682 11:36:24 -- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]] 00:26:06.682 11:36:24 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:26:06.682 11:36:24 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:26:06.682 11:36:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:06.682 11:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.682 11:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:06.682 ************************************ 00:26:06.682 START TEST nvme_rpc 00:26:06.682 ************************************ 00:26:06.682 11:36:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:26:06.941 * Looking for test storage... 00:26:06.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:06.941 11:36:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:06.941 11:36:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:06.941 11:36:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:06.941 11:36:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:06.941 11:36:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:06.941 11:36:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:06.942 11:36:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:06.942 11:36:25 -- scripts/common.sh@335 -- # IFS=.-: 00:26:06.942 11:36:25 -- scripts/common.sh@335 -- # read -ra ver1 00:26:06.942 11:36:25 -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.942 11:36:25 -- scripts/common.sh@336 -- # read -ra ver2 00:26:06.942 11:36:25 -- scripts/common.sh@337 -- # local 'op=<' 00:26:06.942 11:36:25 -- scripts/common.sh@339 -- # ver1_l=2 00:26:06.942 11:36:25 -- scripts/common.sh@340 -- # ver2_l=1 00:26:06.942 11:36:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:06.942 11:36:25 -- scripts/common.sh@343 -- # case "$op" in 00:26:06.942 11:36:25 -- scripts/common.sh@344 -- # : 1 00:26:06.942 11:36:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:06.942 11:36:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:06.942 11:36:25 -- scripts/common.sh@364 -- # decimal 1 00:26:06.942 11:36:25 -- scripts/common.sh@352 -- # local d=1 00:26:06.942 11:36:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.942 11:36:25 -- scripts/common.sh@354 -- # echo 1 00:26:06.942 11:36:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:06.942 11:36:25 -- scripts/common.sh@365 -- # decimal 2 00:26:06.942 11:36:25 -- scripts/common.sh@352 -- # local d=2 00:26:06.942 11:36:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.942 11:36:25 -- scripts/common.sh@354 -- # echo 2 00:26:06.942 11:36:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:06.942 11:36:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:06.942 11:36:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:06.942 11:36:25 -- scripts/common.sh@367 -- # return 0 00:26:06.942 11:36:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.942 11:36:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.942 --rc genhtml_branch_coverage=1 00:26:06.942 --rc genhtml_function_coverage=1 00:26:06.942 --rc genhtml_legend=1 00:26:06.942 --rc geninfo_all_blocks=1 00:26:06.942 --rc geninfo_unexecuted_blocks=1 00:26:06.942 00:26:06.942 ' 00:26:06.942 11:36:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.942 --rc genhtml_branch_coverage=1 00:26:06.942 --rc genhtml_function_coverage=1 00:26:06.942 --rc genhtml_legend=1 00:26:06.942 --rc geninfo_all_blocks=1 00:26:06.942 --rc geninfo_unexecuted_blocks=1 00:26:06.942 00:26:06.942 ' 00:26:06.942 11:36:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.942 --rc genhtml_branch_coverage=1 00:26:06.942 --rc genhtml_function_coverage=1 00:26:06.942 --rc genhtml_legend=1 00:26:06.942 --rc geninfo_all_blocks=1 00:26:06.942 --rc geninfo_unexecuted_blocks=1 00:26:06.942 00:26:06.942 ' 00:26:06.942 11:36:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.942 --rc genhtml_branch_coverage=1 00:26:06.942 --rc genhtml_function_coverage=1 00:26:06.942 --rc genhtml_legend=1 00:26:06.942 --rc geninfo_all_blocks=1 00:26:06.942 --rc geninfo_unexecuted_blocks=1 00:26:06.942 00:26:06.942 ' 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:26:06.942 11:36:25 -- common/autotest_common.sh@1519 -- # bdfs=() 00:26:06.942 11:36:25 -- common/autotest_common.sh@1519 -- # local bdfs 00:26:06.942 11:36:25 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:26:06.942 11:36:25 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:26:06.942 11:36:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:06.942 11:36:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:06.942 11:36:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:06.942 11:36:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:06.942 11:36:25 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:06.942 11:36:25 -- common/autotest_common.sh@1510 -- # (( 
1 == 0 )) 00:26:06.942 11:36:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:26:06.942 11:36:25 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:26:06.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=103129 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:26:06.942 11:36:25 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 103129 00:26:06.942 11:36:25 -- common/autotest_common.sh@829 -- # '[' -z 103129 ']' 00:26:06.942 11:36:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.942 11:36:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.942 11:36:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.942 11:36:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.942 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:26:07.201 [2024-11-26 11:36:25.184053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:07.201 [2024-11-26 11:36:25.184471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103129 ] 00:26:07.201 [2024-11-26 11:36:25.353249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:07.201 [2024-11-26 11:36:25.396621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:07.201 [2024-11-26 11:36:25.397358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.201 [2024-11-26 11:36:25.397429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.137 11:36:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.137 11:36:26 -- common/autotest_common.sh@862 -- # return 0 00:26:08.137 11:36:26 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:26:08.395 Nvme0n1 00:26:08.395 11:36:26 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:26:08.395 11:36:26 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:26:08.653 request: 00:26:08.653 { 00:26:08.653 "filename": "non_existing_file", 00:26:08.653 "bdev_name": "Nvme0n1", 00:26:08.653 "method": "bdev_nvme_apply_firmware", 00:26:08.653 "req_id": 1 00:26:08.653 } 00:26:08.653 Got JSON-RPC error response 00:26:08.653 response: 00:26:08.653 { 00:26:08.653 "code": -32603, 00:26:08.653 "message": "open file failed." 
00:26:08.653 } 00:26:08.653 11:36:26 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:26:08.653 11:36:26 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:26:08.653 11:36:26 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:26:08.653 11:36:26 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:26:08.653 11:36:26 -- nvme/nvme_rpc.sh@40 -- # killprocess 103129 00:26:08.653 11:36:26 -- common/autotest_common.sh@936 -- # '[' -z 103129 ']' 00:26:08.653 11:36:26 -- common/autotest_common.sh@940 -- # kill -0 103129 00:26:08.653 11:36:26 -- common/autotest_common.sh@941 -- # uname 00:26:08.653 11:36:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.653 11:36:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103129 00:26:08.912 killing process with pid 103129 00:26:08.912 11:36:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:08.912 11:36:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:08.912 11:36:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103129' 00:26:08.912 11:36:26 -- common/autotest_common.sh@955 -- # kill 103129 00:26:08.912 11:36:26 -- common/autotest_common.sh@960 -- # wait 103129 00:26:09.171 00:26:09.171 real 0m2.269s 00:26:09.171 user 0m4.503s 00:26:09.171 sys 0m0.552s 00:26:09.171 11:36:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:09.171 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 ************************************ 00:26:09.171 END TEST nvme_rpc 00:26:09.171 ************************************ 00:26:09.171 11:36:27 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:26:09.171 11:36:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:09.171 11:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.171 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 ************************************ 00:26:09.171 START TEST nvme_rpc_timeouts 00:26:09.171 ************************************ 00:26:09.171 11:36:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:26:09.171 * Looking for test storage... 
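Stepping back to the bdev_nvme_apply_firmware exchange above: it is a deliberate negative test. The RPC is pointed at a file that does not exist, the target answers with the -32603 "open file failed." response shown, and the script only proceeds because rv was set by the failure. Condensed:

    # Sketch of the negative-path firmware check traced above: rv stays empty
    # only if the RPC unexpectedly succeeds, and that is the failure case.
    rv=
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
    [ -z "$rv" ] && exit 1    # matches the trace's rv=1 and '[' -z 1 ']' guard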
00:26:09.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:09.171 11:36:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:09.171 11:36:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:09.171 11:36:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:09.171 11:36:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:09.171 11:36:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:09.171 11:36:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:09.171 11:36:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:09.171 11:36:27 -- scripts/common.sh@335 -- # IFS=.-: 00:26:09.171 11:36:27 -- scripts/common.sh@335 -- # read -ra ver1 00:26:09.171 11:36:27 -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.171 11:36:27 -- scripts/common.sh@336 -- # read -ra ver2 00:26:09.171 11:36:27 -- scripts/common.sh@337 -- # local 'op=<' 00:26:09.171 11:36:27 -- scripts/common.sh@339 -- # ver1_l=2 00:26:09.171 11:36:27 -- scripts/common.sh@340 -- # ver2_l=1 00:26:09.171 11:36:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:09.171 11:36:27 -- scripts/common.sh@343 -- # case "$op" in 00:26:09.171 11:36:27 -- scripts/common.sh@344 -- # : 1 00:26:09.171 11:36:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:09.171 11:36:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.171 11:36:27 -- scripts/common.sh@364 -- # decimal 1 00:26:09.171 11:36:27 -- scripts/common.sh@352 -- # local d=1 00:26:09.171 11:36:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.171 11:36:27 -- scripts/common.sh@354 -- # echo 1 00:26:09.171 11:36:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:09.171 11:36:27 -- scripts/common.sh@365 -- # decimal 2 00:26:09.171 11:36:27 -- scripts/common.sh@352 -- # local d=2 00:26:09.171 11:36:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.171 11:36:27 -- scripts/common.sh@354 -- # echo 2 00:26:09.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
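The lt/cmp_versions trace that opens this test (and reappears before every test in this log) is autotest_common.sh probing the installed lcov: 1.15 is compared component-wise against 2 after splitting on ".", "-", and ":". A self-contained version of that comparison (assumption: purely numeric components):

    # Sketch of the cmp_versions walk traced above; returns 0 when $1 < $2.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # 1 < 2, as the trace concludes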
00:26:09.171 11:36:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:09.171 11:36:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:09.171 11:36:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:09.171 11:36:27 -- scripts/common.sh@367 -- # return 0 00:26:09.171 11:36:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.171 11:36:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:09.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.171 --rc genhtml_branch_coverage=1 00:26:09.171 --rc genhtml_function_coverage=1 00:26:09.171 --rc genhtml_legend=1 00:26:09.171 --rc geninfo_all_blocks=1 00:26:09.171 --rc geninfo_unexecuted_blocks=1 00:26:09.171 00:26:09.171 ' 00:26:09.171 11:36:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:09.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.171 --rc genhtml_branch_coverage=1 00:26:09.171 --rc genhtml_function_coverage=1 00:26:09.171 --rc genhtml_legend=1 00:26:09.171 --rc geninfo_all_blocks=1 00:26:09.171 --rc geninfo_unexecuted_blocks=1 00:26:09.171 00:26:09.171 ' 00:26:09.171 11:36:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:09.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.171 --rc genhtml_branch_coverage=1 00:26:09.171 --rc genhtml_function_coverage=1 00:26:09.171 --rc genhtml_legend=1 00:26:09.171 --rc geninfo_all_blocks=1 00:26:09.171 --rc geninfo_unexecuted_blocks=1 00:26:09.171 00:26:09.171 ' 00:26:09.171 11:36:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:09.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.171 --rc genhtml_branch_coverage=1 00:26:09.171 --rc genhtml_function_coverage=1 00:26:09.171 --rc genhtml_legend=1 00:26:09.171 --rc geninfo_all_blocks=1 00:26:09.171 --rc geninfo_unexecuted_blocks=1 00:26:09.171 00:26:09.171 ' 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_103178 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_103178 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=103213 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 103213 00:26:09.171 11:36:27 -- common/autotest_common.sh@829 -- # '[' -z 103213 ']' 00:26:09.171 11:36:27 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:26:09.171 11:36:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.171 11:36:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.171 11:36:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.171 11:36:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.171 11:36:27 -- common/autotest_common.sh@10 -- # set +x 00:26:09.430 [2024-11-26 11:36:27.453228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:09.430 [2024-11-26 11:36:27.453592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103213 ] 00:26:09.430 [2024-11-26 11:36:27.616933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:09.430 [2024-11-26 11:36:27.647969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:09.430 [2024-11-26 11:36:27.648550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.430 [2024-11-26 11:36:27.648607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.366 11:36:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.366 11:36:28 -- common/autotest_common.sh@862 -- # return 0 00:26:10.366 11:36:28 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:26:10.366 Checking default timeout settings: 00:26:10.366 11:36:28 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:10.625 11:36:28 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:26:10.625 Making settings changes with rpc: 00:26:10.625 11:36:28 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:26:10.625 11:36:28 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:26:10.625 Check default vs. modified settings: 00:26:10.625 11:36:28 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:26:11.190 Setting action_on_timeout is changed as expected. 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
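Each knob in settings_to_check is verified the same way the action_on_timeout pass just below was: one save_config snapshot is taken before bdev_nvme_set_options and one after, a field is pulled from each with grep/awk and stripped to alphanumerics with sed, and the two values must differ. One iteration, condensed:

    # Sketch of one settings_to_check iteration traced below; the snapshot
    # files are the ones written by save_config in this run.
    setting=action_on_timeout
    before=$(grep "$setting" /tmp/settings_default_103178 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_103178 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" == "$after" ] && exit 1      # none -> abort is the expected change
    echo "Setting $setting is changed as expected."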
00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:26:11.190 Setting timeout_us is changed as expected. 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:26:11.190 Setting timeout_admin_us is changed as expected. 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_103178 /tmp/settings_modified_103178 00:26:11.190 11:36:29 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 103213 00:26:11.190 11:36:29 -- common/autotest_common.sh@936 -- # '[' -z 103213 ']' 00:26:11.190 11:36:29 -- common/autotest_common.sh@940 -- # kill -0 103213 00:26:11.190 11:36:29 -- common/autotest_common.sh@941 -- # uname 00:26:11.190 11:36:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:11.190 11:36:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103213 00:26:11.190 killing process with pid 103213 00:26:11.190 11:36:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:11.190 11:36:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:11.190 11:36:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103213' 00:26:11.190 11:36:29 -- common/autotest_common.sh@955 -- # kill 103213 00:26:11.190 11:36:29 -- common/autotest_common.sh@960 -- # wait 103213 00:26:11.449 RPC TIMEOUT SETTING TEST PASSED. 00:26:11.449 11:36:29 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
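Both RPC tests tear down through the same killprocess helper seen here: it checks that the pid is still alive, confirms via ps that it names an SPDK reactor rather than something like sudo, then kills and reaps it. Approximately:

    # Sketch of the killprocess sequence traced above (Linux branch only);
    # assumes the target is a child of this shell so wait can reap it.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                     # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1         # never signal sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }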
00:26:11.449 00:26:11.449 real 0m2.338s 00:26:11.449 user 0m4.766s 00:26:11.449 sys 0m0.486s 00:26:11.449 ************************************ 00:26:11.449 END TEST nvme_rpc_timeouts 00:26:11.449 ************************************ 00:26:11.449 11:36:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:11.449 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:26:11.449 11:36:29 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]] 00:26:11.449 11:36:29 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@255 -- # timing_exit lib 00:26:11.449 11:36:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:11.449 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:26:11.449 11:36:29 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:11.449 11:36:29 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:26:11.449 11:36:29 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:26:11.449 11:36:29 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:26:11.449 11:36:29 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]] 00:26:11.449 11:36:29 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:11.449 11:36:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:11.449 11:36:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:11.449 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:26:11.449 ************************************ 00:26:11.449 START TEST blockdev_raid5f 00:26:11.449 ************************************ 00:26:11.449 11:36:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:11.709 * Looking for test storage... 
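The chain of '[' 0 -eq 1 ']' gates above is autotest.sh skipping disabled suites until it reaches the enabled raid5f one, which it hands to run_test. The banner-and-timing wrapper that run_test applies looks roughly like this (a hypothetical minimal stand-in; the real helper also manages xtrace state and failure bookkeeping):

    # Hypothetical minimal run_test: print the START banner, time the command,
    # print the END banner. Matches the banners seen throughout this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test_sketch blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f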
00:26:11.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:11.709 11:36:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:11.709 11:36:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:11.709 11:36:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:11.709 11:36:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:11.709 11:36:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:11.709 11:36:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:11.709 11:36:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:11.709 11:36:29 -- scripts/common.sh@335 -- # IFS=.-: 00:26:11.709 11:36:29 -- scripts/common.sh@335 -- # read -ra ver1 00:26:11.709 11:36:29 -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.709 11:36:29 -- scripts/common.sh@336 -- # read -ra ver2 00:26:11.709 11:36:29 -- scripts/common.sh@337 -- # local 'op=<' 00:26:11.709 11:36:29 -- scripts/common.sh@339 -- # ver1_l=2 00:26:11.709 11:36:29 -- scripts/common.sh@340 -- # ver2_l=1 00:26:11.709 11:36:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:11.709 11:36:29 -- scripts/common.sh@343 -- # case "$op" in 00:26:11.709 11:36:29 -- scripts/common.sh@344 -- # : 1 00:26:11.709 11:36:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:11.709 11:36:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:11.709 11:36:29 -- scripts/common.sh@364 -- # decimal 1 00:26:11.709 11:36:29 -- scripts/common.sh@352 -- # local d=1 00:26:11.709 11:36:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.709 11:36:29 -- scripts/common.sh@354 -- # echo 1 00:26:11.709 11:36:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:11.709 11:36:29 -- scripts/common.sh@365 -- # decimal 2 00:26:11.709 11:36:29 -- scripts/common.sh@352 -- # local d=2 00:26:11.709 11:36:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.709 11:36:29 -- scripts/common.sh@354 -- # echo 2 00:26:11.709 11:36:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:11.709 11:36:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:11.709 11:36:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:11.709 11:36:29 -- scripts/common.sh@367 -- # return 0 00:26:11.709 11:36:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.709 11:36:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.709 --rc genhtml_branch_coverage=1 00:26:11.709 --rc genhtml_function_coverage=1 00:26:11.709 --rc genhtml_legend=1 00:26:11.709 --rc geninfo_all_blocks=1 00:26:11.709 --rc geninfo_unexecuted_blocks=1 00:26:11.709 00:26:11.709 ' 00:26:11.709 11:36:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.709 --rc genhtml_branch_coverage=1 00:26:11.709 --rc genhtml_function_coverage=1 00:26:11.709 --rc genhtml_legend=1 00:26:11.709 --rc geninfo_all_blocks=1 00:26:11.709 --rc geninfo_unexecuted_blocks=1 00:26:11.709 00:26:11.709 ' 00:26:11.709 11:36:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.709 --rc genhtml_branch_coverage=1 00:26:11.709 --rc genhtml_function_coverage=1 00:26:11.709 --rc genhtml_legend=1 00:26:11.709 --rc geninfo_all_blocks=1 00:26:11.709 --rc geninfo_unexecuted_blocks=1 00:26:11.709 00:26:11.709 ' 00:26:11.709 11:36:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.709 --rc genhtml_branch_coverage=1 00:26:11.709 --rc genhtml_function_coverage=1 00:26:11.709 --rc genhtml_legend=1 00:26:11.709 --rc geninfo_all_blocks=1 00:26:11.709 --rc geninfo_unexecuted_blocks=1 00:26:11.709 00:26:11.709 ' 00:26:11.709 11:36:29 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:11.709 11:36:29 -- bdev/nbd_common.sh@6 -- # set -e 00:26:11.709 11:36:29 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:11.709 11:36:29 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:11.709 11:36:29 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:11.709 11:36:29 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:11.709 11:36:29 -- bdev/blockdev.sh@18 -- # : 00:26:11.709 11:36:29 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:26:11.709 11:36:29 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:26:11.709 11:36:29 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:26:11.709 11:36:29 -- bdev/blockdev.sh@672 -- # uname -s 00:26:11.709 11:36:29 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:26:11.709 11:36:29 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:26:11.709 11:36:29 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:26:11.709 11:36:29 -- bdev/blockdev.sh@681 -- # crypto_device= 00:26:11.709 11:36:29 -- bdev/blockdev.sh@682 -- # dek= 00:26:11.709 11:36:29 -- bdev/blockdev.sh@683 -- # env_ctx= 00:26:11.709 11:36:29 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:26:11.709 11:36:29 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:26:11.709 11:36:29 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:26:11.709 11:36:29 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:26:11.709 11:36:29 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:26:11.709 11:36:29 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=103346 00:26:11.709 11:36:29 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:11.709 11:36:29 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:11.709 11:36:29 -- bdev/blockdev.sh@47 -- # waitforlisten 103346 00:26:11.709 11:36:29 -- common/autotest_common.sh@829 -- # '[' -z 103346 ']' 00:26:11.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.709 11:36:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.709 11:36:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:11.709 11:36:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.709 11:36:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:11.709 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:26:11.709 [2024-11-26 11:36:29.898719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
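Every daemon start in this log follows the same handshake blockdev.sh uses here: launch spdk_tgt in the background, install a trap that kills it on exit, and block in waitforlisten until the RPC socket answers. A sketch of that handshake (the rpc_get_methods poll is an assumption standing in for the real readiness check):

    # Sketch of the spdk_tgt start + waitforlisten handshake traced above;
    # the retry bound of 100 mirrors max_retries=100 in the trace.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    for (( i = 0; i < 100; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null && break
        sleep 0.5
    done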
00:26:11.709 [2024-11-26 11:36:29.898945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103346 ] 00:26:11.968 [2024-11-26 11:36:30.062947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.968 [2024-11-26 11:36:30.093824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.968 [2024-11-26 11:36:30.094129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.904 11:36:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.904 11:36:30 -- common/autotest_common.sh@862 -- # return 0 00:26:12.904 11:36:30 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:26:12.904 11:36:30 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:26:12.904 11:36:30 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:26:12.904 11:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.904 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 Malloc0 00:26:12.904 Malloc1 00:26:12.904 Malloc2 00:26:12.904 11:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.904 11:36:30 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:26:12.904 11:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.904 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 11:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.904 11:36:30 -- bdev/blockdev.sh@738 -- # cat 00:26:12.904 11:36:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:26:12.904 11:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.904 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 11:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.904 11:36:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:26:12.904 11:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.904 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 11:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.904 11:36:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:12.904 11:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.904 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 11:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.904 11:36:30 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:26:12.904 11:36:30 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:26:12.904 11:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.904 11:36:30 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 11:36:30 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:26:12.904 11:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.904 11:36:30 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:26:12.904 11:36:30 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b267ff14-878a-402e-9878-4828a496d72e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b267ff14-878a-402e-9878-4828a496d72e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b267ff14-878a-402e-9878-4828a496d72e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6928fb74-4002-4dfc-83ed-cad34a188e43",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "0ca03ac6-f5cc-4cfa-a2cb-c565429c4fad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "95e17609-038e-4d8a-a5d9-e92286a24d93",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:12.904 11:36:30 -- bdev/blockdev.sh@747 -- # jq -r .name 00:26:12.904 11:36:30 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:26:12.904 11:36:30 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:26:12.904 11:36:30 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:26:12.904 11:36:30 -- bdev/blockdev.sh@752 -- # killprocess 103346 00:26:12.904 11:36:30 -- common/autotest_common.sh@936 -- # '[' -z 103346 ']' 00:26:12.904 11:36:30 -- common/autotest_common.sh@940 -- # kill -0 103346 00:26:12.904 11:36:30 -- common/autotest_common.sh@941 -- # uname 00:26:12.904 11:36:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:12.904 11:36:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103346 00:26:12.904 killing process with pid 103346 00:26:12.904 11:36:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:12.904 11:36:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:12.904 11:36:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103346' 00:26:12.904 11:36:31 -- common/autotest_common.sh@955 -- # kill 103346 00:26:12.904 11:36:31 -- common/autotest_common.sh@960 -- # wait 103346 00:26:13.163 11:36:31 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:13.163 11:36:31 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:13.163 11:36:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:13.163 11:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.163 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:26:13.163 ************************************ 00:26:13.163 START TEST bdev_hello_world 00:26:13.163 ************************************ 00:26:13.163 11:36:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:13.163 [2024-11-26 11:36:31.384767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:13.163 [2024-11-26 11:36:31.384964] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103381 ] 00:26:13.422 [2024-11-26 11:36:31.547963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.422 [2024-11-26 11:36:31.583162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.682 [2024-11-26 11:36:31.746962] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:13.682 [2024-11-26 11:36:31.747037] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:26:13.682 [2024-11-26 11:36:31.747081] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:13.682 [2024-11-26 11:36:31.747664] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:13.682 [2024-11-26 11:36:31.747896] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:13.682 [2024-11-26 11:36:31.747968] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:13.682 [2024-11-26 11:36:31.748074] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:26:13.682 00:26:13.682 [2024-11-26 11:36:31.748110] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:13.941 00:26:13.941 real 0m0.594s 00:26:13.941 user 0m0.313s 00:26:13.941 sys 0m0.170s 00:26:13.941 11:36:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:13.941 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:26:13.941 ************************************ 00:26:13.941 END TEST bdev_hello_world 00:26:13.941 ************************************ 00:26:13.941 11:36:31 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:26:13.941 11:36:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:13.941 11:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.941 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:26:13.941 ************************************ 00:26:13.941 START TEST bdev_bounds 00:26:13.941 ************************************ 00:26:13.941 11:36:31 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:26:13.941 11:36:31 -- bdev/blockdev.sh@288 -- # bdevio_pid=103407 00:26:13.941 11:36:31 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:13.941 Process bdevio pid: 103407 00:26:13.941 11:36:31 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 103407' 00:26:13.941 11:36:31 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:13.941 11:36:31 -- bdev/blockdev.sh@291 -- # waitforlisten 103407 00:26:13.941 11:36:31 -- common/autotest_common.sh@829 -- # '[' -z 103407 ']' 00:26:13.941 11:36:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.941 11:36:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.941 11:36:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
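The bdevio process above is launched in the background, and the harness only proceeds once the app is answering RPC on its UNIX domain socket. A minimal sketch of that wait loop (an assumed simplification of autotest_common.sh's waitforlisten; max_retries=100 matches the trace, while $rootdir and the 0.1 s poll interval are assumptions):

waitforlisten() {
    # Poll until process $1 answers RPC on socket $2 (default /var/tmp/spdk.sock).
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 1; i <= 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # app exited before listening
        if "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # RPC server is up
        fi
        sleep 0.1
    done
    return 1                                       # gave up waiting
}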
00:26:13.941 11:36:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.941 11:36:31 -- common/autotest_common.sh@10 -- # set +x 00:26:13.941 [2024-11-26 11:36:32.036119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:13.941 [2024-11-26 11:36:32.036290] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103407 ] 00:26:14.200 [2024-11-26 11:36:32.201522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:14.200 [2024-11-26 11:36:32.234788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.200 [2024-11-26 11:36:32.234856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.200 [2024-11-26 11:36:32.234971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.813 11:36:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.813 11:36:33 -- common/autotest_common.sh@862 -- # return 0 00:26:14.813 11:36:33 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:15.109 I/O targets: 00:26:15.109 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:26:15.109 00:26:15.109 00:26:15.109 CUnit - A unit testing framework for C - Version 2.1-3 00:26:15.109 http://cunit.sourceforge.net/ 00:26:15.109 00:26:15.109 00:26:15.109 Suite: bdevio tests on: raid5f 00:26:15.109 Test: blockdev write read block ...passed 00:26:15.109 Test: blockdev write zeroes read block ...passed 00:26:15.109 Test: blockdev write zeroes read no split ...passed 00:26:15.109 Test: blockdev write zeroes read split ...passed 00:26:15.109 Test: blockdev write zeroes read split partial ...passed 00:26:15.109 Test: blockdev reset ...passed 00:26:15.109 Test: blockdev write read 8 blocks ...passed 00:26:15.109 Test: blockdev write read size > 128k ...passed 00:26:15.109 Test: blockdev write read invalid size ...passed 00:26:15.109 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:15.109 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:15.109 Test: blockdev write read max offset ...passed 00:26:15.109 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:15.109 Test: blockdev writev readv 8 blocks ...passed 00:26:15.109 Test: blockdev writev readv 30 x 1block ...passed 00:26:15.109 Test: blockdev writev readv block ...passed 00:26:15.109 Test: blockdev writev readv size > 128k ...passed 00:26:15.109 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:15.109 Test: blockdev comparev and writev ...passed 00:26:15.109 Test: blockdev nvme passthru rw ...passed 00:26:15.109 Test: blockdev nvme passthru vendor specific ...passed 00:26:15.109 Test: blockdev nvme admin passthru ...passed 00:26:15.109 Test: blockdev copy ...passed 00:26:15.109 00:26:15.109 Run Summary: Type Total Ran Passed Failed Inactive 00:26:15.109 suites 1 1 n/a 0 0 00:26:15.109 tests 23 23 23 0 0 00:26:15.109 asserts 130 130 130 0 n/a 00:26:15.109 00:26:15.109 Elapsed time = 0.292 seconds 00:26:15.109 0 00:26:15.109 11:36:33 -- bdev/blockdev.sh@293 -- # killprocess 103407 00:26:15.109 11:36:33 -- common/autotest_common.sh@936 -- # '[' -z 103407 ']' 00:26:15.109 11:36:33 -- common/autotest_common.sh@940 -- # kill -0 103407 00:26:15.109 11:36:33 -- common/autotest_common.sh@941 -- # uname 00:26:15.109 11:36:33 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:15.109 11:36:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103407 00:26:15.109 11:36:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:15.109 11:36:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:15.109 killing process with pid 103407 00:26:15.109 11:36:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103407' 00:26:15.109 11:36:33 -- common/autotest_common.sh@955 -- # kill 103407 00:26:15.109 11:36:33 -- common/autotest_common.sh@960 -- # wait 103407 00:26:15.369 11:36:33 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:26:15.369 00:26:15.369 real 0m1.514s 00:26:15.369 user 0m3.895s 00:26:15.369 sys 0m0.308s 00:26:15.369 11:36:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:15.369 ************************************ 00:26:15.369 END TEST bdev_bounds 00:26:15.369 ************************************ 00:26:15.369 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.369 11:36:33 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:15.369 11:36:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:15.369 11:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:15.369 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.369 ************************************ 00:26:15.369 START TEST bdev_nbd 00:26:15.369 ************************************ 00:26:15.369 11:36:33 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:15.369 11:36:33 -- bdev/blockdev.sh@298 -- # uname -s 00:26:15.369 11:36:33 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:26:15.369 11:36:33 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:15.369 11:36:33 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:15.369 11:36:33 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:26:15.369 11:36:33 -- bdev/blockdev.sh@302 -- # local bdev_all 00:26:15.369 11:36:33 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:26:15.369 11:36:33 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:26:15.369 11:36:33 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:15.369 11:36:33 -- bdev/blockdev.sh@309 -- # local nbd_all 00:26:15.369 11:36:33 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:26:15.369 11:36:33 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:26:15.369 11:36:33 -- bdev/blockdev.sh@312 -- # local nbd_list 00:26:15.369 11:36:33 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:26:15.369 11:36:33 -- bdev/blockdev.sh@313 -- # local bdev_list 00:26:15.369 11:36:33 -- bdev/blockdev.sh@316 -- # nbd_pid=103454 00:26:15.369 11:36:33 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:15.369 11:36:33 -- bdev/blockdev.sh@318 -- # waitforlisten 103454 /var/tmp/spdk-nbd.sock 00:26:15.369 11:36:33 -- common/autotest_common.sh@829 -- # '[' -z 103454 ']' 00:26:15.369 11:36:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:15.369 11:36:33 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:15.369 11:36:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:15.369 11:36:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:15.369 11:36:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.369 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:26:15.369 [2024-11-26 11:36:33.605860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:15.369 [2024-11-26 11:36:33.606083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.628 [2024-11-26 11:36:33.771783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.629 [2024-11-26 11:36:33.804641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.566 11:36:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.566 11:36:34 -- common/autotest_common.sh@862 -- # return 0 00:26:16.566 11:36:34 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@24 -- # local i 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:16.566 11:36:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:16.566 11:36:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:16.566 11:36:34 -- common/autotest_common.sh@867 -- # local i 00:26:16.566 11:36:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:16.566 11:36:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:16.566 11:36:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:16.566 11:36:34 -- common/autotest_common.sh@871 -- # break 00:26:16.566 11:36:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:16.566 11:36:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:16.566 11:36:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:16.566 1+0 records in 00:26:16.566 1+0 records out 00:26:16.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379894 s, 10.8 MB/s 00:26:16.825 11:36:34 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:16.825 11:36:34 -- common/autotest_common.sh@884 -- # size=4096 00:26:16.825 11:36:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:16.825 11:36:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:16.825 11:36:34 -- common/autotest_common.sh@887 -- # return 0 00:26:16.825 11:36:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:16.825 11:36:34 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:16.825 11:36:34 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:17.083 { 00:26:17.083 "nbd_device": "/dev/nbd0", 00:26:17.083 "bdev_name": "raid5f" 00:26:17.083 } 00:26:17.083 ]' 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:17.083 { 00:26:17.083 "nbd_device": "/dev/nbd0", 00:26:17.083 "bdev_name": "raid5f" 00:26:17.083 } 00:26:17.083 ]' 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@51 -- # local i 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@41 -- # break 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@45 -- # return 0 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.083 11:36:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@65 -- # true 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@65 -- # count=0 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@122 -- # count=0 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@127 -- # return 0 00:26:17.341 11:36:35 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:26:17.341 11:36:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@12 -- # local i 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:17.342 11:36:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:26:17.600 /dev/nbd0 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:17.600 11:36:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:17.600 11:36:35 -- common/autotest_common.sh@867 -- # local i 00:26:17.600 11:36:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:17.600 11:36:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:17.600 11:36:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:17.600 11:36:35 -- common/autotest_common.sh@871 -- # break 00:26:17.600 11:36:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:17.600 11:36:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:17.600 11:36:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:17.600 1+0 records in 00:26:17.600 1+0 records out 00:26:17.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412609 s, 9.9 MB/s 00:26:17.600 11:36:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:17.600 11:36:35 -- common/autotest_common.sh@884 -- # size=4096 00:26:17.600 11:36:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:17.600 11:36:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:17.600 11:36:35 -- common/autotest_common.sh@887 -- # return 0 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:17.600 11:36:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:17.858 { 00:26:17.858 "nbd_device": "/dev/nbd0", 00:26:17.858 "bdev_name": "raid5f" 00:26:17.858 } 00:26:17.858 ]' 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:17.858 { 00:26:17.858 "nbd_device": "/dev/nbd0", 00:26:17.858 "bdev_name": "raid5f" 00:26:17.858 } 
00:26:17.858 ]' 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@65 -- # count=1 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@66 -- # echo 1 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@95 -- # count=1 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:17.858 256+0 records in 00:26:17.858 256+0 records out 00:26:17.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011509 s, 91.1 MB/s 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:17.858 11:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:18.117 256+0 records in 00:26:18.117 256+0 records out 00:26:18.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.036533 s, 28.7 MB/s 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@51 -- # local i 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:18.117 11:36:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:26:18.376 11:36:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@41 -- # break 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@45 -- # return 0 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:18.376 11:36:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:18.634 11:36:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:18.634 11:36:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:18.634 11:36:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:18.634 11:36:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:18.634 11:36:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@65 -- # true 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@65 -- # count=0 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@104 -- # count=0 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@109 -- # return 0 00:26:18.635 11:36:36 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:18.635 malloc_lvol_verify 00:26:18.635 11:36:36 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:18.893 e0904477-ce48-4d9f-96a4-be9751427303 00:26:18.893 11:36:37 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:19.152 6223a215-6282-4b72-a827-082936fb7486 00:26:19.153 11:36:37 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:19.412 /dev/nbd0 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:19.412 mke2fs 1.47.0 (5-Feb-2023) 00:26:19.412 00:26:19.412 Filesystem too small for a journal 00:26:19.412 Discarding device blocks: 0/1024 done 00:26:19.412 Creating filesystem with 1024 4k blocks and 1024 inodes 00:26:19.412 00:26:19.412 Allocating group tables: 0/1 done 00:26:19.412 Writing inode tables: 0/1 done 00:26:19.412 Writing superblocks and filesystem accounting information: 0/1 done 00:26:19.412 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@51 -- # local i 00:26:19.412 11:36:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:26:19.412 11:36:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@41 -- # break 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@45 -- # return 0 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:19.671 11:36:37 -- bdev/nbd_common.sh@147 -- # return 0 00:26:19.671 11:36:37 -- bdev/blockdev.sh@324 -- # killprocess 103454 00:26:19.671 11:36:37 -- common/autotest_common.sh@936 -- # '[' -z 103454 ']' 00:26:19.671 11:36:37 -- common/autotest_common.sh@940 -- # kill -0 103454 00:26:19.671 11:36:37 -- common/autotest_common.sh@941 -- # uname 00:26:19.671 11:36:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.671 11:36:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103454 00:26:19.671 killing process with pid 103454 00:26:19.671 11:36:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:19.671 11:36:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:19.671 11:36:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103454' 00:26:19.671 11:36:37 -- common/autotest_common.sh@955 -- # kill 103454 00:26:19.671 11:36:37 -- common/autotest_common.sh@960 -- # wait 103454 00:26:19.931 ************************************ 00:26:19.931 END TEST bdev_nbd 00:26:19.931 ************************************ 00:26:19.931 11:36:38 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:26:19.931 00:26:19.931 real 0m4.478s 00:26:19.931 user 0m6.960s 00:26:19.931 sys 0m1.014s 00:26:19.931 11:36:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:19.931 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:26:19.931 11:36:38 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:26:19.931 11:36:38 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:26:19.931 11:36:38 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:26:19.931 11:36:38 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.931 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:26:19.931 ************************************ 00:26:19.931 START TEST bdev_fio 00:26:19.931 ************************************ 00:26:19.931 11:36:38 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:26:19.931 11:36:38 -- bdev/blockdev.sh@329 -- # local env_context 00:26:19.931 11:36:38 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:19.931 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:19.931 11:36:38 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:19.931 11:36:38 -- bdev/blockdev.sh@337 -- # echo '' 00:26:19.931 11:36:38 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:26:19.931 11:36:38 -- bdev/blockdev.sh@337 -- # env_context= 00:26:19.931 11:36:38 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:19.931 11:36:38 -- common/autotest_common.sh@1270 -- # local workload=verify 00:26:19.931 11:36:38 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:26:19.931 11:36:38 -- common/autotest_common.sh@1272 -- # local env_context= 00:26:19.931 11:36:38 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:26:19.931 11:36:38 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:19.931 11:36:38 -- common/autotest_common.sh@1290 -- # cat 00:26:19.931 11:36:38 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1303 -- # cat 00:26:19.931 11:36:38 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:26:19.931 11:36:38 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:19.931 11:36:38 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:26:19.931 11:36:38 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:26:19.931 11:36:38 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:26:19.931 11:36:38 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:26:19.931 11:36:38 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:19.931 11:36:38 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:19.931 11:36:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.931 11:36:38 -- common/autotest_common.sh@10 -- # set +x 00:26:19.931 ************************************ 00:26:19.931 START TEST bdev_fio_rw_verify 00:26:19.931 ************************************ 00:26:19.931 11:36:38 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:19.931 11:36:38 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:19.931 11:36:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:19.931 11:36:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 
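Note how the fio invocation below is wrapped: the harness runs ldd on the SPDK fio plugin, pulls out the sanitizer runtime it links against, and preloads both, since the sanitizer library must be loaded ahead of the ioengine plugin. A rough condensation of the fio_bdev pattern visible in the trace (the real helper also checks libclang_rt.asan; $rootdir is an assumption):

fio_bdev() {
    local plugin=$rootdir/build/fio/spdk_bdev    # SPDK bdev ioengine for fio
    local asan_lib
    # Same ldd | grep | awk pipeline the trace runs at common.sh@1334.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev "$@"
}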
00:26:19.931 11:36:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:19.931 11:36:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.931 11:36:38 -- common/autotest_common.sh@1330 -- # shift 00:26:19.931 11:36:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:19.931 11:36:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.931 11:36:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.931 11:36:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:19.931 11:36:38 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:26:19.931 11:36:38 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:26:19.931 11:36:38 -- common/autotest_common.sh@1336 -- # break 00:26:19.931 11:36:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:19.931 11:36:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:20.190 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:20.190 fio-3.35 00:26:20.190 Starting 1 thread 00:26:32.394 00:26:32.394 job_raid5f: (groupid=0, jobs=1): err= 0: pid=103657: Tue Nov 26 11:36:48 2024 00:26:32.394 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(440MiB/10001msec) 00:26:32.394 slat (usec): min=18, max=126, avg=21.41, stdev= 4.68 00:26:32.394 clat (usec): min=10, max=431, avg=139.53, stdev=53.11 00:26:32.394 lat (usec): min=30, max=455, avg=160.94, stdev=54.08 00:26:32.394 clat percentiles (usec): 00:26:32.394 | 50.000th=[ 139], 99.000th=[ 265], 99.900th=[ 330], 99.990th=[ 347], 00:26:32.394 | 99.999th=[ 371] 00:26:32.394 write: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(456MiB/9884msec); 0 zone resets 00:26:32.394 slat (usec): min=9, max=241, avg=18.86, stdev= 4.96 00:26:32.394 clat (usec): min=57, max=880, avg=320.14, stdev=53.10 00:26:32.394 lat (usec): min=73, max=1061, avg=338.99, stdev=54.90 00:26:32.394 clat percentiles (usec): 00:26:32.394 | 50.000th=[ 318], 99.000th=[ 490], 99.900th=[ 611], 99.990th=[ 799], 00:26:32.394 | 99.999th=[ 881] 00:26:32.394 bw ( KiB/s): min=42160, max=50688, per=98.81%, avg=46684.21, stdev=2281.60, samples=19 00:26:32.394 iops : min=10540, max=12672, avg=11671.05, stdev=570.40, samples=19 00:26:32.394 lat (usec) : 20=0.01%, 50=0.01%, 100=14.66%, 250=38.15%, 500=46.74% 00:26:32.394 lat (usec) : 750=0.43%, 1000=0.01% 00:26:32.394 cpu : usr=99.47%, sys=0.52%, ctx=43, majf=0, minf=12440 00:26:32.394 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.394 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.394 issued rwts: total=112652,116749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.394 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.394 00:26:32.394 Run status group 0 (all jobs): 00:26:32.394 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s 
(46.1MB/s-46.1MB/s), io=440MiB (461MB), run=10001-10001msec 00:26:32.394 WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=456MiB (478MB), run=9884-9884msec 00:26:32.394 ----------------------------------------------------- 00:26:32.394 Suppressions used: 00:26:32.394 count bytes template 00:26:32.394 1 7 /usr/src/fio/parse.c 00:26:32.394 587 56352 /usr/src/fio/iolog.c 00:26:32.394 1 904 libcrypto.so 00:26:32.394 ----------------------------------------------------- 00:26:32.394 00:26:32.394 00:26:32.394 real 0m11.029s 00:26:32.394 user 0m11.693s 00:26:32.394 sys 0m0.598s 00:26:32.394 11:36:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:32.394 11:36:49 -- common/autotest_common.sh@10 -- # set +x 00:26:32.394 ************************************ 00:26:32.394 END TEST bdev_fio_rw_verify 00:26:32.394 ************************************ 00:26:32.395 11:36:49 -- bdev/blockdev.sh@348 -- # rm -f 00:26:32.395 11:36:49 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:32.395 11:36:49 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:32.395 11:36:49 -- common/autotest_common.sh@1270 -- # local workload=trim 00:26:32.395 11:36:49 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:26:32.395 11:36:49 -- common/autotest_common.sh@1272 -- # local env_context= 00:26:32.395 11:36:49 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:26:32.395 11:36:49 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:32.395 11:36:49 -- common/autotest_common.sh@1290 -- # cat 00:26:32.395 11:36:49 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:26:32.395 11:36:49 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b267ff14-878a-402e-9878-4828a496d72e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b267ff14-878a-402e-9878-4828a496d72e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b267ff14-878a-402e-9878-4828a496d72e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6928fb74-4002-4dfc-83ed-cad34a188e43",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"0ca03ac6-f5cc-4cfa-a2cb-c565429c4fad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "95e17609-038e-4d8a-a5d9-e92286a24d93",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:32.395 11:36:49 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:32.395 11:36:49 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:26:32.395 11:36:49 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:32.395 /home/vagrant/spdk_repo/spdk 00:26:32.395 11:36:49 -- bdev/blockdev.sh@360 -- # popd 00:26:32.395 11:36:49 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:26:32.395 11:36:49 -- bdev/blockdev.sh@362 -- # return 0 00:26:32.395 00:26:32.395 real 0m11.158s 00:26:32.395 user 0m11.745s 00:26:32.395 sys 0m0.678s 00:26:32.395 11:36:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:32.395 ************************************ 00:26:32.395 END TEST bdev_fio 00:26:32.395 ************************************ 00:26:32.395 11:36:49 -- common/autotest_common.sh@10 -- # set +x 00:26:32.395 11:36:49 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:32.395 11:36:49 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:26:32.395 11:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:32.395 11:36:49 -- common/autotest_common.sh@10 -- # set +x 00:26:32.395 ************************************ 00:26:32.395 START TEST bdev_verify 00:26:32.395 ************************************ 00:26:32.395 11:36:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:32.395 [2024-11-26 11:36:49.326399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:32.395 [2024-11-26 11:36:49.326547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103806 ] 00:26:32.395 [2024-11-26 11:36:49.475151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:32.395 [2024-11-26 11:36:49.508312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.395 [2024-11-26 11:36:49.508393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.395 Running I/O for 5 seconds... 
00:26:36.587
00:26:36.587 Latency(us)
00:26:36.587 [2024-11-26T11:36:54.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:36.587 [2024-11-26T11:36:54.817Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:36.587 Verification LBA range: start 0x0 length 0x2000
00:26:36.587 raid5f : 5.01 12321.07 48.13 0.00 0.00 16464.12 294.17 13226.36
00:26:36.587 [2024-11-26T11:36:54.817Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:26:36.587 Verification LBA range: start 0x2000 length 0x2000
00:26:36.587 raid5f : 5.01 12254.04 47.87 0.00 0.00 16548.55 325.82 18707.55
00:26:36.587 [2024-11-26T11:36:54.817Z] ===================================================================================================================
00:26:36.587 [2024-11-26T11:36:54.817Z] Total : 24575.11 96.00 0.00 0.00 16506.22 294.17 18707.55
00:26:36.846
00:26:36.846 real 0m5.590s
00:26:36.846 user 0m10.575s
00:26:36.846 sys 0m0.172s
00:26:36.846 11:36:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:36.846 11:36:54 -- common/autotest_common.sh@10 -- # set +x
00:26:36.846 ************************************
00:26:36.846 END TEST bdev_verify
00:26:36.846 ************************************
00:26:36.846 11:36:54 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:26:36.846 11:36:54 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:26:36.846 11:36:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:36.846 11:36:54 -- common/autotest_common.sh@10 -- # set +x
00:26:36.846 ************************************
00:26:36.846 START TEST bdev_verify_big_io
00:26:36.846 ************************************
00:26:36.846 11:36:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:26:36.846 [2024-11-26 11:36:54.967510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-26 11:36:54.967690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103882 ]
00:26:37.106 [2024-11-26 11:36:55.115332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:37.106 [2024-11-26 11:36:55.148084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:37.106 [2024-11-26 11:36:55.148163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:37.106 Running I/O for 5 seconds...
00:26:42.378
00:26:42.378 Latency(us)
00:26:42.378 [2024-11-26T11:37:00.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.378 [2024-11-26T11:37:00.608Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:42.378 Verification LBA range: start 0x0 length 0x200
00:26:42.378 raid5f : 5.12 856.89 53.56 0.00 0.00 3903484.97 132.19 120109.61
00:26:42.378 [2024-11-26T11:37:00.608Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:42.378 Verification LBA range: start 0x200 length 0x200
00:26:42.378 raid5f : 5.12 859.40 53.71 0.00 0.00 3890098.47 117.76 119156.36
00:26:42.378 [2024-11-26T11:37:00.608Z] ===================================================================================================================
00:26:42.378 [2024-11-26T11:37:00.608Z] Total : 1716.29 107.27 0.00 0.00 3896781.81 117.76 120109.61
00:26:42.637
00:26:42.637 real 0m5.696s
00:26:42.637 user 0m10.780s
00:26:42.637 sys 0m0.178s
00:26:42.637 11:37:00 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:42.637 11:37:00 -- common/autotest_common.sh@10 -- # set +x
00:26:42.637 ************************************
00:26:42.637 END TEST bdev_verify_big_io
00:26:42.637 ************************************
00:26:42.637 11:37:00 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:42.637 11:37:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:26:42.637 11:37:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:42.637 11:37:00 -- common/autotest_common.sh@10 -- # set +x
00:26:42.637 ************************************
00:26:42.637 START TEST bdev_write_zeroes
00:26:42.637 ************************************
00:26:42.637 11:37:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:42.896 [2024-11-26 11:37:00.732835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-26 11:37:00.733020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103963 ]
00:26:42.896 [2024-11-26 11:37:00.896964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:42.896 [2024-11-26 11:37:00.932413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:42.896 Running I/O for 1 seconds...
00:26:44.274
00:26:44.274 Latency(us)
00:26:44.274 [2024-11-26T11:37:02.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.274 [2024-11-26T11:37:02.504Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:26:44.274 raid5f : 1.00 27149.87 106.05 0.00 0.00 4699.74 1601.16 6345.08
00:26:44.274 [2024-11-26T11:37:02.504Z] ===================================================================================================================
00:26:44.274 [2024-11-26T11:37:02.504Z] Total : 27149.87 106.05 0.00 0.00 4699.74 1601.16 6345.08
00:26:44.274
00:26:44.274 real 0m1.599s
00:26:44.274 user 0m1.325s
00:26:44.274 sys 0m0.162s
00:26:44.274 11:37:02 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:44.274 11:37:02 -- common/autotest_common.sh@10 -- # set +x
00:26:44.274 ************************************
00:26:44.274 END TEST bdev_write_zeroes
00:26:44.274 ************************************
00:26:44.274 11:37:02 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:44.274 11:37:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:26:44.274 11:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:44.274 11:37:02 -- common/autotest_common.sh@10 -- # set +x
00:26:44.274 ************************************
00:26:44.274 START TEST bdev_json_nonenclosed
00:26:44.274 ************************************
00:26:44.274 11:37:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:44.274 [2024-11-26 11:37:02.364269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-26 11:37:02.364411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104001 ]
00:26:44.533 [2024-11-26 11:37:02.516246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.533 [2024-11-26 11:37:02.547716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:44.533 [2024-11-26 11:37:02.547946] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:26:44.533 [2024-11-26 11:37:02.547979] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:44.533
00:26:44.533 real 0m0.319s
00:26:44.533 user 0m0.133s
00:26:44.533 sys 0m0.086s
00:26:44.533 11:37:02 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:44.533 11:37:02 -- common/autotest_common.sh@10 -- # set +x
00:26:44.533 ************************************
00:26:44.533 END TEST bdev_json_nonenclosed
00:26:44.533 ************************************
00:26:44.533 11:37:02 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:44.533 11:37:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:26:44.533 11:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:44.533 11:37:02 -- common/autotest_common.sh@10 -- # set +x
00:26:44.533 ************************************
00:26:44.533 START TEST bdev_json_nonarray
00:26:44.533 ************************************
00:26:44.533 11:37:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:44.533 [2024-11-26 11:37:02.753094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-26 11:37:02.753272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104022 ]
00:26:44.791 [2024-11-26 11:37:02.916169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.791 [2024-11-26 11:37:02.954069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:44.791 [2024-11-26 11:37:02.954321] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:26:44.791 [2024-11-26 11:37:02.954351] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:45.049 00:26:45.049 real 0m0.360s 00:26:45.049 user 0m0.162s 00:26:45.049 sys 0m0.096s 00:26:45.049 11:37:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:45.049 11:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.049 ************************************ 00:26:45.049 END TEST bdev_json_nonarray 00:26:45.049 ************************************ 00:26:45.049 11:37:03 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:26:45.049 11:37:03 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:26:45.049 11:37:03 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:26:45.049 11:37:03 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:26:45.049 11:37:03 -- bdev/blockdev.sh@809 -- # cleanup 00:26:45.049 11:37:03 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:45.049 11:37:03 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:45.049 11:37:03 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:26:45.049 11:37:03 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:26:45.049 11:37:03 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:26:45.049 11:37:03 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:26:45.049 00:26:45.049 real 0m33.467s 00:26:45.049 user 0m47.803s 00:26:45.049 sys 0m3.582s 00:26:45.049 11:37:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:45.049 11:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.049 ************************************ 00:26:45.049 END TEST blockdev_raid5f 00:26:45.049 ************************************ 00:26:45.049 11:37:03 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:26:45.049 11:37:03 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:26:45.049 11:37:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:45.049 11:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.049 11:37:03 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:26:45.049 11:37:03 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:26:45.049 11:37:03 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:26:45.049 11:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:26:46.954 Waiting for block devices as requested 00:26:46.954 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:47.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:26:47.526 Cleaning 00:26:47.526 Removing: /var/run/dpdk/spdk0/config 00:26:47.526 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:47.526 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:47.526 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:47.526 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:47.526 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:47.526 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:47.526 Removing: /dev/shm/spdk_tgt_trace.pid72211 00:26:47.526 Removing: /var/run/dpdk/spdk0 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100078 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100197 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100280 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100312 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100338 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100415 00:26:47.526 
Removing: /var/run/dpdk/spdk_pid100790 00:26:47.526 Removing: /var/run/dpdk/spdk_pid100820 00:26:47.526 Removing: /var/run/dpdk/spdk_pid101091 00:26:47.526 Removing: /var/run/dpdk/spdk_pid101205 00:26:47.526 Removing: /var/run/dpdk/spdk_pid101283 00:26:47.526 Removing: /var/run/dpdk/spdk_pid101318 00:26:47.526 Removing: /var/run/dpdk/spdk_pid101345 00:26:47.526 Removing: /var/run/dpdk/spdk_pid101365 00:26:47.526 Removing: /var/run/dpdk/spdk_pid102538 00:26:47.526 Removing: /var/run/dpdk/spdk_pid102650 00:26:47.526 Removing: /var/run/dpdk/spdk_pid102654 00:26:47.526 Removing: /var/run/dpdk/spdk_pid102671 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103129 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103213 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103346 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103381 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103407 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103649 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103806 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103882 00:26:47.526 Removing: /var/run/dpdk/spdk_pid103963 00:26:47.526 Removing: /var/run/dpdk/spdk_pid104001 00:26:47.526 Removing: /var/run/dpdk/spdk_pid104022 00:26:47.526 Removing: /var/run/dpdk/spdk_pid72048 00:26:47.526 Removing: /var/run/dpdk/spdk_pid72211 00:26:47.526 Removing: /var/run/dpdk/spdk_pid72455 00:26:47.526 Removing: /var/run/dpdk/spdk_pid72694 00:26:47.526 Removing: /var/run/dpdk/spdk_pid72856 00:26:47.526 Removing: /var/run/dpdk/spdk_pid72930 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73014 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73108 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73187 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73232 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73263 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73327 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73411 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73892 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73945 00:26:47.526 Removing: /var/run/dpdk/spdk_pid73996 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74008 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74071 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74087 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74156 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74171 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74220 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74238 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74280 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74298 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74425 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74462 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74498 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74576 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74630 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74650 00:26:47.526 Removing: /var/run/dpdk/spdk_pid74717 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74732 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74773 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74788 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74818 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74844 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74874 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74889 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74929 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74945 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74981 00:26:47.785 Removing: /var/run/dpdk/spdk_pid74996 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75031 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75052 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75082 00:26:47.785 Removing: 
/var/run/dpdk/spdk_pid75108 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75138 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75153 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75194 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75209 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75245 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75261 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75295 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75316 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75346 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75372 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75402 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75417 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75458 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75473 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75509 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75529 00:26:47.785 Removing: /var/run/dpdk/spdk_pid75564 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75583 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75621 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75645 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75678 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75704 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75734 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75749 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75791 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75863 00:26:47.786 Removing: /var/run/dpdk/spdk_pid75960 00:26:47.786 Removing: /var/run/dpdk/spdk_pid76117 00:26:47.786 Removing: /var/run/dpdk/spdk_pid76163 00:26:47.786 Removing: /var/run/dpdk/spdk_pid76194 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77328 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77511 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77680 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77768 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77861 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77903 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77923 00:26:47.786 Removing: /var/run/dpdk/spdk_pid77954 00:26:47.786 Removing: /var/run/dpdk/spdk_pid78361 00:26:47.786 Removing: /var/run/dpdk/spdk_pid78427 00:26:47.786 Removing: /var/run/dpdk/spdk_pid78517 00:26:47.786 Removing: /var/run/dpdk/spdk_pid78559 00:26:47.786 Removing: /var/run/dpdk/spdk_pid79601 00:26:47.786 Removing: /var/run/dpdk/spdk_pid80378 00:26:47.786 Removing: /var/run/dpdk/spdk_pid81155 00:26:47.786 Removing: /var/run/dpdk/spdk_pid82139 00:26:47.786 Removing: /var/run/dpdk/spdk_pid83092 00:26:47.786 Removing: /var/run/dpdk/spdk_pid84040 00:26:47.786 Removing: /var/run/dpdk/spdk_pid85360 00:26:47.786 Removing: /var/run/dpdk/spdk_pid86420 00:26:47.786 Removing: /var/run/dpdk/spdk_pid87474 00:26:47.786 Removing: /var/run/dpdk/spdk_pid88072 00:26:47.786 Removing: /var/run/dpdk/spdk_pid88560 00:26:47.786 Removing: /var/run/dpdk/spdk_pid89123 00:26:47.786 Removing: /var/run/dpdk/spdk_pid89544 00:26:47.786 Removing: /var/run/dpdk/spdk_pid90053 00:26:47.786 Removing: /var/run/dpdk/spdk_pid90549 00:26:47.786 Removing: /var/run/dpdk/spdk_pid91150 00:26:47.786 Removing: /var/run/dpdk/spdk_pid91611 00:26:47.786 Removing: /var/run/dpdk/spdk_pid92820 00:26:47.786 Removing: /var/run/dpdk/spdk_pid93351 00:26:47.786 Removing: /var/run/dpdk/spdk_pid93832 00:26:47.786 Removing: /var/run/dpdk/spdk_pid95153 00:26:47.786 Removing: /var/run/dpdk/spdk_pid95740 00:26:47.786 Removing: /var/run/dpdk/spdk_pid96293 00:26:47.786 Removing: /var/run/dpdk/spdk_pid96984 00:26:47.786 Removing: /var/run/dpdk/spdk_pid97014 00:26:47.786 Removing: /var/run/dpdk/spdk_pid97054 00:26:47.786 Removing: /var/run/dpdk/spdk_pid97092 
00:26:47.786 Removing: /var/run/dpdk/spdk_pid97215 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97351 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97573 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97838 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97852 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97884 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97898 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97912 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97931 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97945 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97965 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97983 00:26:48.045 Removing: /var/run/dpdk/spdk_pid97992 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98011 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98026 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98039 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98054 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98073 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98086 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98101 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98120 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98138 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98148 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98179 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98192 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98221 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98292 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98318 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98330 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98354 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98370 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98373 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98415 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98427 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98455 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98463 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98472 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98479 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98489 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98492 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98506 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98509 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98542 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98564 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98579 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98604 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98615 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98623 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98665 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98672 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98705 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98708 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98722 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98725 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98738 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98742 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98751 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98759 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98836 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98879 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98990 00:26:48.045 Removing: /var/run/dpdk/spdk_pid98999 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99033 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99078 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99099 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99115 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99136 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99165 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99182 00:26:48.045 Removing: 
/var/run/dpdk/spdk_pid99258 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99293 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99331 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99562 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99658 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99692 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99775 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99838 00:26:48.045 Removing: /var/run/dpdk/spdk_pid99865 00:26:48.045 Clean 00:26:48.305 killing process with pid 63227 00:26:48.305 killing process with pid 63228 00:26:48.305 11:37:06 -- common/autotest_common.sh@1446 -- # return 0 00:26:48.305 11:37:06 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:26:48.305 11:37:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.305 11:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:48.305 11:37:06 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:26:48.305 11:37:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.305 11:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:48.305 11:37:06 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:48.305 11:37:06 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:48.305 11:37:06 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:48.305 11:37:06 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:26:48.305 11:37:06 -- spdk/autotest.sh@383 -- # hostname 00:26:48.305 11:37:06 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:48.565 geninfo: WARNING: invalid characters removed from testname! 
00:27:35.254 11:37:50 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:37.873 11:37:55 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:40.409 11:37:58 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.699 11:38:01 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:46.235 11:38:04 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.771 11:38:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:52.057 11:38:09 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:52.057 11:38:09 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:52.057 11:38:09 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:52.057 11:38:09 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:52.057 11:38:09 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:52.057 11:38:09 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:52.057 11:38:09 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:27:52.057 11:38:09 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:52.057 11:38:09 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:52.058 11:38:09 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:52.058 11:38:09 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:52.058 11:38:09 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:52.058 11:38:09 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:52.058 11:38:09 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:52.058 11:38:09 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:52.058 11:38:09 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:27:52.058 11:38:09 -- scripts/common.sh@343 -- $ case "$op" in 00:27:52.058 11:38:09 -- scripts/common.sh@344 -- $ : 1 00:27:52.058 11:38:09 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:52.058 11:38:09 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:52.058 11:38:09 -- scripts/common.sh@364 -- $ decimal 1 00:27:52.058 11:38:09 -- scripts/common.sh@352 -- $ local d=1 00:27:52.058 11:38:09 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:52.058 11:38:09 -- scripts/common.sh@354 -- $ echo 1 00:27:52.058 11:38:09 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:52.058 11:38:09 -- scripts/common.sh@365 -- $ decimal 2 00:27:52.058 11:38:09 -- scripts/common.sh@352 -- $ local d=2 00:27:52.058 11:38:09 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:52.058 11:38:09 -- scripts/common.sh@354 -- $ echo 2 00:27:52.058 11:38:09 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:52.058 11:38:09 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:52.058 11:38:09 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:52.058 11:38:09 -- scripts/common.sh@367 -- $ return 0 00:27:52.058 11:38:09 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.058 11:38:09 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:52.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.058 --rc genhtml_branch_coverage=1 00:27:52.058 --rc genhtml_function_coverage=1 00:27:52.058 --rc genhtml_legend=1 00:27:52.058 --rc geninfo_all_blocks=1 00:27:52.058 --rc geninfo_unexecuted_blocks=1 00:27:52.058 00:27:52.058 ' 00:27:52.058 11:38:09 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:52.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.058 --rc genhtml_branch_coverage=1 00:27:52.058 --rc genhtml_function_coverage=1 00:27:52.058 --rc genhtml_legend=1 00:27:52.058 --rc geninfo_all_blocks=1 00:27:52.058 --rc geninfo_unexecuted_blocks=1 00:27:52.058 00:27:52.058 ' 00:27:52.058 11:38:09 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:52.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.058 --rc genhtml_branch_coverage=1 00:27:52.058 --rc genhtml_function_coverage=1 00:27:52.058 --rc genhtml_legend=1 00:27:52.058 --rc geninfo_all_blocks=1 00:27:52.058 --rc geninfo_unexecuted_blocks=1 00:27:52.058 00:27:52.058 ' 00:27:52.058 11:38:09 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:52.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.058 --rc genhtml_branch_coverage=1 00:27:52.058 --rc genhtml_function_coverage=1 00:27:52.058 --rc genhtml_legend=1 00:27:52.058 --rc geninfo_all_blocks=1 00:27:52.058 --rc geninfo_unexecuted_blocks=1 00:27:52.058 00:27:52.058 ' 00:27:52.058 11:38:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.058 11:38:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:52.058 11:38:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.058 11:38:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.058 11:38:09 -- paths/export.sh@2 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.058 11:38:09 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.058 11:38:09 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.058 11:38:09 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.058 11:38:09 -- paths/export.sh@6 -- $ export PATH 00:27:52.058 11:38:09 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:52.058 11:38:09 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:52.058 11:38:09 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:52.058 11:38:09 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732621089.XXXXXX 00:27:52.058 11:38:09 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732621089.9xVGxq 00:27:52.058 11:38:09 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:52.058 11:38:09 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:52.058 11:38:09 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:52.058 11:38:09 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:52.058 11:38:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:52.058 11:38:09 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:52.058 11:38:09 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:52.058 11:38:09 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:52.058 11:38:09 -- common/autotest_common.sh@10 -- $ set +x 00:27:52.058 11:38:09 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:27:52.058 11:38:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:52.058 11:38:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:52.058 11:38:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:52.058 11:38:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:52.058 11:38:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:52.058 11:38:09 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:27:52.058 11:38:09 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:27:52.058 11:38:09 -- common/autotest_common.sh@10 -- $ set +x 00:27:52.058 11:38:09 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:27:52.058 11:38:09 -- spdk/autopackage.sh@36 -- $ [[ -n v23.11 ]] 00:27:52.058 11:38:09 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:27:52.058 11:38:09 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:27:52.058 11:38:09 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:52.058 11:38:09 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:52.058 11:38:09 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:27:52.058 11:38:09 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:27:52.058 11:38:09 -- spdk/autopackage.sh@40 -- $ get_config_params 00:27:52.058 11:38:09 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:27:52.058 11:38:09 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:52.058 11:38:09 -- common/autotest_common.sh@10 -- $ set +x 00:27:52.058 11:38:09 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:27:52.058 11:38:09 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:27:52.058 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:27:52.058 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:27:52.058 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:27:52.058 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:52.317 Using 'verbs' RDMA provider 00:28:05.095 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:28:17.306 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:28:17.306 Creating mk/config.mk...done. 00:28:17.306 Creating mk/cc.flags.mk...done. 00:28:17.306 Type 'make' to build. 00:28:17.306 11:38:34 -- spdk/autopackage.sh@43 -- $ make -j10 00:28:17.306 make[1]: Nothing to be done for 'all'. 
00:28:17.306 CC lib/log/log.o 00:28:17.306 CC lib/log/log_deprecated.o 00:28:17.306 CC lib/log/log_flags.o 00:28:17.306 CC lib/ut/ut.o 00:28:17.306 CC lib/ut_mock/mock.o 00:28:17.306 LIB libspdk_ut_mock.a 00:28:17.306 LIB libspdk_log.a 00:28:17.306 LIB libspdk_ut.a 00:28:17.306 CC lib/ioat/ioat.o 00:28:17.306 CXX lib/trace_parser/trace.o 00:28:17.306 CC lib/dma/dma.o 00:28:17.306 CC lib/util/base64.o 00:28:17.306 CC lib/util/bit_array.o 00:28:17.306 CC lib/util/crc16.o 00:28:17.306 CC lib/util/cpuset.o 00:28:17.306 CC lib/util/crc32.o 00:28:17.306 CC lib/util/crc32c.o 00:28:17.306 CC lib/vfio_user/host/vfio_user_pci.o 00:28:17.306 CC lib/util/crc32_ieee.o 00:28:17.306 CC lib/vfio_user/host/vfio_user.o 00:28:17.306 CC lib/util/crc64.o 00:28:17.306 CC lib/util/dif.o 00:28:17.306 LIB libspdk_dma.a 00:28:17.306 CC lib/util/fd.o 00:28:17.306 CC lib/util/file.o 00:28:17.306 CC lib/util/hexlify.o 00:28:17.306 LIB libspdk_ioat.a 00:28:17.306 CC lib/util/iov.o 00:28:17.306 CC lib/util/math.o 00:28:17.306 CC lib/util/pipe.o 00:28:17.306 CC lib/util/strerror_tls.o 00:28:17.306 CC lib/util/string.o 00:28:17.306 LIB libspdk_vfio_user.a 00:28:17.306 CC lib/util/uuid.o 00:28:17.306 CC lib/util/fd_group.o 00:28:17.306 CC lib/util/xor.o 00:28:17.306 CC lib/util/zipf.o 00:28:17.565 LIB libspdk_util.a 00:28:17.565 CC lib/conf/conf.o 00:28:17.565 CC lib/rdma/common.o 00:28:17.565 CC lib/rdma/rdma_verbs.o 00:28:17.565 CC lib/json/json_parse.o 00:28:17.565 CC lib/vmd/vmd.o 00:28:17.565 CC lib/idxd/idxd.o 00:28:17.565 CC lib/vmd/led.o 00:28:17.565 CC lib/json/json_util.o 00:28:17.565 CC lib/env_dpdk/env.o 00:28:17.565 LIB libspdk_trace_parser.a 00:28:17.565 CC lib/idxd/idxd_user.o 00:28:17.824 CC lib/idxd/idxd_kernel.o 00:28:17.824 LIB libspdk_conf.a 00:28:17.824 CC lib/json/json_write.o 00:28:17.824 CC lib/env_dpdk/memory.o 00:28:17.824 CC lib/env_dpdk/pci.o 00:28:17.824 CC lib/env_dpdk/init.o 00:28:17.824 LIB libspdk_rdma.a 00:28:17.824 CC lib/env_dpdk/threads.o 00:28:17.824 CC lib/env_dpdk/pci_ioat.o 00:28:17.824 CC lib/env_dpdk/pci_virtio.o 00:28:18.082 LIB libspdk_idxd.a 00:28:18.082 CC lib/env_dpdk/pci_vmd.o 00:28:18.082 CC lib/env_dpdk/pci_idxd.o 00:28:18.082 CC lib/env_dpdk/pci_event.o 00:28:18.082 LIB libspdk_vmd.a 00:28:18.082 LIB libspdk_json.a 00:28:18.082 CC lib/env_dpdk/sigbus_handler.o 00:28:18.082 CC lib/env_dpdk/pci_dpdk.o 00:28:18.082 CC lib/env_dpdk/pci_dpdk_2207.o 00:28:18.082 CC lib/env_dpdk/pci_dpdk_2211.o 00:28:18.082 CC lib/jsonrpc/jsonrpc_server.o 00:28:18.082 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:28:18.082 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:28:18.082 CC lib/jsonrpc/jsonrpc_client.o 00:28:18.341 LIB libspdk_jsonrpc.a 00:28:18.600 CC lib/rpc/rpc.o 00:28:18.600 LIB libspdk_rpc.a 00:28:18.859 CC lib/notify/notify.o 00:28:18.859 CC lib/notify/notify_rpc.o 00:28:18.859 CC lib/trace/trace.o 00:28:18.859 CC lib/trace/trace_flags.o 00:28:18.859 CC lib/trace/trace_rpc.o 00:28:18.859 CC lib/sock/sock.o 00:28:18.859 CC lib/sock/sock_rpc.o 00:28:18.859 LIB libspdk_notify.a 00:28:19.118 LIB libspdk_trace.a 00:28:19.118 LIB libspdk_env_dpdk.a 00:28:19.118 LIB libspdk_sock.a 00:28:19.118 CC lib/thread/thread.o 00:28:19.118 CC lib/thread/iobuf.o 00:28:19.118 CC lib/nvme/nvme_ctrlr_cmd.o 00:28:19.118 CC lib/nvme/nvme_ctrlr.o 00:28:19.118 CC lib/nvme/nvme_fabric.o 00:28:19.118 CC lib/nvme/nvme_ns.o 00:28:19.118 CC lib/nvme/nvme_qpair.o 00:28:19.118 CC lib/nvme/nvme_pcie_common.o 00:28:19.118 CC lib/nvme/nvme_ns_cmd.o 00:28:19.118 CC lib/nvme/nvme_pcie.o 00:28:19.376 CC lib/nvme/nvme.o 00:28:19.945 
LIB libspdk_thread.a 00:28:19.945 CC lib/nvme/nvme_quirks.o 00:28:20.204 CC lib/nvme/nvme_transport.o 00:28:20.204 CC lib/nvme/nvme_discovery.o 00:28:20.204 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:28:20.204 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:28:20.204 CC lib/nvme/nvme_tcp.o 00:28:20.204 CC lib/nvme/nvme_opal.o 00:28:20.204 CC lib/nvme/nvme_io_msg.o 00:28:20.463 CC lib/nvme/nvme_poll_group.o 00:28:20.721 CC lib/nvme/nvme_zns.o 00:28:20.721 CC lib/nvme/nvme_cuse.o 00:28:20.721 CC lib/nvme/nvme_vfio_user.o 00:28:20.981 CC lib/nvme/nvme_rdma.o 00:28:20.981 CC lib/accel/accel.o 00:28:20.981 CC lib/blob/blobstore.o 00:28:20.981 CC lib/init/json_config.o 00:28:20.981 CC lib/init/subsystem.o 00:28:21.239 CC lib/init/subsystem_rpc.o 00:28:21.239 CC lib/init/rpc.o 00:28:21.239 CC lib/accel/accel_rpc.o 00:28:21.239 CC lib/virtio/virtio.o 00:28:21.498 CC lib/virtio/virtio_vhost_user.o 00:28:21.498 LIB libspdk_init.a 00:28:21.498 CC lib/virtio/virtio_vfio_user.o 00:28:21.498 CC lib/virtio/virtio_pci.o 00:28:21.498 CC lib/blob/request.o 00:28:21.498 CC lib/event/app.o 00:28:21.498 CC lib/accel/accel_sw.o 00:28:21.498 CC lib/event/reactor.o 00:28:21.757 CC lib/blob/zeroes.o 00:28:21.757 CC lib/blob/blob_bs_dev.o 00:28:21.757 LIB libspdk_virtio.a 00:28:21.757 CC lib/event/log_rpc.o 00:28:21.757 CC lib/event/app_rpc.o 00:28:21.757 CC lib/event/scheduler_static.o 00:28:21.757 LIB libspdk_accel.a 00:28:22.016 LIB libspdk_event.a 00:28:22.016 CC lib/bdev/bdev.o 00:28:22.016 CC lib/bdev/bdev_zone.o 00:28:22.016 CC lib/bdev/bdev_rpc.o 00:28:22.016 CC lib/bdev/part.o 00:28:22.016 CC lib/bdev/scsi_nvme.o 00:28:22.275 LIB libspdk_nvme.a 00:28:22.844 LIB libspdk_blob.a 00:28:22.844 CC lib/lvol/lvol.o 00:28:22.844 CC lib/blobfs/tree.o 00:28:22.844 CC lib/blobfs/blobfs.o 00:28:23.413 LIB libspdk_blobfs.a 00:28:23.413 LIB libspdk_lvol.a 00:28:23.413 LIB libspdk_bdev.a 00:28:23.671 CC lib/scsi/dev.o 00:28:23.671 CC lib/nbd/nbd.o 00:28:23.671 CC lib/scsi/lun.o 00:28:23.671 CC lib/nbd/nbd_rpc.o 00:28:23.671 CC lib/ublk/ublk.o 00:28:23.671 CC lib/scsi/port.o 00:28:23.671 CC lib/ublk/ublk_rpc.o 00:28:23.671 CC lib/scsi/scsi.o 00:28:23.671 CC lib/nvmf/ctrlr.o 00:28:23.671 CC lib/ftl/ftl_core.o 00:28:23.671 CC lib/scsi/scsi_bdev.o 00:28:23.671 CC lib/scsi/scsi_pr.o 00:28:23.671 CC lib/nvmf/ctrlr_discovery.o 00:28:23.671 CC lib/scsi/scsi_rpc.o 00:28:23.930 CC lib/scsi/task.o 00:28:23.930 CC lib/ftl/ftl_init.o 00:28:23.930 CC lib/ftl/ftl_layout.o 00:28:23.930 CC lib/nvmf/ctrlr_bdev.o 00:28:23.930 LIB libspdk_nbd.a 00:28:23.930 CC lib/ftl/ftl_debug.o 00:28:23.930 CC lib/nvmf/subsystem.o 00:28:23.930 LIB libspdk_ublk.a 00:28:23.930 CC lib/ftl/ftl_io.o 00:28:23.930 CC lib/ftl/ftl_sb.o 00:28:23.930 CC lib/ftl/ftl_l2p.o 00:28:23.930 CC lib/ftl/ftl_l2p_flat.o 00:28:24.189 LIB libspdk_scsi.a 00:28:24.189 CC lib/ftl/ftl_nv_cache.o 00:28:24.189 CC lib/ftl/ftl_band.o 00:28:24.189 CC lib/ftl/ftl_band_ops.o 00:28:24.189 CC lib/nvmf/nvmf.o 00:28:24.189 CC lib/nvmf/nvmf_rpc.o 00:28:24.189 CC lib/ftl/ftl_writer.o 00:28:24.189 CC lib/iscsi/conn.o 00:28:24.189 CC lib/vhost/vhost.o 00:28:24.448 CC lib/vhost/vhost_rpc.o 00:28:24.448 CC lib/vhost/vhost_scsi.o 00:28:24.448 CC lib/iscsi/init_grp.o 00:28:24.448 CC lib/vhost/vhost_blk.o 00:28:24.707 CC lib/vhost/rte_vhost_user.o 00:28:24.707 CC lib/iscsi/iscsi.o 00:28:24.707 CC lib/nvmf/transport.o 00:28:24.707 CC lib/iscsi/md5.o 00:28:24.707 CC lib/iscsi/param.o 00:28:24.707 CC lib/ftl/ftl_rq.o 00:28:24.966 CC lib/ftl/ftl_reloc.o 00:28:24.966 CC lib/iscsi/portal_grp.o 00:28:24.966 CC 
lib/iscsi/tgt_node.o 00:28:24.966 CC lib/nvmf/tcp.o 00:28:25.224 CC lib/ftl/ftl_l2p_cache.o 00:28:25.224 CC lib/ftl/ftl_p2l.o 00:28:25.224 CC lib/ftl/mngt/ftl_mngt.o 00:28:25.224 CC lib/iscsi/iscsi_subsystem.o 00:28:25.224 CC lib/iscsi/iscsi_rpc.o 00:28:25.483 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:28:25.483 CC lib/iscsi/task.o 00:28:25.483 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:28:25.483 CC lib/ftl/mngt/ftl_mngt_startup.o 00:28:25.483 CC lib/ftl/mngt/ftl_mngt_md.o 00:28:25.483 CC lib/ftl/mngt/ftl_mngt_misc.o 00:28:25.483 CC lib/nvmf/rdma.o 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_band.o 00:28:25.742 LIB libspdk_iscsi.a 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:28:25.742 LIB libspdk_vhost.a 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:28:25.742 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:28:25.742 CC lib/ftl/utils/ftl_conf.o 00:28:25.742 CC lib/ftl/utils/ftl_md.o 00:28:25.742 CC lib/ftl/utils/ftl_mempool.o 00:28:25.742 CC lib/ftl/utils/ftl_bitmap.o 00:28:26.001 CC lib/ftl/utils/ftl_property.o 00:28:26.001 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:28:26.001 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:28:26.001 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:28:26.001 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:28:26.001 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:28:26.001 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:28:26.001 CC lib/ftl/upgrade/ftl_sb_v3.o 00:28:26.001 CC lib/ftl/upgrade/ftl_sb_v5.o 00:28:26.001 CC lib/ftl/nvc/ftl_nvc_dev.o 00:28:26.001 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:28:26.260 CC lib/ftl/base/ftl_base_dev.o 00:28:26.260 CC lib/ftl/base/ftl_base_bdev.o 00:28:26.260 LIB libspdk_ftl.a 00:28:26.521 LIB libspdk_nvmf.a 00:28:26.798 CC module/env_dpdk/env_dpdk_rpc.o 00:28:26.798 CC module/sock/posix/posix.o 00:28:26.798 CC module/accel/error/accel_error.o 00:28:26.798 CC module/blob/bdev/blob_bdev.o 00:28:26.798 CC module/accel/ioat/accel_ioat.o 00:28:26.799 CC module/accel/dsa/accel_dsa.o 00:28:26.799 CC module/accel/iaa/accel_iaa.o 00:28:26.799 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:28:26.799 CC module/scheduler/gscheduler/gscheduler.o 00:28:26.799 CC module/scheduler/dynamic/scheduler_dynamic.o 00:28:27.080 LIB libspdk_env_dpdk_rpc.a 00:28:27.080 CC module/accel/iaa/accel_iaa_rpc.o 00:28:27.080 LIB libspdk_scheduler_gscheduler.a 00:28:27.080 LIB libspdk_scheduler_dpdk_governor.a 00:28:27.080 CC module/accel/error/accel_error_rpc.o 00:28:27.080 CC module/accel/dsa/accel_dsa_rpc.o 00:28:27.080 CC module/accel/ioat/accel_ioat_rpc.o 00:28:27.080 LIB libspdk_scheduler_dynamic.a 00:28:27.080 LIB libspdk_blob_bdev.a 00:28:27.080 LIB libspdk_accel_iaa.a 00:28:27.080 LIB libspdk_accel_ioat.a 00:28:27.080 LIB libspdk_accel_error.a 00:28:27.080 LIB libspdk_accel_dsa.a 00:28:27.352 CC module/blobfs/bdev/blobfs_bdev.o 00:28:27.352 CC module/bdev/lvol/vbdev_lvol.o 00:28:27.352 CC module/bdev/delay/vbdev_delay.o 00:28:27.352 CC module/bdev/gpt/gpt.o 00:28:27.352 CC module/bdev/malloc/bdev_malloc.o 00:28:27.352 CC module/bdev/error/vbdev_error.o 00:28:27.352 CC module/bdev/null/bdev_null.o 00:28:27.352 CC module/bdev/passthru/vbdev_passthru.o 00:28:27.352 CC module/bdev/nvme/bdev_nvme.o 00:28:27.352 CC module/bdev/gpt/vbdev_gpt.o 00:28:27.352 LIB libspdk_sock_posix.a 00:28:27.352 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:28:27.352 CC module/bdev/nvme/bdev_nvme_rpc.o 00:28:27.610 CC module/bdev/error/vbdev_error_rpc.o 00:28:27.610 CC 
module/bdev/null/bdev_null_rpc.o 00:28:27.610 CC module/bdev/malloc/bdev_malloc_rpc.o 00:28:27.610 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:28:27.610 CC module/bdev/delay/vbdev_delay_rpc.o 00:28:27.610 LIB libspdk_blobfs_bdev.a 00:28:27.610 CC module/bdev/nvme/nvme_rpc.o 00:28:27.610 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:28:27.610 LIB libspdk_bdev_gpt.a 00:28:27.610 LIB libspdk_bdev_error.a 00:28:27.610 LIB libspdk_bdev_null.a 00:28:27.610 LIB libspdk_bdev_passthru.a 00:28:27.610 LIB libspdk_bdev_malloc.a 00:28:27.610 LIB libspdk_bdev_delay.a 00:28:27.869 CC module/bdev/raid/bdev_raid.o 00:28:27.869 CC module/bdev/split/vbdev_split.o 00:28:27.869 CC module/bdev/raid/bdev_raid_rpc.o 00:28:27.869 CC module/bdev/raid/bdev_raid_sb.o 00:28:27.869 CC module/bdev/zone_block/vbdev_zone_block.o 00:28:27.869 CC module/bdev/aio/bdev_aio.o 00:28:27.869 CC module/bdev/aio/bdev_aio_rpc.o 00:28:27.869 LIB libspdk_bdev_lvol.a 00:28:27.869 CC module/bdev/raid/raid0.o 00:28:27.869 CC module/bdev/split/vbdev_split_rpc.o 00:28:27.869 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:28:28.127 CC module/bdev/ftl/bdev_ftl.o 00:28:28.127 CC module/bdev/iscsi/bdev_iscsi.o 00:28:28.127 CC module/bdev/virtio/bdev_virtio_scsi.o 00:28:28.127 CC module/bdev/virtio/bdev_virtio_blk.o 00:28:28.127 LIB libspdk_bdev_aio.a 00:28:28.127 CC module/bdev/ftl/bdev_ftl_rpc.o 00:28:28.127 CC module/bdev/virtio/bdev_virtio_rpc.o 00:28:28.127 LIB libspdk_bdev_split.a 00:28:28.127 LIB libspdk_bdev_zone_block.a 00:28:28.127 CC module/bdev/raid/raid1.o 00:28:28.127 CC module/bdev/raid/concat.o 00:28:28.385 CC module/bdev/raid/raid5f.o 00:28:28.385 CC module/bdev/nvme/bdev_mdns_client.o 00:28:28.385 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:28:28.385 LIB libspdk_bdev_ftl.a 00:28:28.385 CC module/bdev/nvme/vbdev_opal.o 00:28:28.385 CC module/bdev/nvme/vbdev_opal_rpc.o 00:28:28.385 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:28:28.385 LIB libspdk_bdev_virtio.a 00:28:28.385 LIB libspdk_bdev_iscsi.a 00:28:28.644 LIB libspdk_bdev_nvme.a 00:28:28.644 LIB libspdk_bdev_raid.a 00:28:28.903 CC module/event/subsystems/iobuf/iobuf.o 00:28:28.903 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:28:28.903 CC module/event/subsystems/vmd/vmd.o 00:28:28.903 CC module/event/subsystems/sock/sock.o 00:28:28.903 CC module/event/subsystems/vmd/vmd_rpc.o 00:28:28.903 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:28:28.903 CC module/event/subsystems/scheduler/scheduler.o 00:28:28.903 LIB libspdk_event_sock.a 00:28:28.903 LIB libspdk_event_vmd.a 00:28:28.903 LIB libspdk_event_vhost_blk.a 00:28:28.903 LIB libspdk_event_scheduler.a 00:28:29.162 LIB libspdk_event_iobuf.a 00:28:29.162 CC module/event/subsystems/accel/accel.o 00:28:29.162 LIB libspdk_event_accel.a 00:28:29.421 CC module/event/subsystems/bdev/bdev.o 00:28:29.680 LIB libspdk_event_bdev.a 00:28:29.680 CC module/event/subsystems/scsi/scsi.o 00:28:29.680 CC module/event/subsystems/ublk/ublk.o 00:28:29.680 CC module/event/subsystems/nbd/nbd.o 00:28:29.680 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:28:29.680 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:28:29.680 LIB libspdk_event_nbd.a 00:28:29.939 LIB libspdk_event_ublk.a 00:28:29.939 LIB libspdk_event_scsi.a 00:28:29.939 LIB libspdk_event_nvmf.a 00:28:29.939 CC module/event/subsystems/iscsi/iscsi.o 00:28:29.939 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:28:30.199 LIB libspdk_event_iscsi.a 00:28:30.199 LIB libspdk_event_vhost_scsi.a 00:28:30.199 CXX app/trace/trace.o 00:28:30.199 CC app/trace_record/trace_record.o 
00:28:30.199 CC app/spdk_nvme_identify/identify.o 00:28:30.199 CC app/spdk_lspci/spdk_lspci.o 00:28:30.199 CC app/spdk_nvme_perf/perf.o 00:28:30.458 CC app/nvmf_tgt/nvmf_main.o 00:28:30.458 CC app/iscsi_tgt/iscsi_tgt.o 00:28:30.458 CC examples/accel/perf/accel_perf.o 00:28:30.458 CC app/spdk_tgt/spdk_tgt.o 00:28:30.458 CC test/accel/dif/dif.o 00:28:30.458 LINK spdk_lspci 00:28:30.458 LINK spdk_trace_record 00:28:30.458 LINK nvmf_tgt 00:28:30.717 LINK iscsi_tgt 00:28:30.717 LINK spdk_tgt 00:28:30.717 LINK accel_perf 00:28:30.717 LINK dif 00:28:30.717 LINK spdk_trace 00:28:30.976 LINK spdk_nvme_identify 00:28:30.976 LINK spdk_nvme_perf 00:28:35.161 CC app/spdk_nvme_discover/discovery_aer.o 00:28:36.538 LINK spdk_nvme_discover 00:28:46.515 CC test/app/bdev_svc/bdev_svc.o 00:28:46.774 LINK bdev_svc 00:29:01.680 CC examples/bdev/hello_world/hello_bdev.o 00:29:01.680 CC app/spdk_top/spdk_top.o 00:29:01.680 LINK hello_bdev 00:29:04.966 LINK spdk_top 00:29:05.532 CC examples/bdev/bdevperf/bdevperf.o 00:29:09.720 LINK bdevperf 00:29:09.720 CC app/vhost/vhost.o 00:29:11.101 LINK vhost 00:29:13.003 CC app/spdk_dd/spdk_dd.o 00:29:14.904 LINK spdk_dd 00:29:24.876 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:29:25.877 LINK nvme_fuzz 00:30:47.312 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:30:47.312 LINK iscsi_fuzz 00:31:13.849 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:31:13.849 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:31:15.228 LINK vhost_fuzz 00:31:30.105 CC app/fio/nvme/fio_plugin.o 00:31:31.042 LINK spdk_nvme 00:31:32.945 CC test/app/histogram_perf/histogram_perf.o 00:31:33.882 LINK histogram_perf 00:31:38.075 CC test/app/jsoncat/jsoncat.o 00:31:38.334 CC examples/blob/hello_world/hello_blob.o 00:31:38.593 LINK jsoncat 00:31:39.974 LINK hello_blob 00:31:40.233 CC examples/ioat/perf/perf.o 00:31:41.610 LINK ioat_perf 00:31:49.821 CC examples/ioat/verify/verify.o 00:31:49.821 CC examples/blob/cli/blobcli.o 00:31:49.821 LINK verify 00:31:50.388 LINK blobcli 00:31:51.765 CC test/bdev/bdevio/bdevio.o 00:31:51.765 CC test/app/stub/stub.o 00:31:52.332 LINK stub 00:31:52.898 LINK bdevio 00:31:53.157 CC app/fio/bdev/fio_plugin.o 00:31:54.533 LINK spdk_bdev 00:31:54.533 CC test/blobfs/mkfs/mkfs.o 00:31:54.792 CC examples/nvme/hello_world/hello_world.o 00:31:55.359 LINK mkfs 00:31:55.359 LINK hello_world 00:31:57.261 TEST_HEADER include/spdk/config.h 00:31:57.261 CXX test/cpp_headers/accel.o 00:31:57.829 CXX test/cpp_headers/accel_module.o 00:31:58.087 CC examples/sock/hello_world/hello_sock.o 00:31:59.024 CXX test/cpp_headers/assert.o 00:31:59.282 LINK hello_sock 00:31:59.850 CXX test/cpp_headers/barrier.o 00:32:01.225 CXX test/cpp_headers/base64.o 00:32:02.158 CXX test/cpp_headers/bdev.o 00:32:03.536 CXX test/cpp_headers/bdev_module.o 00:32:05.443 CXX test/cpp_headers/bdev_zone.o 00:32:07.350 CXX test/cpp_headers/bit_array.o 00:32:07.610 CC examples/vmd/lsvmd/lsvmd.o 00:32:08.549 LINK lsvmd 00:32:08.807 CXX test/cpp_headers/bit_pool.o 00:32:10.188 CXX test/cpp_headers/blob.o 00:32:12.093 CXX test/cpp_headers/blob_bdev.o 00:32:14.002 CXX test/cpp_headers/blobfs.o 00:32:15.908 CXX test/cpp_headers/blobfs_bdev.o 00:32:18.442 CXX test/cpp_headers/conf.o 00:32:19.821 CXX test/cpp_headers/config.o 00:32:19.821 CXX test/cpp_headers/cpuset.o 00:32:21.719 CXX test/cpp_headers/crc16.o 00:32:23.671 CXX test/cpp_headers/crc32.o 00:32:24.238 CXX test/cpp_headers/crc64.o 00:32:24.804 CXX test/cpp_headers/dif.o 00:32:26.183 CC examples/nvmf/nvmf/nvmf.o 00:32:26.183 CXX test/cpp_headers/dma.o 00:32:28.086 CXX 
test/cpp_headers/endian.o 00:32:28.654 LINK nvmf 00:32:29.221 CXX test/cpp_headers/env.o 00:32:30.600 CXX test/cpp_headers/env_dpdk.o 00:32:31.975 CXX test/cpp_headers/event.o 00:32:33.353 CC examples/nvme/reconnect/reconnect.o 00:32:33.353 CXX test/cpp_headers/fd.o 00:32:35.258 CXX test/cpp_headers/fd_group.o 00:32:35.517 LINK reconnect 00:32:35.777 CXX test/cpp_headers/file.o 00:32:36.715 CC examples/util/zipf/zipf.o 00:32:37.284 CXX test/cpp_headers/ftl.o 00:32:37.852 LINK zipf 00:32:38.790 CXX test/cpp_headers/gpt_spec.o 00:32:40.168 CXX test/cpp_headers/hexlify.o 00:32:41.546 CXX test/cpp_headers/histogram_data.o 00:32:42.924 CXX test/cpp_headers/idxd.o 00:32:42.924 CC examples/vmd/led/led.o 00:32:43.860 CXX test/cpp_headers/idxd_spec.o 00:32:44.119 LINK led 00:32:44.686 CC test/dma/test_dma/test_dma.o 00:32:45.254 CXX test/cpp_headers/init.o 00:32:46.630 CXX test/cpp_headers/ioat.o 00:32:47.567 LINK test_dma 00:32:48.135 CXX test/cpp_headers/ioat_spec.o 00:32:49.514 CXX test/cpp_headers/iscsi_spec.o 00:32:51.421 CXX test/cpp_headers/json.o 00:32:53.327 CXX test/cpp_headers/jsonrpc.o 00:32:54.706 CXX test/cpp_headers/likely.o 00:32:56.653 CXX test/cpp_headers/log.o 00:32:58.557 CXX test/cpp_headers/lvol.o 00:32:59.937 CXX test/cpp_headers/memory.o 00:33:01.839 CXX test/cpp_headers/mmio.o 00:33:03.741 CXX test/cpp_headers/nbd.o 00:33:03.741 CXX test/cpp_headers/notify.o 00:33:05.640 CXX test/cpp_headers/nvme.o 00:33:07.542 CXX test/cpp_headers/nvme_intel.o 00:33:09.444 CXX test/cpp_headers/nvme_ocssd.o 00:33:11.975 CXX test/cpp_headers/nvme_ocssd_spec.o 00:33:13.352 CXX test/cpp_headers/nvme_spec.o 00:33:15.252 CXX test/cpp_headers/nvme_zns.o 00:33:17.156 CXX test/cpp_headers/nvmf.o 00:33:17.156 CC examples/nvme/nvme_manage/nvme_manage.o 00:33:18.535 CXX test/cpp_headers/nvmf_cmd.o 00:33:20.435 LINK nvme_manage 00:33:20.435 CXX test/cpp_headers/nvmf_fc_spec.o 00:33:22.333 CXX test/cpp_headers/nvmf_spec.o 00:33:24.235 CXX test/cpp_headers/nvmf_transport.o 00:33:25.171 CC examples/thread/thread/thread_ex.o 00:33:26.112 CXX test/cpp_headers/opal.o 00:33:27.508 LINK thread 00:33:27.766 CXX test/cpp_headers/opal_spec.o 00:33:29.667 CXX test/cpp_headers/pci_ids.o 00:33:31.616 CXX test/cpp_headers/pipe.o 00:33:32.990 CXX test/cpp_headers/queue.o 00:33:33.248 CXX test/cpp_headers/reduce.o 00:33:35.149 CXX test/cpp_headers/rpc.o 00:33:37.049 CXX test/cpp_headers/scheduler.o 00:33:38.950 CXX test/cpp_headers/scsi.o 00:33:40.851 CXX test/cpp_headers/scsi_spec.o 00:33:42.223 CXX test/cpp_headers/sock.o 00:33:44.125 CXX test/cpp_headers/stdinc.o 00:33:45.500 CXX test/cpp_headers/string.o 00:33:46.874 CXX test/cpp_headers/thread.o 00:33:48.796 CXX test/cpp_headers/trace.o 00:33:50.173 CXX test/cpp_headers/trace_parser.o 00:33:52.077 CXX test/cpp_headers/tree.o 00:33:52.077 CXX test/cpp_headers/ublk.o 00:33:53.982 CXX test/cpp_headers/util.o 00:33:55.887 CXX test/cpp_headers/uuid.o 00:33:56.454 CC examples/nvme/arbitration/arbitration.o 00:33:57.390 CXX test/cpp_headers/version.o 00:33:57.390 CXX test/cpp_headers/vfio_user_pci.o 00:33:58.767 LINK arbitration 00:33:58.767 CXX test/cpp_headers/vfio_user_spec.o 00:34:00.144 CXX test/cpp_headers/vhost.o 00:34:02.047 CXX test/cpp_headers/vmd.o 00:34:03.438 CXX test/cpp_headers/xor.o 00:34:04.850 CXX test/cpp_headers/zipf.o 00:34:06.834 CC test/env/mem_callbacks/mem_callbacks.o 00:34:07.092 CC test/event/event_perf/event_perf.o 00:34:08.469 LINK event_perf 00:34:08.728 CC examples/idxd/perf/perf.o 00:34:10.634 LINK idxd_perf 00:34:13.167 LINK mem_callbacks 
00:34:18.440 CC test/lvol/esnap/esnap.o 00:34:18.440 CC test/env/vtophys/vtophys.o 00:34:19.008 LINK vtophys 00:34:21.542 CC test/nvme/aer/aer.o 00:34:22.918 LINK aer 00:34:24.316 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:34:25.246 LINK env_dpdk_post_init 00:34:25.812 CC examples/nvme/hotplug/hotplug.o 00:34:26.071 CC test/event/reactor/reactor.o 00:34:27.006 LINK reactor 00:34:27.006 LINK hotplug 00:34:31.193 CC test/event/reactor_perf/reactor_perf.o 00:34:32.130 LINK reactor_perf 00:34:33.506 CC examples/nvme/cmb_copy/cmb_copy.o 00:34:34.881 LINK cmb_copy 00:34:35.140 LINK esnap 00:34:50.116 CC test/nvme/reset/reset.o 00:34:52.022 LINK reset 00:34:57.295 CC test/event/app_repeat/app_repeat.o 00:34:58.232 LINK app_repeat 00:35:03.548 CC examples/nvme/abort/abort.o 00:35:05.452 LINK abort 00:35:06.018 CC test/env/memory/memory_ut.o 00:35:07.920 CC test/nvme/sgl/sgl.o 00:35:09.296 LINK sgl 00:35:11.828 LINK memory_ut 00:35:14.360 CC examples/interrupt_tgt/interrupt_tgt.o 00:35:14.619 LINK interrupt_tgt 00:35:14.877 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:35:15.443 CC test/rpc_client/rpc_client_test.o 00:35:16.009 LINK pmr_persistence 00:35:16.268 CC test/nvme/e2edp/nvme_dp.o 00:35:16.526 LINK rpc_client_test 00:35:17.460 LINK nvme_dp 00:35:20.787 CC test/env/pci/pci_ut.o 00:35:23.319 LINK pci_ut 00:35:27.500 CC test/nvme/overhead/overhead.o 00:35:30.032 LINK overhead 00:35:34.220 CC test/thread/poller_perf/poller_perf.o 00:35:34.220 CC test/thread/lock/spdk_lock.o 00:35:34.787 LINK poller_perf 00:35:36.688 CC test/event/scheduler/scheduler.o 00:35:36.946 CC test/nvme/err_injection/err_injection.o 00:35:37.512 LINK scheduler 00:35:38.078 LINK err_injection 00:35:38.337 CC test/nvme/startup/startup.o 00:35:39.711 LINK startup 00:35:39.711 LINK spdk_lock 00:35:40.646 CC test/nvme/reserve/reserve.o 00:35:42.022 LINK reserve 00:35:52.024 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:35:52.283 LINK histogram_ut 00:35:52.541 CC test/nvme/simple_copy/simple_copy.o 00:35:53.916 LINK simple_copy 00:35:54.174 CC test/nvme/connect_stress/connect_stress.o 00:35:55.550 LINK connect_stress 00:35:57.453 CC test/unit/lib/accel/accel.c/accel_ut.o 00:36:00.738 CC test/nvme/boot_partition/boot_partition.o 00:36:01.673 LINK boot_partition 00:36:05.859 CC test/nvme/compliance/nvme_compliance.o 00:36:07.762 LINK nvme_compliance 00:36:07.762 LINK accel_ut 00:36:14.326 CC test/nvme/fused_ordering/fused_ordering.o 00:36:14.896 CC test/nvme/doorbell_aers/doorbell_aers.o 00:36:16.278 LINK fused_ordering 00:36:16.278 LINK doorbell_aers 00:36:16.845 CC test/nvme/fdp/fdp.o 00:36:19.374 LINK fdp 00:36:29.339 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:36:29.339 CC test/nvme/cuse/cuse.o 00:36:32.622 CC test/unit/lib/bdev/part.c/part_ut.o 00:36:32.622 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:36:35.155 LINK blob_bdev_ut 00:36:36.091 LINK cuse 00:36:39.377 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:36:39.959 LINK tree_ut 00:36:42.500 CC test/unit/lib/dma/dma.c/dma_ut.o 00:36:43.067 LINK part_ut 00:36:43.326 LINK dma_ut 00:36:43.326 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:36:43.586 CC test/unit/lib/blob/blob.c/blob_ut.o 00:36:44.522 LINK bdev_ut 00:36:44.522 CC test/unit/lib/event/app.c/app_ut.o 00:36:45.898 LINK app_ut 00:36:46.465 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:36:46.465 LINK blobfs_async_ut 00:36:47.402 LINK scsi_nvme_ut 00:36:47.402 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:36:47.660 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:36:49.039 LINK ioat_ut 00:36:49.607 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:36:50.542 LINK blobfs_bdev_ut 00:36:50.801 LINK blobfs_sync_ut 00:36:52.175 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:36:52.175 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:36:53.549 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:36:53.549 LINK gpt_ut 00:36:53.549 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:36:53.549 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:36:55.019 LINK conn_ut 00:36:55.019 LINK init_grp_ut 00:36:56.394 LINK reactor_ut 00:36:59.680 LINK iscsi_ut 00:36:59.680 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:36:59.680 LINK blob_ut 00:36:59.938 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:37:00.504 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:37:03.787 LINK vbdev_lvol_ut 00:37:03.787 CC test/unit/lib/iscsi/param.c/param_ut.o 00:37:04.722 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:37:05.657 LINK param_ut 00:37:05.915 LINK bdev_raid_ut 00:37:06.173 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:37:06.740 LINK portal_grp_ut 00:37:08.636 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:37:09.597 LINK bdev_ut 00:37:09.855 LINK jsonrpc_server_ut 00:37:10.790 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:37:10.790 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:37:11.724 LINK json_parse_ut 00:37:11.983 LINK bdev_zone_ut 00:37:13.360 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:37:13.619 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:37:13.879 LINK vbdev_zone_block_ut 00:37:16.411 LINK tgt_node_ut 00:37:16.978 CC test/unit/lib/log/log.c/log_ut.o 00:37:17.915 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:37:17.915 LINK log_ut 00:37:18.482 CC test/unit/lib/notify/notify.c/notify_ut.o 00:37:19.049 LINK notify_ut 00:37:19.049 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:37:19.308 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:37:19.308 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:37:19.308 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:37:19.567 LINK lvol_ut 00:37:19.567 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:37:19.825 LINK bdev_raid_sb_ut 00:37:19.825 LINK json_util_ut 00:37:19.825 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:37:19.825 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:37:19.825 LINK concat_ut 00:37:19.825 LINK bdev_nvme_ut 00:37:21.212 LINK json_write_ut 00:37:22.591 LINK nvme_ut 00:37:22.591 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:37:23.967 LINK raid1_ut 00:37:24.226 LINK ctrlr_ut 00:37:24.494 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:37:25.061 LINK tcp_ut 00:37:25.318 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:37:26.250 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:37:27.625 LINK raid5f_ut 00:37:28.560 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:37:29.495 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:37:29.754 LINK ctrlr_discovery_ut 00:37:29.754 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:37:30.012 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:37:30.271 LINK subsystem_ut 00:37:30.529 LINK dev_ut 00:37:31.464 LINK lun_ut 00:37:33.368 LINK nvme_ctrlr_cmd_ut 00:37:33.626 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:37:34.242 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:37:34.242 LINK ctrlr_bdev_ut 00:37:34.506 LINK nvme_ctrlr_ut 00:37:34.506 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:37:34.765 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:37:35.332 LINK nvmf_ut 00:37:35.332 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:37:35.899 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:37:36.158 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:37:36.416 LINK nvme_ctrlr_ocssd_cmd_ut 00:37:36.675 LINK scsi_ut 00:37:36.675 LINK nvme_ns_ut 00:37:36.934 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:37:37.501 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:37:37.501 LINK scsi_bdev_ut 00:37:37.501 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:37:37.501 LINK nvme_ns_cmd_ut 00:37:37.760 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:37:37.760 LINK scsi_pr_ut 00:37:38.019 LINK nvme_ns_ocssd_cmd_ut 00:37:38.019 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:37:38.278 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:37:38.846 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:37:39.104 LINK nvme_pcie_ut 00:37:39.104 LINK nvme_poll_group_ut 00:37:39.104 LINK nvme_quirks_ut 00:37:39.684 LINK nvme_qpair_ut 00:37:39.685 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:37:39.943 CC test/unit/lib/sock/sock.c/sock_ut.o 00:37:40.202 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:37:40.770 LINK rdma_ut 00:37:41.028 CC test/unit/lib/sock/posix.c/posix_ut.o 00:37:41.029 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:37:41.029 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:37:41.029 LINK sock_ut 00:37:41.029 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:37:41.963 LINK posix_ut 00:37:42.222 LINK transport_ut 00:37:43.157 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:37:43.157 LINK nvme_transport_ut 00:37:43.416 LINK nvme_io_msg_ut 00:37:43.416 LINK nvme_tcp_ut 00:37:43.981 LINK nvme_pcie_common_ut 00:37:45.881 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:37:45.881 CC test/unit/lib/thread/thread.c/thread_ut.o 00:37:46.140 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:37:46.398 LINK nvme_fabric_ut 00:37:46.398 CC test/unit/lib/util/base64.c/base64_ut.o 00:37:46.965 LINK base64_ut 00:37:46.965 LINK nvme_opal_ut 00:37:47.902 LINK thread_ut 00:37:47.902 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:37:47.902 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:37:48.160 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:37:48.160 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:37:48.160 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:37:48.160 LINK bit_array_ut 00:37:48.418 LINK nvme_rdma_ut 00:37:48.676 LINK pci_event_ut 00:37:48.676 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:37:48.676 LINK iobuf_ut 00:37:48.676 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:37:48.934 LINK subsystem_ut 00:37:49.192 LINK rpc_ut 00:37:49.451 LINK idxd_user_ut 00:37:50.386 LINK nvme_cuse_ut 00:37:50.386 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:37:50.644 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:37:50.902 LINK cpuset_ut 00:37:51.159 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:37:51.419 LINK idxd_ut 00:37:51.419 LINK crc16_ut 00:37:51.419 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:37:52.002 CC test/unit/lib/rdma/common.c/common_ut.o 00:37:52.002 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:37:52.002 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:37:52.002 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:37:52.002 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:37:52.263 CC test/unit/lib/util/dif.c/dif_ut.o 00:37:52.263 LINK crc32_ieee_ut 00:37:52.263 LINK crc64_ut 00:37:52.263 LINK crc32c_ut
00:37:52.263 LINK ftl_l2p_ut 00:37:52.263 LINK common_ut 00:37:52.522 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:37:52.522 CC test/unit/lib/util/iov.c/iov_ut.o 00:37:52.781 CC test/unit/lib/util/math.c/math_ut.o 00:37:52.781 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:37:52.781 CC test/unit/lib/util/string.c/string_ut.o 00:37:52.781 LINK iov_ut 00:37:52.781 LINK math_ut 00:37:52.781 LINK dif_ut 00:37:52.781 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:37:53.040 LINK pipe_ut 00:37:53.040 LINK string_ut 00:37:53.298 LINK vhost_ut 00:37:53.298 LINK ftl_io_ut 00:37:53.557 LINK ftl_band_ut 00:37:53.557 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:37:53.557 CC test/unit/lib/util/xor.c/xor_ut.o 00:37:53.815 LINK ftl_bitmap_ut 00:37:54.383 LINK xor_ut 00:37:55.317 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:37:55.318 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:37:55.885 LINK ftl_mempool_ut 00:37:55.885 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:37:55.885 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:37:56.143 LINK ftl_mngt_ut 00:37:56.710 LINK ftl_layout_upgrade_ut 00:37:56.710 LINK ftl_sb_ut 00:39:33.212 11:49:39 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:39:33.212 make[1]: Nothing to be done for 'clean'. 00:39:33.212 11:49:43 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:39:33.212 11:49:43 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:39:33.212 11:49:43 -- common/autotest_common.sh@10 -- $ set +x 00:39:33.212 11:49:43 -- spdk/autopackage.sh@48 -- $ timing_finish 00:39:33.212 11:49:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:33.212 11:49:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:33.212 11:49:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:33.212 + [[ -n 2526 ]] 00:39:33.212 + sudo kill 2526 00:39:33.221 [Pipeline] } 00:39:33.239 [Pipeline] // timeout 00:39:33.243 [Pipeline] } 00:39:33.260 [Pipeline] // stage 00:39:33.265 [Pipeline] } 00:39:33.278 [Pipeline] // catchError 00:39:33.287 [Pipeline] stage 00:39:33.289 [Pipeline] { (Stop VM) 00:39:33.301 [Pipeline] sh 00:39:33.581 + vagrant halt 00:39:36.891 ==> default: Halting domain... 00:39:42.179 [Pipeline] sh 00:39:42.458 + vagrant destroy -f 00:39:45.746 ==> default: Removing domain... 00:39:46.326 [Pipeline] sh 00:39:46.606 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output 00:39:46.615 [Pipeline] } 00:39:46.635 [Pipeline] // stage 00:39:46.641 [Pipeline] } 00:39:46.656 [Pipeline] // dir 00:39:46.662 [Pipeline] } 00:39:46.677 [Pipeline] // wrap 00:39:46.683 [Pipeline] } 00:39:46.696 [Pipeline] // catchError 00:39:46.706 [Pipeline] stage 00:39:46.708 [Pipeline] { (Epilogue) 00:39:46.721 [Pipeline] sh 00:39:47.003 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:05.102 [Pipeline] catchError 00:40:05.104 [Pipeline] { 00:40:05.117 [Pipeline] sh 00:40:05.398 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:05.657 Artifacts sizes are good 00:40:05.666 [Pipeline] } 00:40:05.683 [Pipeline] // catchError 00:40:05.698 [Pipeline] archiveArtifacts 00:40:05.708 Archiving artifacts 00:40:05.965 [Pipeline] cleanWs 00:40:05.977 [WS-CLEANUP] Deleting project workspace... 00:40:05.978 [WS-CLEANUP] Deferred wipeout is used... 
00:40:05.984 [WS-CLEANUP] done 00:40:05.986 [Pipeline] } 00:40:06.001 [Pipeline] // stage 00:40:06.006 [Pipeline] } 00:40:06.022 [Pipeline] // node 00:40:06.027 [Pipeline] End of Pipeline 00:40:06.069 Finished: SUCCESS